00:00:00.002 Started by upstream project "autotest-nightly" build number 4128 00:00:00.002 originally caused by: 00:00:00.002 Started by upstream project "nightly-trigger" build number 3490 00:00:00.002 originally caused by: 00:00:00.002 Started by timer 00:00:00.174 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.174 The recommended git tool is: git 00:00:00.175 using credential 00000000-0000-0000-0000-000000000002 00:00:00.176 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.223 Fetching changes from the remote Git repository 00:00:00.225 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.267 Using shallow fetch with depth 1 00:00:00.267 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.267 > git --version # timeout=10 00:00:00.304 > git --version # 'git version 2.39.2' 00:00:00.304 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.320 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.320 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:08.509 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:08.522 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:08.536 Checking out Revision 7510e71a2b3ec6fca98e4ec196065590f900d444 (FETCH_HEAD) 00:00:08.536 > git config core.sparsecheckout # timeout=10 00:00:08.547 > git read-tree -mu HEAD # timeout=10 00:00:08.567 > git checkout -f 7510e71a2b3ec6fca98e4ec196065590f900d444 # timeout=5 00:00:08.584 Commit message: "kid: add issue 3541" 00:00:08.585 > git rev-list --no-walk 7510e71a2b3ec6fca98e4ec196065590f900d444 # timeout=10 00:00:08.661 [Pipeline] Start of Pipeline 00:00:08.740 [Pipeline] library 00:00:08.741 Loading library shm_lib@master 00:00:08.741 Library shm_lib@master is cached. Copying from home. 00:00:08.756 [Pipeline] node 00:00:08.768 Running on VM-host-SM38 in /var/jenkins/workspace/nvme-vg-autotest 00:00:08.769 [Pipeline] { 00:00:08.776 [Pipeline] catchError 00:00:08.777 [Pipeline] { 00:00:08.785 [Pipeline] wrap 00:00:08.791 [Pipeline] { 00:00:08.796 [Pipeline] stage 00:00:08.797 [Pipeline] { (Prologue) 00:00:08.808 [Pipeline] echo 00:00:08.808 Node: VM-host-SM38 00:00:08.813 [Pipeline] cleanWs 00:00:08.821 [WS-CLEANUP] Deleting project workspace... 00:00:08.821 [WS-CLEANUP] Deferred wipeout is used... 
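The block above is the standard Jenkins shallow checkout of the jbp build-pool repo: a depth-1 fetch of refs/heads/master followed by a detached checkout of the pinned revision. A minimal standalone reproduction of that sequence (credential, proxy, and timeout handling omitted; the URL and revision are the ones in the log):

    # Shallow-fetch the build-pool repo and pin the revision seen above.
    repo=https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
    rev=7510e71a2b3ec6fca98e4ec196065590f900d444
    git init jbp && cd jbp
    git fetch --tags --force --depth=1 -- "$repo" refs/heads/master
    git checkout -f "$rev"

The depth-1 fetch keeps the clone small, and the job then checks out the exact commit it resolved from FETCH_HEAD, so later steps run against a pinned revision rather than whatever master becomes.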
00:00:08.826 [WS-CLEANUP] done 00:00:08.995 [Pipeline] setCustomBuildProperty 00:00:09.076 [Pipeline] httpRequest 00:00:09.416 [Pipeline] echo 00:00:09.417 Sorcerer 10.211.164.101 is alive 00:00:09.423 [Pipeline] retry 00:00:09.424 [Pipeline] { 00:00:09.437 [Pipeline] httpRequest 00:00:09.442 HttpMethod: GET 00:00:09.443 URL: http://10.211.164.101/packages/jbp_7510e71a2b3ec6fca98e4ec196065590f900d444.tar.gz 00:00:09.443 Sending request to url: http://10.211.164.101/packages/jbp_7510e71a2b3ec6fca98e4ec196065590f900d444.tar.gz 00:00:09.444 Response Code: HTTP/1.1 200 OK 00:00:09.445 Success: Status code 200 is in the accepted range: 200,404 00:00:09.445 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_7510e71a2b3ec6fca98e4ec196065590f900d444.tar.gz 00:00:10.498 [Pipeline] } 00:00:10.513 [Pipeline] // retry 00:00:10.519 [Pipeline] sh 00:00:10.807 + tar --no-same-owner -xf jbp_7510e71a2b3ec6fca98e4ec196065590f900d444.tar.gz 00:00:10.825 [Pipeline] httpRequest 00:00:11.197 [Pipeline] echo 00:00:11.199 Sorcerer 10.211.164.101 is alive 00:00:11.208 [Pipeline] retry 00:00:11.209 [Pipeline] { 00:00:11.222 [Pipeline] httpRequest 00:00:11.227 HttpMethod: GET 00:00:11.228 URL: http://10.211.164.101/packages/spdk_09cc66129742c68eb8ce46c42225a27c3c933a14.tar.gz 00:00:11.228 Sending request to url: http://10.211.164.101/packages/spdk_09cc66129742c68eb8ce46c42225a27c3c933a14.tar.gz 00:00:11.251 Response Code: HTTP/1.1 200 OK 00:00:11.252 Success: Status code 200 is in the accepted range: 200,404 00:00:11.252 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_09cc66129742c68eb8ce46c42225a27c3c933a14.tar.gz 00:00:53.437 [Pipeline] } 00:00:53.456 [Pipeline] // retry 00:00:53.465 [Pipeline] sh 00:00:53.753 + tar --no-same-owner -xf spdk_09cc66129742c68eb8ce46c42225a27c3c933a14.tar.gz 00:00:57.069 [Pipeline] sh 00:00:57.348 + git -C spdk log --oneline -n5 00:00:57.348 09cc66129 test/unit: add mixed busy/idle mock poller function in reactor_ut 00:00:57.348 a67b3561a dpdk: update submodule to include alarm_cancel fix 00:00:57.348 43f6d3385 nvmf: remove use of STAILQ for last_wqe events 00:00:57.348 9645421c5 nvmf: rename nvmf_rdma_qpair_process_ibv_event() 00:00:57.348 e6da32ee1 nvmf: rename nvmf_rdma_send_qpair_async_event() 00:00:57.367 [Pipeline] writeFile 00:00:57.383 [Pipeline] sh 00:00:57.663 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:00:57.675 [Pipeline] sh 00:00:57.955 + cat autorun-spdk.conf 00:00:57.956 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:57.956 SPDK_TEST_NVME=1 00:00:57.956 SPDK_TEST_FTL=1 00:00:57.956 SPDK_TEST_ISAL=1 00:00:57.956 SPDK_RUN_ASAN=1 00:00:57.956 SPDK_RUN_UBSAN=1 00:00:57.956 SPDK_TEST_XNVME=1 00:00:57.956 SPDK_TEST_NVME_FDP=1 00:00:57.956 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:57.965 RUN_NIGHTLY=1 00:00:57.967 [Pipeline] } 00:00:57.982 [Pipeline] // stage 00:00:57.997 [Pipeline] stage 00:00:58.000 [Pipeline] { (Run VM) 00:00:58.014 [Pipeline] sh 00:00:58.300 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:00:58.300 + echo 'Start stage prepare_nvme.sh' 00:00:58.300 Start stage prepare_nvme.sh 00:00:58.300 + [[ -n 6 ]] 00:00:58.300 + disk_prefix=ex6 00:00:58.300 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]] 00:00:58.300 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]] 00:00:58.300 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf 00:00:58.300 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:58.300 ++ SPDK_TEST_NVME=1 00:00:58.300 ++ SPDK_TEST_FTL=1 00:00:58.300 ++ SPDK_TEST_ISAL=1 
00:00:58.300 ++ SPDK_RUN_ASAN=1 00:00:58.300 ++ SPDK_RUN_UBSAN=1 00:00:58.300 ++ SPDK_TEST_XNVME=1 00:00:58.300 ++ SPDK_TEST_NVME_FDP=1 00:00:58.300 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:58.300 ++ RUN_NIGHTLY=1 00:00:58.300 + cd /var/jenkins/workspace/nvme-vg-autotest 00:00:58.300 + nvme_files=() 00:00:58.300 + declare -A nvme_files 00:00:58.300 + backend_dir=/var/lib/libvirt/images/backends 00:00:58.300 + nvme_files['nvme.img']=5G 00:00:58.300 + nvme_files['nvme-cmb.img']=5G 00:00:58.300 + nvme_files['nvme-multi0.img']=4G 00:00:58.300 + nvme_files['nvme-multi1.img']=4G 00:00:58.300 + nvme_files['nvme-multi2.img']=4G 00:00:58.300 + nvme_files['nvme-openstack.img']=8G 00:00:58.300 + nvme_files['nvme-zns.img']=5G 00:00:58.300 + (( SPDK_TEST_NVME_PMR == 1 )) 00:00:58.300 + (( SPDK_TEST_FTL == 1 )) 00:00:58.300 + nvme_files["nvme-ftl.img"]=6G 00:00:58.300 + (( SPDK_TEST_NVME_FDP == 1 )) 00:00:58.300 + nvme_files["nvme-fdp.img"]=1G 00:00:58.300 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:00:58.300 + for nvme in "${!nvme_files[@]}" 00:00:58.300 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi2.img -s 4G 00:00:58.300 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:00:58.300 + for nvme in "${!nvme_files[@]}" 00:00:58.300 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-ftl.img -s 6G 00:00:58.300 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:00:58.300 + for nvme in "${!nvme_files[@]}" 00:00:58.300 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-cmb.img -s 5G 00:00:58.300 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:00:58.300 + for nvme in "${!nvme_files[@]}" 00:00:58.300 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-openstack.img -s 8G 00:00:58.300 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:00:58.300 + for nvme in "${!nvme_files[@]}" 00:00:58.300 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-zns.img -s 5G 00:00:58.300 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:00:58.300 + for nvme in "${!nvme_files[@]}" 00:00:58.300 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi1.img -s 4G 00:00:58.562 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:00:58.562 + for nvme in "${!nvme_files[@]}" 00:00:58.562 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi0.img -s 4G 00:00:58.562 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:00:58.562 + for nvme in "${!nvme_files[@]}" 00:00:58.562 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-fdp.img -s 1G 00:00:58.562 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:00:58.562 + for nvme in "${!nvme_files[@]}" 00:00:58.562 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme.img -s 5G 00:00:58.823 
Formatting '/var/lib/libvirt/images/backends/ex6-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:00:58.823 ++ sudo grep -rl ex6-nvme.img /etc/libvirt/qemu 00:00:58.823 + echo 'End stage prepare_nvme.sh' 00:00:58.823 End stage prepare_nvme.sh 00:00:58.836 [Pipeline] sh 00:00:59.121 + DISTRO=fedora39 00:00:59.121 + CPUS=10 00:00:59.121 + RAM=12288 00:00:59.121 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:00:59.121 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex6-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex6-nvme.img -b /var/lib/libvirt/images/backends/ex6-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex6-nvme-multi1.img:/var/lib/libvirt/images/backends/ex6-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex6-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39 00:00:59.121 00:00:59.121 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant 00:00:59.121 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk 00:00:59.121 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest 00:00:59.121 HELP=0 00:00:59.121 DRY_RUN=0 00:00:59.121 NVME_FILE=/var/lib/libvirt/images/backends/ex6-nvme-ftl.img,/var/lib/libvirt/images/backends/ex6-nvme.img,/var/lib/libvirt/images/backends/ex6-nvme-multi0.img,/var/lib/libvirt/images/backends/ex6-nvme-fdp.img, 00:00:59.121 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:00:59.121 NVME_AUTO_CREATE=0 00:00:59.121 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex6-nvme-multi1.img:/var/lib/libvirt/images/backends/ex6-nvme-multi2.img,, 00:00:59.121 NVME_CMB=,,,, 00:00:59.121 NVME_PMR=,,,, 00:00:59.121 NVME_ZNS=,,,, 00:00:59.121 NVME_MS=true,,,, 00:00:59.121 NVME_FDP=,,,on, 00:00:59.121 SPDK_VAGRANT_DISTRO=fedora39 00:00:59.121 SPDK_VAGRANT_VMCPU=10 00:00:59.121 SPDK_VAGRANT_VMRAM=12288 00:00:59.121 SPDK_VAGRANT_PROVIDER=libvirt 00:00:59.121 SPDK_VAGRANT_HTTP_PROXY= 00:00:59.121 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:00:59.121 SPDK_OPENSTACK_NETWORK=0 00:00:59.121 VAGRANT_PACKAGE_BOX=0 00:00:59.121 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:00:59.121 FORCE_DISTRO=true 00:00:59.121 VAGRANT_BOX_VERSION= 00:00:59.121 EXTRA_VAGRANTFILES= 00:00:59.121 NIC_MODEL=e1000 00:00:59.121 00:00:59.121 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt' 00:00:59.121 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest 00:01:01.663 Bringing machine 'default' up with 'libvirt' provider... 00:01:01.923 ==> default: Creating image (snapshot of base box volume). 00:01:01.923 ==> default: Creating domain with the following settings... 
00:01:01.923 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1727565709_0f0f38126ba3191c0eba 00:01:01.923 ==> default: -- Domain type: kvm 00:01:01.923 ==> default: -- Cpus: 10 00:01:01.923 ==> default: -- Feature: acpi 00:01:01.923 ==> default: -- Feature: apic 00:01:01.923 ==> default: -- Feature: pae 00:01:01.923 ==> default: -- Memory: 12288M 00:01:01.923 ==> default: -- Memory Backing: hugepages: 00:01:01.923 ==> default: -- Management MAC: 00:01:01.923 ==> default: -- Loader: 00:01:01.923 ==> default: -- Nvram: 00:01:01.923 ==> default: -- Base box: spdk/fedora39 00:01:01.923 ==> default: -- Storage pool: default 00:01:01.923 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1727565709_0f0f38126ba3191c0eba.img (20G) 00:01:01.923 ==> default: -- Volume Cache: default 00:01:01.923 ==> default: -- Kernel: 00:01:01.923 ==> default: -- Initrd: 00:01:01.923 ==> default: -- Graphics Type: vnc 00:01:01.923 ==> default: -- Graphics Port: -1 00:01:01.923 ==> default: -- Graphics IP: 127.0.0.1 00:01:01.923 ==> default: -- Graphics Password: Not defined 00:01:01.923 ==> default: -- Video Type: cirrus 00:01:01.923 ==> default: -- Video VRAM: 9216 00:01:01.923 ==> default: -- Sound Type: 00:01:01.923 ==> default: -- Keymap: en-us 00:01:01.923 ==> default: -- TPM Path: 00:01:01.923 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:01.923 ==> default: -- Command line args: 00:01:01.923 ==> default: -> value=-device, 00:01:01.923 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:01.923 ==> default: -> value=-drive, 00:01:01.923 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-ftl.img,if=none,id=nvme-0-drive0, 00:01:01.923 ==> default: -> value=-device, 00:01:01.923 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64, 00:01:01.923 ==> default: -> value=-device, 00:01:01.923 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:01.923 ==> default: -> value=-drive, 00:01:01.923 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme.img,if=none,id=nvme-1-drive0, 00:01:01.923 ==> default: -> value=-device, 00:01:01.923 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:01.923 ==> default: -> value=-device, 00:01:01.923 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12, 00:01:01.923 ==> default: -> value=-drive, 00:01:01.923 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi0.img,if=none,id=nvme-2-drive0, 00:01:01.923 ==> default: -> value=-device, 00:01:01.923 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:01.923 ==> default: -> value=-drive, 00:01:01.924 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi1.img,if=none,id=nvme-2-drive1, 00:01:01.924 ==> default: -> value=-device, 00:01:01.924 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:01.924 ==> default: -> value=-drive, 00:01:01.924 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi2.img,if=none,id=nvme-2-drive2, 00:01:01.924 ==> default: -> value=-device, 00:01:01.924 ==> default: -> 
value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:01.924 ==> default: -> value=-device, 00:01:01.924 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 00:01:01.924 ==> default: -> value=-device, 00:01:01.924 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3, 00:01:01.924 ==> default: -> value=-drive, 00:01:01.924 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-fdp.img,if=none,id=nvme-3-drive0, 00:01:01.924 ==> default: -> value=-device, 00:01:01.924 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:02.184 ==> default: Creating shared folders metadata... 00:01:02.184 ==> default: Starting domain. 00:01:03.560 ==> default: Waiting for domain to get an IP address... 00:01:21.674 ==> default: Waiting for SSH to become available... 00:01:21.674 ==> default: Configuring and enabling network interfaces... 00:01:25.874 default: SSH address: 192.168.121.83:22 00:01:25.874 default: SSH username: vagrant 00:01:25.874 default: SSH auth method: private key 00:01:27.786 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:35.932 ==> default: Mounting SSHFS shared folder... 00:01:37.848 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:01:37.849 ==> default: Checking Mount.. 00:01:38.794 ==> default: Folder Successfully Mounted! 00:01:38.794 00:01:38.794 SUCCESS! 00:01:38.794 00:01:38.794 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:01:38.794 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:38.794 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:01:38.794 00:01:38.804 [Pipeline] } 00:01:38.818 [Pipeline] // stage 00:01:38.827 [Pipeline] dir 00:01:38.827 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt 00:01:38.829 [Pipeline] { 00:01:38.840 [Pipeline] catchError 00:01:38.842 [Pipeline] { 00:01:38.851 [Pipeline] sh 00:01:39.134 + vagrant ssh-config --host vagrant 00:01:39.134 + sed -ne '/^Host/,$p' 00:01:39.134 + tee ssh_conf 00:01:42.449 Host vagrant 00:01:42.449 HostName 192.168.121.83 00:01:42.449 User vagrant 00:01:42.449 Port 22 00:01:42.449 UserKnownHostsFile /dev/null 00:01:42.449 StrictHostKeyChecking no 00:01:42.449 PasswordAuthentication no 00:01:42.449 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:01:42.449 IdentitiesOnly yes 00:01:42.449 LogLevel FATAL 00:01:42.449 ForwardAgent yes 00:01:42.449 ForwardX11 yes 00:01:42.449 00:01:42.464 [Pipeline] withEnv 00:01:42.466 [Pipeline] { 00:01:42.479 [Pipeline] sh 00:01:42.765 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash 00:01:42.765 source /etc/os-release 00:01:42.765 [[ -e /image.version ]] && img=$(< /image.version) 00:01:42.765 # Minimal, systemd-like check. 
00:01:42.765 if [[ -e /.dockerenv ]]; then 00:01:42.765 # Clear garbage from the node'\''s name: 00:01:42.765 # agt-er_autotest_547-896 -> autotest_547-896 00:01:42.765 # $HOSTNAME is the actual container id 00:01:42.765 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:42.765 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:42.765 # We can assume this is a mount from a host where container is running, 00:01:42.765 # so fetch its hostname to easily identify the target swarm worker. 00:01:42.765 container="$(< /etc/hostname) ($agent)" 00:01:42.765 else 00:01:42.765 # Fallback 00:01:42.765 container=$agent 00:01:42.765 fi 00:01:42.765 fi 00:01:42.765 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:42.765 ' 00:01:43.040 [Pipeline] } 00:01:43.056 [Pipeline] // withEnv 00:01:43.064 [Pipeline] setCustomBuildProperty 00:01:43.079 [Pipeline] stage 00:01:43.081 [Pipeline] { (Tests) 00:01:43.099 [Pipeline] sh 00:01:43.387 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:43.667 [Pipeline] sh 00:01:43.956 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:44.235 [Pipeline] timeout 00:01:44.236 Timeout set to expire in 50 min 00:01:44.238 [Pipeline] { 00:01:44.251 [Pipeline] sh 00:01:44.535 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard' 00:01:45.107 HEAD is now at 09cc66129 test/unit: add mixed busy/idle mock poller function in reactor_ut 00:01:45.121 [Pipeline] sh 00:01:45.404 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo' 00:01:45.680 [Pipeline] sh 00:01:45.964 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:46.241 [Pipeline] sh 00:01:46.524 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo' 00:01:46.784 ++ readlink -f spdk_repo 00:01:46.784 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:46.784 + [[ -n /home/vagrant/spdk_repo ]] 00:01:46.784 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:46.784 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:46.784 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:46.784 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:01:46.784 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:46.784 + [[ nvme-vg-autotest == pkgdep-* ]] 00:01:46.784 + cd /home/vagrant/spdk_repo 00:01:46.784 + source /etc/os-release 00:01:46.784 ++ NAME='Fedora Linux' 00:01:46.784 ++ VERSION='39 (Cloud Edition)' 00:01:46.784 ++ ID=fedora 00:01:46.784 ++ VERSION_ID=39 00:01:46.784 ++ VERSION_CODENAME= 00:01:46.784 ++ PLATFORM_ID=platform:f39 00:01:46.784 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:46.784 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:46.784 ++ LOGO=fedora-logo-icon 00:01:46.784 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:46.784 ++ HOME_URL=https://fedoraproject.org/ 00:01:46.784 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:46.784 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:46.784 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:46.784 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:46.784 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:46.784 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:46.784 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:46.784 ++ SUPPORT_END=2024-11-12 00:01:46.784 ++ VARIANT='Cloud Edition' 00:01:46.784 ++ VARIANT_ID=cloud 00:01:46.784 + uname -a 00:01:46.784 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:46.784 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:47.044 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:01:47.312 Hugepages 00:01:47.312 node hugesize free / total 00:01:47.312 node0 1048576kB 0 / 0 00:01:47.312 node0 2048kB 0 / 0 00:01:47.312 00:01:47.312 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:47.312 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:47.589 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:47.589 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:01:47.589 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:01:47.589 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:01:47.589 + rm -f /tmp/spdk-ld-path 00:01:47.589 + source autorun-spdk.conf 00:01:47.589 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:47.589 ++ SPDK_TEST_NVME=1 00:01:47.589 ++ SPDK_TEST_FTL=1 00:01:47.589 ++ SPDK_TEST_ISAL=1 00:01:47.589 ++ SPDK_RUN_ASAN=1 00:01:47.589 ++ SPDK_RUN_UBSAN=1 00:01:47.589 ++ SPDK_TEST_XNVME=1 00:01:47.589 ++ SPDK_TEST_NVME_FDP=1 00:01:47.589 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:47.589 ++ RUN_NIGHTLY=1 00:01:47.589 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:47.589 + [[ -n '' ]] 00:01:47.589 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:47.589 + for M in /var/spdk/build-*-manifest.txt 00:01:47.589 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:47.589 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:47.589 + for M in /var/spdk/build-*-manifest.txt 00:01:47.589 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:47.589 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:47.589 + for M in /var/spdk/build-*-manifest.txt 00:01:47.589 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:47.589 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:47.589 ++ uname 00:01:47.589 + [[ Linux == \L\i\n\u\x ]] 00:01:47.589 + sudo dmesg -T 00:01:47.589 + sudo dmesg --clear 00:01:47.589 + dmesg_pid=5043 00:01:47.589 
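The setup.sh status table above lists the four emulated controllers (nvme0 through nvme3) corresponding to the -device nvme/-device nvme-ns arguments from the domain creation step: a metadata-enabled FTL disk (serial 12340), a plain disk (12341), a three-namespace disk (12342), and the FDP disk (12343) attached to an nvme-subsys with fdp=on. A minimal sketch of one such controller outside vagrant/libvirt (the machine, memory, and display flags are assumptions; the drive path, serial, and namespace properties are taken from the log):

    # One emulated NVMe controller with a single 4K-block namespace,
    # backed by the raw image allocated in prepare_nvme.sh.
    qemu-system-x86_64 \
        -machine q35,accel=kvm -m 1024 -nographic \
        -device nvme,id=nvme-1,serial=12341 \
        -drive format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme.img,if=none,id=nvme-1-drive0 \
        -device nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,logical_block_size=4096,physical_block_size=4096

For the FDP case the log additionally defines an nvme-subsys device with fdp=on plus fdp.runs/fdp.nrg/fdp.nruh parameters and binds the controller to it via subsys=, which is what exposes Flexible Data Placement to the guest for SPDK_TEST_NVME_FDP.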
+ [[ Fedora Linux == FreeBSD ]] 00:01:47.589 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:47.589 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:47.589 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:47.589 + [[ -x /usr/src/fio-static/fio ]] 00:01:47.589 + export FIO_BIN=/usr/src/fio-static/fio 00:01:47.589 + FIO_BIN=/usr/src/fio-static/fio 00:01:47.589 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:47.589 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:47.589 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:47.589 + sudo dmesg -Tw 00:01:47.589 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:47.589 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:47.589 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:47.589 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:47.589 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:47.589 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:47.589 Test configuration: 00:01:47.589 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:47.589 SPDK_TEST_NVME=1 00:01:47.589 SPDK_TEST_FTL=1 00:01:47.589 SPDK_TEST_ISAL=1 00:01:47.589 SPDK_RUN_ASAN=1 00:01:47.589 SPDK_RUN_UBSAN=1 00:01:47.589 SPDK_TEST_XNVME=1 00:01:47.589 SPDK_TEST_NVME_FDP=1 00:01:47.589 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:47.850 RUN_NIGHTLY=1 23:22:35 -- common/autotest_common.sh@1680 -- $ [[ n == y ]] 00:01:47.850 23:22:35 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:47.850 23:22:35 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:47.850 23:22:35 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:47.850 23:22:35 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:47.850 23:22:35 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:47.850 23:22:35 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:47.850 23:22:35 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:47.850 23:22:35 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:47.850 23:22:35 -- paths/export.sh@5 -- $ export PATH 00:01:47.851 23:22:35 -- paths/export.sh@6 -- $ echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:47.851 23:22:35 -- common/autobuild_common.sh@478 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:01:47.851 23:22:35 -- common/autobuild_common.sh@479 -- $ date +%s 00:01:47.851 23:22:35 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1727565755.XXXXXX 00:01:47.851 23:22:35 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1727565755.kBH4Lu 00:01:47.851 23:22:35 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]] 00:01:47.851 23:22:35 -- common/autobuild_common.sh@485 -- $ '[' -n '' ']' 00:01:47.851 23:22:35 -- common/autobuild_common.sh@488 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:01:47.851 23:22:35 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:47.851 23:22:35 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:47.851 23:22:35 -- common/autobuild_common.sh@495 -- $ get_config_params 00:01:47.851 23:22:35 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:01:47.851 23:22:35 -- common/autotest_common.sh@10 -- $ set +x 00:01:47.851 23:22:35 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:01:47.851 23:22:35 -- common/autobuild_common.sh@497 -- $ start_monitor_resources 00:01:47.851 23:22:35 -- pm/common@17 -- $ local monitor 00:01:47.851 23:22:35 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:47.851 23:22:35 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:47.851 23:22:35 -- pm/common@25 -- $ sleep 1 00:01:47.851 23:22:35 -- pm/common@21 -- $ date +%s 00:01:47.851 23:22:35 -- pm/common@21 -- $ date +%s 00:01:47.851 23:22:35 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1727565755 00:01:47.851 23:22:35 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1727565755 00:01:47.851 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1727565755_collect-vmstat.pm.log 00:01:47.851 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1727565755_collect-cpu-load.pm.log 00:01:48.794 23:22:36 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT 00:01:48.794 23:22:36 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:48.794 23:22:36 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:48.794 23:22:36 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:48.794 23:22:36 -- spdk/autobuild.sh@16 -- $ date -u 00:01:48.794 Sat Sep 28 11:22:36 PM UTC 2024 00:01:48.794 23:22:36 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:48.794 v25.01-pre-17-g09cc66129 
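Before building, autobuild creates a timestamped scratch workspace (mktemp -dt spdk_<epoch>.XXXXXX) and launches the collect-cpu-load and collect-vmstat monitors in the background, with teardown deferred to the EXIT trap in the next step. A simplified sketch of the pattern with a generic collector standing in for the scripts/perf/pm tools (the function bodies here are placeholders, not the real implementations):

    # Per-run scratch workspace named after the epoch timestamp, as above.
    ws=$(mktemp -dt "spdk_$(date +%s).XXXXXX")

    # Background resource monitor with guaranteed cleanup on exit.
    start_monitors() { vmstat 1 > "$ws/vmstat.log" & echo $! > "$ws/vmstat.pid"; }
    stop_monitors()  { kill "$(cat "$ws/vmstat.pid")" 2>/dev/null || true; }
    trap stop_monitors EXIT
    start_monitors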
00:01:48.794 23:22:36 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:01:48.794 23:22:36 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:01:48.794 23:22:36 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:01:48.794 23:22:36 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:01:48.794 23:22:36 -- common/autotest_common.sh@10 -- $ set +x 00:01:48.794 ************************************ 00:01:48.794 START TEST asan 00:01:48.794 ************************************ 00:01:48.794 using asan 00:01:48.794 23:22:36 asan -- common/autotest_common.sh@1125 -- $ echo 'using asan' 00:01:48.794 00:01:48.794 real 0m0.000s 00:01:48.794 user 0m0.000s 00:01:48.794 sys 0m0.000s 00:01:48.794 ************************************ 00:01:48.794 END TEST asan 00:01:48.794 ************************************ 00:01:48.794 23:22:36 asan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:01:48.794 23:22:36 asan -- common/autotest_common.sh@10 -- $ set +x 00:01:48.794 23:22:36 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:48.794 23:22:36 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:48.794 23:22:36 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:01:48.794 23:22:36 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:01:48.794 23:22:36 -- common/autotest_common.sh@10 -- $ set +x 00:01:48.794 ************************************ 00:01:48.794 START TEST ubsan 00:01:48.794 ************************************ 00:01:48.794 using ubsan 00:01:48.794 23:22:36 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:01:48.794 00:01:48.794 real 0m0.000s 00:01:48.794 user 0m0.000s 00:01:48.794 sys 0m0.000s 00:01:48.794 23:22:36 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:01:48.794 ************************************ 00:01:48.794 END TEST ubsan 00:01:48.794 ************************************ 00:01:48.794 23:22:36 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:48.794 23:22:36 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:48.794 23:22:36 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:48.794 23:22:36 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:48.794 23:22:36 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:48.794 23:22:36 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:48.794 23:22:36 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:48.794 23:22:36 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:48.794 23:22:36 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:48.794 23:22:36 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared 00:01:49.055 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:01:49.055 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:49.315 Using 'verbs' RDMA provider 00:02:02.503 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:02:12.522 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:02:13.094 Creating mk/config.mk...done. 00:02:13.094 Creating mk/cc.flags.mk...done. 00:02:13.094 Type 'make' to build. 
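The asan/ubsan probes above, and the build step below, all go through the run_test helper, which brackets an arbitrary command with START TEST/END TEST banners and a timing summary. A simplified sketch of the observable behavior (the real helper in autotest_common.sh also performs the argument-count guard and xtrace toggling visible in the log):

    # Minimal banner-and-time wrapper approximating the START TEST / END TEST
    # framing above (the asterisk rows and xtrace control are omitted).
    run_test() {
        local name=$1; shift
        echo "START TEST $name"
        time "$@"
        echo "END TEST $name"
    }
    run_test ubsan echo 'using ubsan'

Because `time` is a shell keyword here, the real/user/sys lines in the log come for free, and the helper's exit status is that of the wrapped command.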
00:02:13.094 23:23:01 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:02:13.094 23:23:01 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:13.094 23:23:01 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:13.094 23:23:01 -- common/autotest_common.sh@10 -- $ set +x 00:02:13.094 ************************************ 00:02:13.094 START TEST make 00:02:13.094 ************************************ 00:02:13.094 23:23:01 make -- common/autotest_common.sh@1125 -- $ make -j10 00:02:13.356 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:02:13.356 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:02:13.356 meson setup builddir \ 00:02:13.356 -Dwith-libaio=enabled \ 00:02:13.356 -Dwith-liburing=enabled \ 00:02:13.356 -Dwith-libvfn=disabled \ 00:02:13.356 -Dwith-spdk=false && \ 00:02:13.356 meson compile -C builddir && \ 00:02:13.356 cd -) 00:02:13.356 make[1]: Nothing to be done for 'all'. 00:02:15.902 The Meson build system 00:02:15.902 Version: 1.5.0 00:02:15.902 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:02:15.902 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:02:15.902 Build type: native build 00:02:15.902 Project name: xnvme 00:02:15.902 Project version: 0.7.3 00:02:15.902 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:15.902 C linker for the host machine: cc ld.bfd 2.40-14 00:02:15.902 Host machine cpu family: x86_64 00:02:15.902 Host machine cpu: x86_64 00:02:15.902 Message: host_machine.system: linux 00:02:15.902 Compiler for C supports arguments -Wno-missing-braces: YES 00:02:15.902 Compiler for C supports arguments -Wno-cast-function-type: YES 00:02:15.902 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:15.902 Run-time dependency threads found: YES 00:02:15.902 Has header "setupapi.h" : NO 00:02:15.902 Has header "linux/blkzoned.h" : YES 00:02:15.902 Has header "linux/blkzoned.h" : YES (cached) 00:02:15.902 Has header "libaio.h" : YES 00:02:15.902 Library aio found: YES 00:02:15.902 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:15.902 Run-time dependency liburing found: YES 2.2 00:02:15.902 Dependency libvfn skipped: feature with-libvfn disabled 00:02:15.902 Run-time dependency appleframeworks found: NO (tried framework) 00:02:15.902 Run-time dependency appleframeworks found: NO (tried framework) 00:02:15.902 Configuring xnvme_config.h using configuration 00:02:15.902 Configuring xnvme.spec using configuration 00:02:15.902 Run-time dependency bash-completion found: YES 2.11 00:02:15.902 Message: Bash-completions: /usr/share/bash-completion/completions 00:02:15.902 Program cp found: YES (/usr/bin/cp) 00:02:15.902 Has header "winsock2.h" : NO 00:02:15.902 Has header "dbghelp.h" : NO 00:02:15.902 Library rpcrt4 found: NO 00:02:15.902 Library rt found: YES 00:02:15.902 Checking for function "clock_gettime" with dependency -lrt: YES 00:02:15.902 Found CMake: /usr/bin/cmake (3.27.7) 00:02:15.902 Run-time dependency _spdk found: NO (tried pkgconfig and cmake) 00:02:15.902 Run-time dependency wpdk found: NO (tried pkgconfig and cmake) 00:02:15.902 Run-time dependency spdk-win found: NO (tried pkgconfig and cmake) 00:02:15.902 Build targets in project: 32 00:02:15.902 00:02:15.902 xnvme 0.7.3 00:02:15.902 00:02:15.902 User defined options 00:02:15.902 with-libaio : enabled 00:02:15.902 with-liburing: enabled 00:02:15.902 with-libvfn : disabled 00:02:15.902 with-spdk : false 00:02:15.902 00:02:15.902 Found 
ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:16.162 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:02:16.162 [1/203] Generating toolbox/xnvme-driver-script with a custom command 00:02:16.162 [2/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_admin_shim.c.o 00:02:16.162 [3/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd.c.o 00:02:16.162 [4/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_async.c.o 00:02:16.162 [5/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_dev.c.o 00:02:16.162 [6/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_mem_posix.c.o 00:02:16.162 [7/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_nil.c.o 00:02:16.162 [8/203] Compiling C object lib/libxnvme.so.p/xnvme_adm.c.o 00:02:16.162 [9/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_nvme.c.o 00:02:16.162 [10/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_emu.c.o 00:02:16.162 [11/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_posix.c.o 00:02:16.162 [12/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux.c.o 00:02:16.424 [13/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_sync_psync.c.o 00:02:16.424 [14/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos.c.o 00:02:16.424 [15/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_thrpool.c.o 00:02:16.424 [16/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_admin.c.o 00:02:16.424 [17/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_hugepage.c.o 00:02:16.424 [18/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_libaio.c.o 00:02:16.424 [19/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_sync.c.o 00:02:16.424 [20/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_dev.c.o 00:02:16.424 [21/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_dev.c.o 00:02:16.424 [22/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_ucmd.c.o 00:02:16.424 [23/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_nvme.c.o 00:02:16.424 [24/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk.c.o 00:02:16.424 [25/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk.c.o 00:02:16.424 [26/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_liburing.c.o 00:02:16.424 [27/203] Compiling C object lib/libxnvme.so.p/xnvme_be_nosys.c.o 00:02:16.424 [28/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_admin.c.o 00:02:16.424 [29/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_dev.c.o 00:02:16.424 [30/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_block.c.o 00:02:16.424 [31/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_async.c.o 00:02:16.424 [32/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_admin.c.o 00:02:16.424 [33/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_dev.c.o 00:02:16.424 [34/203] Compiling C object lib/libxnvme.so.p/xnvme_be.c.o 00:02:16.424 [35/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio.c.o 00:02:16.424 [36/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_mem.c.o 00:02:16.424 [37/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_sync.c.o 00:02:16.424 [38/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_async.c.o 00:02:16.424 [39/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_admin.c.o 00:02:16.424 [40/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp.c.o 00:02:16.424 [41/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_sync.c.o 
00:02:16.424 [42/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_dev.c.o 00:02:16.424 [43/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows.c.o 00:02:16.424 [44/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_mem.c.o 00:02:16.424 [45/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_sync.c.o 00:02:16.424 [46/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_block.c.o 00:02:16.424 [47/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp_th.c.o 00:02:16.424 [48/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_dev.c.o 00:02:16.685 [49/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_ioring.c.o 00:02:16.685 [50/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_mem.c.o 00:02:16.685 [51/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_fs.c.o 00:02:16.685 [52/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_nvme.c.o 00:02:16.685 [53/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf_entries.c.o 00:02:16.685 [54/203] Compiling C object lib/libxnvme.so.p/xnvme_ident.c.o 00:02:16.685 [55/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf.c.o 00:02:16.685 [56/203] Compiling C object lib/libxnvme.so.p/xnvme_file.c.o 00:02:16.685 [57/203] Compiling C object lib/libxnvme.so.p/xnvme_req.c.o 00:02:16.685 [58/203] Compiling C object lib/libxnvme.so.p/xnvme_cmd.c.o 00:02:16.685 [59/203] Compiling C object lib/libxnvme.so.p/xnvme_geo.c.o 00:02:16.685 [60/203] Compiling C object lib/libxnvme.so.p/xnvme_lba.c.o 00:02:16.685 [61/203] Compiling C object lib/libxnvme.so.p/xnvme_dev.c.o 00:02:16.685 [62/203] Compiling C object lib/libxnvme.so.p/xnvme_nvm.c.o 00:02:16.685 [63/203] Compiling C object lib/libxnvme.so.p/xnvme_buf.c.o 00:02:16.685 [64/203] Compiling C object lib/libxnvme.so.p/xnvme_opts.c.o 00:02:16.685 [65/203] Compiling C object lib/libxnvme.so.p/xnvme_ver.c.o 00:02:16.685 [66/203] Compiling C object lib/libxnvme.so.p/xnvme_topology.c.o 00:02:16.685 [67/203] Compiling C object lib/libxnvme.so.p/xnvme_kvs.c.o 00:02:16.685 [68/203] Compiling C object lib/libxnvme.so.p/xnvme_queue.c.o 00:02:16.685 [69/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_nil.c.o 00:02:16.944 [70/203] Compiling C object lib/libxnvme.so.p/xnvme_spec_pp.c.o 00:02:16.944 [71/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_admin_shim.c.o 00:02:16.944 [72/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_mem_posix.c.o 00:02:16.944 [73/203] Compiling C object lib/libxnvme.a.p/xnvme_adm.c.o 00:02:16.944 [74/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_emu.c.o 00:02:16.944 [75/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd.c.o 00:02:16.944 [76/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_async.c.o 00:02:16.944 [77/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_posix.c.o 00:02:16.944 [78/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_sync_psync.c.o 00:02:16.944 [79/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_dev.c.o 00:02:16.944 [80/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_nvme.c.o 00:02:16.944 [81/203] Compiling C object lib/libxnvme.so.p/xnvme_znd.c.o 00:02:16.944 [82/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux.c.o 00:02:16.944 [83/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_thrpool.c.o 00:02:16.944 [84/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos.c.o 00:02:16.944 [85/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_ucmd.c.o 00:02:16.944 [86/203] Compiling C object 
lib/libxnvme.a.p/xnvme_be_macos_admin.c.o 00:02:16.944 [87/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_libaio.c.o 00:02:16.944 [88/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_hugepage.c.o 00:02:17.203 [89/203] Compiling C object lib/libxnvme.so.p/xnvme_cli.c.o 00:02:17.203 [90/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_dev.c.o 00:02:17.203 [91/203] Compiling C object lib/libxnvme.a.p/xnvme_be.c.o 00:02:17.203 [92/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_sync.c.o 00:02:17.203 [93/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_dev.c.o 00:02:17.203 [94/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_block.c.o 00:02:17.203 [95/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk.c.o 00:02:17.203 [96/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk.c.o 00:02:17.203 [97/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_nvme.c.o 00:02:17.203 [98/203] Compiling C object lib/libxnvme.a.p/xnvme_be_nosys.c.o 00:02:17.203 [99/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_admin.c.o 00:02:17.203 [100/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_admin.c.o 00:02:17.203 [101/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_dev.c.o 00:02:17.203 [102/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_async.c.o 00:02:17.203 [103/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_liburing.c.o 00:02:17.203 [104/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_dev.c.o 00:02:17.203 [105/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_mem.c.o 00:02:17.203 [106/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_sync.c.o 00:02:17.203 [107/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_sync.c.o 00:02:17.203 [108/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_admin.c.o 00:02:17.203 [109/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_dev.c.o 00:02:17.203 [110/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows.c.o 00:02:17.203 [111/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_block.c.o 00:02:17.203 [112/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio.c.o 00:02:17.203 [113/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_mem.c.o 00:02:17.203 [114/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_async.c.o 00:02:17.203 [115/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp.c.o 00:02:17.203 [116/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_sync.c.o 00:02:17.203 [117/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp_th.c.o 00:02:17.203 [118/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_fs.c.o 00:02:17.203 [119/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_ioring.c.o 00:02:17.203 [120/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_mem.c.o 00:02:17.464 [121/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_dev.c.o 00:02:17.464 [122/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_nvme.c.o 00:02:17.464 [123/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf_entries.c.o 00:02:17.464 [124/203] Compiling C object lib/libxnvme.a.p/xnvme_file.c.o 00:02:17.464 [125/203] Compiling C object lib/libxnvme.a.p/xnvme_geo.c.o 00:02:17.464 [126/203] Compiling C object lib/libxnvme.a.p/xnvme_ident.c.o 00:02:17.464 [127/203] Compiling C object lib/libxnvme.a.p/xnvme_cmd.c.o 00:02:17.464 [128/203] Compiling C object lib/libxnvme.a.p/xnvme_req.c.o 00:02:17.464 [129/203] Compiling C object lib/libxnvme.a.p/xnvme_dev.c.o 
00:02:17.464 [130/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf.c.o 00:02:17.464 [131/203] Compiling C object lib/libxnvme.a.p/xnvme_kvs.c.o 00:02:17.464 [132/203] Compiling C object lib/libxnvme.a.p/xnvme_lba.c.o 00:02:17.464 [133/203] Compiling C object lib/libxnvme.a.p/xnvme_opts.c.o 00:02:17.465 [134/203] Compiling C object lib/libxnvme.a.p/xnvme_buf.c.o 00:02:17.465 [135/203] Compiling C object lib/libxnvme.a.p/xnvme_ver.c.o 00:02:17.465 [136/203] Compiling C object lib/libxnvme.a.p/xnvme_nvm.c.o 00:02:17.465 [137/203] Compiling C object lib/libxnvme.a.p/xnvme_topology.c.o 00:02:17.465 [138/203] Compiling C object lib/libxnvme.a.p/xnvme_queue.c.o 00:02:17.465 [139/203] Compiling C object tests/xnvme_tests_cli.p/cli.c.o 00:02:17.465 [140/203] Compiling C object tests/xnvme_tests_buf.p/buf.c.o 00:02:17.465 [141/203] Compiling C object tests/xnvme_tests_async_intf.p/async_intf.c.o 00:02:17.465 [142/203] Compiling C object lib/libxnvme.a.p/xnvme_spec_pp.c.o 00:02:17.726 [143/203] Compiling C object lib/libxnvme.so.p/xnvme_spec.c.o 00:02:17.726 [144/203] Compiling C object tests/xnvme_tests_xnvme_file.p/xnvme_file.c.o 00:02:17.726 [145/203] Compiling C object tests/xnvme_tests_enum.p/enum.c.o 00:02:17.726 [146/203] Compiling C object tests/xnvme_tests_znd_append.p/znd_append.c.o 00:02:17.726 [147/203] Compiling C object lib/libxnvme.a.p/xnvme_cli.c.o 00:02:17.726 [148/203] Linking target lib/libxnvme.so 00:02:17.726 [149/203] Compiling C object tests/xnvme_tests_znd_explicit_open.p/znd_explicit_open.c.o 00:02:17.726 [150/203] Compiling C object tests/xnvme_tests_xnvme_cli.p/xnvme_cli.c.o 00:02:17.726 [151/203] Compiling C object lib/libxnvme.a.p/xnvme_znd.c.o 00:02:17.726 [152/203] Compiling C object tests/xnvme_tests_znd_state.p/znd_state.c.o 00:02:17.726 [153/203] Compiling C object tests/xnvme_tests_kvs.p/kvs.c.o 00:02:17.726 [154/203] Compiling C object tests/xnvme_tests_map.p/map.c.o 00:02:17.726 [155/203] Compiling C object tests/xnvme_tests_scc.p/scc.c.o 00:02:17.726 [156/203] Compiling C object tests/xnvme_tests_ioworker.p/ioworker.c.o 00:02:17.726 [157/203] Compiling C object tests/xnvme_tests_lblk.p/lblk.c.o 00:02:17.726 [158/203] Compiling C object examples/xnvme_dev.p/xnvme_dev.c.o 00:02:17.726 [159/203] Compiling C object tools/lblk.p/lblk.c.o 00:02:17.726 [160/203] Compiling C object examples/xnvme_enum.p/xnvme_enum.c.o 00:02:17.726 [161/203] Compiling C object tests/xnvme_tests_znd_zrwa.p/znd_zrwa.c.o 00:02:17.726 [162/203] Compiling C object examples/xnvme_hello.p/xnvme_hello.c.o 00:02:17.726 [163/203] Compiling C object tools/xdd.p/xdd.c.o 00:02:17.986 [164/203] Compiling C object examples/xnvme_io_async.p/xnvme_io_async.c.o 00:02:17.986 [165/203] Compiling C object examples/xnvme_single_async.p/xnvme_single_async.c.o 00:02:17.986 [166/203] Compiling C object examples/xnvme_single_sync.p/xnvme_single_sync.c.o 00:02:17.986 [167/203] Compiling C object examples/zoned_io_sync.p/zoned_io_sync.c.o 00:02:17.986 [168/203] Compiling C object tools/kvs.p/kvs.c.o 00:02:17.986 [169/203] Compiling C object tools/zoned.p/zoned.c.o 00:02:17.986 [170/203] Compiling C object examples/zoned_io_async.p/zoned_io_async.c.o 00:02:17.986 [171/203] Compiling C object tools/xnvme_file.p/xnvme_file.c.o 00:02:17.986 [172/203] Compiling C object tools/xnvme.p/xnvme.c.o 00:02:18.244 [173/203] Compiling C object lib/libxnvme.a.p/xnvme_spec.c.o 00:02:18.244 [174/203] Linking static target lib/libxnvme.a 00:02:18.244 [175/203] Linking target tests/xnvme_tests_async_intf 00:02:18.244 [176/203] 
Linking target tests/xnvme_tests_cli 00:02:18.244 [177/203] Linking target tests/xnvme_tests_xnvme_file 00:02:18.244 [178/203] Linking target tests/xnvme_tests_znd_append 00:02:18.244 [179/203] Linking target tests/xnvme_tests_ioworker 00:02:18.244 [180/203] Linking target tests/xnvme_tests_buf 00:02:18.244 [181/203] Linking target tests/xnvme_tests_enum 00:02:18.244 [182/203] Linking target tests/xnvme_tests_scc 00:02:18.244 [183/203] Linking target tests/xnvme_tests_xnvme_cli 00:02:18.244 [184/203] Linking target tests/xnvme_tests_znd_explicit_open 00:02:18.244 [185/203] Linking target tests/xnvme_tests_lblk 00:02:18.244 [186/203] Linking target tests/xnvme_tests_znd_state 00:02:18.244 [187/203] Linking target tests/xnvme_tests_kvs 00:02:18.244 [188/203] Linking target tests/xnvme_tests_map 00:02:18.244 [189/203] Linking target tools/xnvme 00:02:18.244 [190/203] Linking target tools/lblk 00:02:18.244 [191/203] Linking target tools/xnvme_file 00:02:18.244 [192/203] Linking target tools/xdd 00:02:18.244 [193/203] Linking target tests/xnvme_tests_znd_zrwa 00:02:18.244 [194/203] Linking target tools/zoned 00:02:18.244 [195/203] Linking target tools/kvs 00:02:18.244 [196/203] Linking target examples/xnvme_dev 00:02:18.244 [197/203] Linking target examples/xnvme_io_async 00:02:18.244 [198/203] Linking target examples/xnvme_single_async 00:02:18.244 [199/203] Linking target examples/xnvme_single_sync 00:02:18.244 [200/203] Linking target examples/zoned_io_async 00:02:18.244 [201/203] Linking target examples/xnvme_enum 00:02:18.244 [202/203] Linking target examples/xnvme_hello 00:02:18.244 [203/203] Linking target examples/zoned_io_sync 00:02:18.244 INFO: autodetecting backend as ninja 00:02:18.244 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:02:18.244 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:02:23.513 The Meson build system 00:02:23.513 Version: 1.5.0 00:02:23.513 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:23.513 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:23.513 Build type: native build 00:02:23.513 Program cat found: YES (/usr/bin/cat) 00:02:23.513 Project name: DPDK 00:02:23.513 Project version: 24.03.0 00:02:23.513 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:23.513 C linker for the host machine: cc ld.bfd 2.40-14 00:02:23.513 Host machine cpu family: x86_64 00:02:23.513 Host machine cpu: x86_64 00:02:23.513 Message: ## Building in Developer Mode ## 00:02:23.513 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:23.513 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:23.513 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:23.513 Program python3 found: YES (/usr/bin/python3) 00:02:23.513 Program cat found: YES (/usr/bin/cat) 00:02:23.513 Compiler for C supports arguments -march=native: YES 00:02:23.513 Checking for size of "void *" : 8 00:02:23.513 Checking for size of "void *" : 8 (cached) 00:02:23.513 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:02:23.513 Library m found: YES 00:02:23.513 Library numa found: YES 00:02:23.513 Has header "numaif.h" : YES 00:02:23.513 Library fdt found: NO 00:02:23.513 Library execinfo found: NO 00:02:23.513 Has header "execinfo.h" : YES 00:02:23.513 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:23.513 Run-time dependency libarchive 
found: NO (tried pkgconfig)
00:02:23.513 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:23.513 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:23.513 Run-time dependency openssl found: YES 3.1.1
00:02:23.513 Run-time dependency libpcap found: YES 1.10.4
00:02:23.513 Has header "pcap.h" with dependency libpcap: YES
00:02:23.513 Compiler for C supports arguments -Wcast-qual: YES
00:02:23.513 Compiler for C supports arguments -Wdeprecated: YES
00:02:23.513 Compiler for C supports arguments -Wformat: YES
00:02:23.513 Compiler for C supports arguments -Wformat-nonliteral: NO
00:02:23.513 Compiler for C supports arguments -Wformat-security: NO
00:02:23.513 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:23.513 Compiler for C supports arguments -Wmissing-prototypes: YES
00:02:23.513 Compiler for C supports arguments -Wnested-externs: YES
00:02:23.513 Compiler for C supports arguments -Wold-style-definition: YES
00:02:23.513 Compiler for C supports arguments -Wpointer-arith: YES
00:02:23.513 Compiler for C supports arguments -Wsign-compare: YES
00:02:23.513 Compiler for C supports arguments -Wstrict-prototypes: YES
00:02:23.513 Compiler for C supports arguments -Wundef: YES
00:02:23.513 Compiler for C supports arguments -Wwrite-strings: YES
00:02:23.513 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:02:23.513 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:02:23.513 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:23.513 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:02:23.513 Program objdump found: YES (/usr/bin/objdump)
00:02:23.513 Compiler for C supports arguments -mavx512f: YES
00:02:23.513 Checking if "AVX512 checking" compiles: YES
00:02:23.513 Fetching value of define "__SSE4_2__" : 1
00:02:23.513 Fetching value of define "__AES__" : 1
00:02:23.513 Fetching value of define "__AVX__" : 1
00:02:23.513 Fetching value of define "__AVX2__" : 1
00:02:23.513 Fetching value of define "__AVX512BW__" : 1
00:02:23.513 Fetching value of define "__AVX512CD__" : 1
00:02:23.513 Fetching value of define "__AVX512DQ__" : 1
00:02:23.513 Fetching value of define "__AVX512F__" : 1
00:02:23.513 Fetching value of define "__AVX512VL__" : 1
00:02:23.513 Fetching value of define "__PCLMUL__" : 1
00:02:23.513 Fetching value of define "__RDRND__" : 1
00:02:23.513 Fetching value of define "__RDSEED__" : 1
00:02:23.513 Fetching value of define "__VPCLMULQDQ__" : 1
00:02:23.513 Fetching value of define "__znver1__" : (undefined)
00:02:23.513 Fetching value of define "__znver2__" : (undefined)
00:02:23.513 Fetching value of define "__znver3__" : (undefined)
00:02:23.513 Fetching value of define "__znver4__" : (undefined)
00:02:23.513 Library asan found: YES
00:02:23.513 Compiler for C supports arguments -Wno-format-truncation: YES
00:02:23.513 Message: lib/log: Defining dependency "log"
00:02:23.513 Message: lib/kvargs: Defining dependency "kvargs"
00:02:23.513 Message: lib/telemetry: Defining dependency "telemetry"
00:02:23.513 Library rt found: YES
00:02:23.513 Checking for function "getentropy" : NO
00:02:23.513 Message: lib/eal: Defining dependency "eal"
00:02:23.513 Message: lib/ring: Defining dependency "ring"
00:02:23.513 Message: lib/rcu: Defining dependency "rcu"
00:02:23.513 Message: lib/mempool: Defining dependency "mempool"
00:02:23.513 Message: lib/mbuf: Defining dependency "mbuf"
00:02:23.513 Fetching value of define "__PCLMUL__" : 1 (cached)
00:02:23.513 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:23.513 Fetching value of define "__AVX512BW__" : 1 (cached)
00:02:23.513 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:02:23.513 Fetching value of define "__AVX512VL__" : 1 (cached)
00:02:23.513 Fetching value of define "__VPCLMULQDQ__" : 1 (cached)
00:02:23.513 Compiler for C supports arguments -mpclmul: YES
00:02:23.513 Compiler for C supports arguments -maes: YES
00:02:23.513 Compiler for C supports arguments -mavx512f: YES (cached)
00:02:23.513 Compiler for C supports arguments -mavx512bw: YES
00:02:23.513 Compiler for C supports arguments -mavx512dq: YES
00:02:23.513 Compiler for C supports arguments -mavx512vl: YES
00:02:23.513 Compiler for C supports arguments -mvpclmulqdq: YES
00:02:23.513 Compiler for C supports arguments -mavx2: YES
00:02:23.513 Compiler for C supports arguments -mavx: YES
00:02:23.513 Message: lib/net: Defining dependency "net"
00:02:23.513 Message: lib/meter: Defining dependency "meter"
00:02:23.513 Message: lib/ethdev: Defining dependency "ethdev"
00:02:23.513 Message: lib/pci: Defining dependency "pci"
00:02:23.513 Message: lib/cmdline: Defining dependency "cmdline"
00:02:23.513 Message: lib/hash: Defining dependency "hash"
00:02:23.513 Message: lib/timer: Defining dependency "timer"
00:02:23.513 Message: lib/compressdev: Defining dependency "compressdev"
00:02:23.513 Message: lib/cryptodev: Defining dependency "cryptodev"
00:02:23.513 Message: lib/dmadev: Defining dependency "dmadev"
00:02:23.513 Compiler for C supports arguments -Wno-cast-qual: YES
00:02:23.513 Message: lib/power: Defining dependency "power"
00:02:23.513 Message: lib/reorder: Defining dependency "reorder"
00:02:23.513 Message: lib/security: Defining dependency "security"
00:02:23.513 Has header "linux/userfaultfd.h" : YES
00:02:23.513 Has header "linux/vduse.h" : YES
00:02:23.513 Message: lib/vhost: Defining dependency "vhost"
00:02:23.513 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:02:23.513 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:02:23.513 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:02:23.513 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:02:23.513 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:02:23.513 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:02:23.513 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:02:23.513 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:02:23.513 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:02:23.513 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:02:23.513 Program doxygen found: YES (/usr/local/bin/doxygen)
00:02:23.513 Configuring doxy-api-html.conf using configuration
00:02:23.513 Configuring doxy-api-man.conf using configuration
00:02:23.513 Program mandb found: YES (/usr/bin/mandb)
00:02:23.513 Program sphinx-build found: NO
00:02:23.513 Configuring rte_build_config.h using configuration
00:02:23.513 Message:
00:02:23.513 =================
00:02:23.513 Applications Enabled
00:02:23.513 =================
00:02:23.513
00:02:23.513 apps:
00:02:23.513
00:02:23.513
00:02:23.513 Message:
00:02:23.513 =================
00:02:23.513 Libraries Enabled
00:02:23.513 =================
00:02:23.513
00:02:23.513 libs:
00:02:23.513 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:02:23.513 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:02:23.514 cryptodev, dmadev, power, reorder, security, vhost,
00:02:23.514
00:02:23.514 Message:
00:02:23.514 ===============
00:02:23.514 Drivers Enabled
00:02:23.514 ===============
00:02:23.514
00:02:23.514 common:
00:02:23.514
00:02:23.514 bus:
00:02:23.514 pci, vdev,
00:02:23.514 mempool:
00:02:23.514 ring,
00:02:23.514 dma:
00:02:23.514
00:02:23.514 net:
00:02:23.514
00:02:23.514 crypto:
00:02:23.514
00:02:23.514 compress:
00:02:23.514
00:02:23.514 vdpa:
00:02:23.514
00:02:23.514
00:02:23.514 Message:
00:02:23.514 =================
00:02:23.514 Content Skipped
00:02:23.514 =================
00:02:23.514
00:02:23.514 apps:
00:02:23.514 dumpcap: explicitly disabled via build config
00:02:23.514 graph: explicitly disabled via build config
00:02:23.514 pdump: explicitly disabled via build config
00:02:23.514 proc-info: explicitly disabled via build config
00:02:23.514 test-acl: explicitly disabled via build config
00:02:23.514 test-bbdev: explicitly disabled via build config
00:02:23.514 test-cmdline: explicitly disabled via build config
00:02:23.514 test-compress-perf: explicitly disabled via build config
00:02:23.514 test-crypto-perf: explicitly disabled via build config
00:02:23.514 test-dma-perf: explicitly disabled via build config
00:02:23.514 test-eventdev: explicitly disabled via build config
00:02:23.514 test-fib: explicitly disabled via build config
00:02:23.514 test-flow-perf: explicitly disabled via build config
00:02:23.514 test-gpudev: explicitly disabled via build config
00:02:23.514 test-mldev: explicitly disabled via build config
00:02:23.514 test-pipeline: explicitly disabled via build config
00:02:23.514 test-pmd: explicitly disabled via build config
00:02:23.514 test-regex: explicitly disabled via build config
00:02:23.514 test-sad: explicitly disabled via build config
00:02:23.514 test-security-perf: explicitly disabled via build config
00:02:23.514
00:02:23.514 libs:
00:02:23.514 argparse: explicitly disabled via build config
00:02:23.514 metrics: explicitly disabled via build config
00:02:23.514 acl: explicitly disabled via build config
00:02:23.514 bbdev: explicitly disabled via build config
00:02:23.514 bitratestats: explicitly disabled via build config
00:02:23.514 bpf: explicitly disabled via build config
00:02:23.514 cfgfile: explicitly disabled via build config
00:02:23.514 distributor: explicitly disabled via build config
00:02:23.514 efd: explicitly disabled via build config
00:02:23.514 eventdev: explicitly disabled via build config
00:02:23.514 dispatcher: explicitly disabled via build config
00:02:23.514 gpudev: explicitly disabled via build config
00:02:23.514 gro: explicitly disabled via build config
00:02:23.514 gso: explicitly disabled via build config
00:02:23.514 ip_frag: explicitly disabled via build config
00:02:23.514 jobstats: explicitly disabled via build config
00:02:23.514 latencystats: explicitly disabled via build config
00:02:23.514 lpm: explicitly disabled via build config
00:02:23.514 member: explicitly disabled via build config
00:02:23.514 pcapng: explicitly disabled via build config
00:02:23.514 rawdev: explicitly disabled via build config
00:02:23.514 regexdev: explicitly disabled via build config
00:02:23.514 mldev: explicitly disabled via build config
00:02:23.514 rib: explicitly disabled via build config
00:02:23.514 sched: explicitly disabled via build config
00:02:23.514 stack: explicitly disabled via build config
00:02:23.514 ipsec: explicitly disabled via build config
00:02:23.514 pdcp: explicitly disabled via build config
00:02:23.514 fib: explicitly disabled via build config
00:02:23.514 port: explicitly disabled via build config
00:02:23.514 pdump: explicitly disabled via build config
00:02:23.514 table: explicitly disabled via build config
00:02:23.514 pipeline: explicitly disabled via build config
00:02:23.514 graph: explicitly disabled via build config
00:02:23.514 node: explicitly disabled via build config
00:02:23.514
00:02:23.514 drivers:
00:02:23.514 common/cpt: not in enabled drivers build config
00:02:23.514 common/dpaax: not in enabled drivers build config
00:02:23.514 common/iavf: not in enabled drivers build config
00:02:23.514 common/idpf: not in enabled drivers build config
00:02:23.514 common/ionic: not in enabled drivers build config
00:02:23.514 common/mvep: not in enabled drivers build config
00:02:23.514 common/octeontx: not in enabled drivers build config
00:02:23.514 bus/auxiliary: not in enabled drivers build config
00:02:23.514 bus/cdx: not in enabled drivers build config
00:02:23.514 bus/dpaa: not in enabled drivers build config
00:02:23.514 bus/fslmc: not in enabled drivers build config
00:02:23.514 bus/ifpga: not in enabled drivers build config
00:02:23.514 bus/platform: not in enabled drivers build config
00:02:23.514 bus/uacce: not in enabled drivers build config
00:02:23.514 bus/vmbus: not in enabled drivers build config
00:02:23.514 common/cnxk: not in enabled drivers build config
00:02:23.514 common/mlx5: not in enabled drivers build config
00:02:23.514 common/nfp: not in enabled drivers build config
00:02:23.514 common/nitrox: not in enabled drivers build config
00:02:23.514 common/qat: not in enabled drivers build config
00:02:23.514 common/sfc_efx: not in enabled drivers build config
00:02:23.514 mempool/bucket: not in enabled drivers build config
00:02:23.514 mempool/cnxk: not in enabled drivers build config
00:02:23.514 mempool/dpaa: not in enabled drivers build config
00:02:23.514 mempool/dpaa2: not in enabled drivers build config
00:02:23.514 mempool/octeontx: not in enabled drivers build config
00:02:23.514 mempool/stack: not in enabled drivers build config
00:02:23.514 dma/cnxk: not in enabled drivers build config
00:02:23.514 dma/dpaa: not in enabled drivers build config
00:02:23.514 dma/dpaa2: not in enabled drivers build config
00:02:23.514 dma/hisilicon: not in enabled drivers build config
00:02:23.514 dma/idxd: not in enabled drivers build config
00:02:23.514 dma/ioat: not in enabled drivers build config
00:02:23.514 dma/skeleton: not in enabled drivers build config
00:02:23.514 net/af_packet: not in enabled drivers build config
00:02:23.514 net/af_xdp: not in enabled drivers build config
00:02:23.514 net/ark: not in enabled drivers build config
00:02:23.514 net/atlantic: not in enabled drivers build config
00:02:23.514 net/avp: not in enabled drivers build config
00:02:23.514 net/axgbe: not in enabled drivers build config
00:02:23.514 net/bnx2x: not in enabled drivers build config
00:02:23.514 net/bnxt: not in enabled drivers build config
00:02:23.514 net/bonding: not in enabled drivers build config
00:02:23.514 net/cnxk: not in enabled drivers build config
00:02:23.514 net/cpfl: not in enabled drivers build config
00:02:23.514 net/cxgbe: not in enabled drivers build config
00:02:23.514 net/dpaa: not in enabled drivers build config
00:02:23.514 net/dpaa2: not in enabled drivers build config
00:02:23.514 net/e1000: not in enabled drivers build config
00:02:23.514 net/ena: not in enabled drivers build config
00:02:23.514 net/enetc: not in enabled drivers build config
00:02:23.514 net/enetfec: not in enabled drivers build config
00:02:23.514 net/enic: not in enabled drivers build config
00:02:23.514 net/failsafe: not in enabled drivers build config
00:02:23.514 net/fm10k: not in enabled drivers build config
00:02:23.514 net/gve: not in enabled drivers build config
00:02:23.514 net/hinic: not in enabled drivers build config
00:02:23.514 net/hns3: not in enabled drivers build config
00:02:23.514 net/i40e: not in enabled drivers build config
00:02:23.514 net/iavf: not in enabled drivers build config
00:02:23.514 net/ice: not in enabled drivers build config
00:02:23.514 net/idpf: not in enabled drivers build config
00:02:23.514 net/igc: not in enabled drivers build config
00:02:23.514 net/ionic: not in enabled drivers build config
00:02:23.514 net/ipn3ke: not in enabled drivers build config
00:02:23.514 net/ixgbe: not in enabled drivers build config
00:02:23.514 net/mana: not in enabled drivers build config
00:02:23.514 net/memif: not in enabled drivers build config
00:02:23.514 net/mlx4: not in enabled drivers build config
00:02:23.514 net/mlx5: not in enabled drivers build config
00:02:23.514 net/mvneta: not in enabled drivers build config
00:02:23.514 net/mvpp2: not in enabled drivers build config
00:02:23.514 net/netvsc: not in enabled drivers build config
00:02:23.514 net/nfb: not in enabled drivers build config
00:02:23.514 net/nfp: not in enabled drivers build config
00:02:23.514 net/ngbe: not in enabled drivers build config
00:02:23.514 net/null: not in enabled drivers build config
00:02:23.514 net/octeontx: not in enabled drivers build config
00:02:23.514 net/octeon_ep: not in enabled drivers build config
00:02:23.514 net/pcap: not in enabled drivers build config
00:02:23.514 net/pfe: not in enabled drivers build config
00:02:23.514 net/qede: not in enabled drivers build config
00:02:23.514 net/ring: not in enabled drivers build config
00:02:23.514 net/sfc: not in enabled drivers build config
00:02:23.514 net/softnic: not in enabled drivers build config
00:02:23.514 net/tap: not in enabled drivers build config
00:02:23.514 net/thunderx: not in enabled drivers build config
00:02:23.514 net/txgbe: not in enabled drivers build config
00:02:23.514 net/vdev_netvsc: not in enabled drivers build config
00:02:23.514 net/vhost: not in enabled drivers build config
00:02:23.514 net/virtio: not in enabled drivers build config
00:02:23.514 net/vmxnet3: not in enabled drivers build config
00:02:23.514 raw/*: missing internal dependency, "rawdev"
00:02:23.514 crypto/armv8: not in enabled drivers build config
00:02:23.514 crypto/bcmfs: not in enabled drivers build config
00:02:23.514 crypto/caam_jr: not in enabled drivers build config
00:02:23.514 crypto/ccp: not in enabled drivers build config
00:02:23.514 crypto/cnxk: not in enabled drivers build config
00:02:23.514 crypto/dpaa_sec: not in enabled drivers build config
00:02:23.514 crypto/dpaa2_sec: not in enabled drivers build config
00:02:23.514 crypto/ipsec_mb: not in enabled drivers build config
00:02:23.514 crypto/mlx5: not in enabled drivers build config
00:02:23.514 crypto/mvsam: not in enabled drivers build config
00:02:23.514 crypto/nitrox: not in enabled drivers build config
00:02:23.514 crypto/null: not in enabled drivers build config
00:02:23.514 crypto/octeontx: not in enabled drivers build config
00:02:23.514 crypto/openssl: not in enabled drivers build config
00:02:23.514 crypto/scheduler: not in enabled drivers build config
00:02:23.514 crypto/uadk: not in enabled drivers build config
00:02:23.514 crypto/virtio: not in enabled drivers build config
00:02:23.514 compress/isal: not in enabled drivers build config
00:02:23.514 compress/mlx5: not in enabled drivers build config
00:02:23.514 compress/nitrox: not in enabled drivers build config
00:02:23.514 compress/octeontx: not in enabled drivers build config
00:02:23.515 compress/zlib: not in enabled drivers build config
00:02:23.515 regex/*: missing internal dependency, "regexdev"
00:02:23.515 ml/*: missing internal dependency, "mldev"
00:02:23.515 vdpa/ifc: not in enabled drivers build config
00:02:23.515 vdpa/mlx5: not in enabled drivers build config
00:02:23.515 vdpa/nfp: not in enabled drivers build config
00:02:23.515 vdpa/sfc: not in enabled drivers build config
00:02:23.515 event/*: missing internal dependency, "eventdev"
00:02:23.515 baseband/*: missing internal dependency, "bbdev"
00:02:23.515 gpu/*: missing internal dependency, "gpudev"
00:02:23.515
00:02:23.515
00:02:23.773 Build targets in project: 84
00:02:23.773
00:02:23.773 DPDK 24.03.0
00:02:23.773
00:02:23.773 User defined options
00:02:23.773 buildtype : debug
00:02:23.773 default_library : shared
00:02:23.773 libdir : lib
00:02:23.773 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build
00:02:23.773 b_sanitize : address
00:02:23.773 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:02:23.773 c_link_args :
00:02:23.773 cpu_instruction_set: native
00:02:23.773 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test
00:02:23.773 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table
00:02:23.773 enable_docs : false
00:02:23.773 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring
00:02:23.773 enable_kmods : false
00:02:23.773 max_lcores : 128
00:02:23.773 tests : false
00:02:23.773
00:02:23.773 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:24.032 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp'
00:02:24.290 [1/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:02:24.290 [2/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:02:24.290 [3/267] Linking static target lib/librte_kvargs.a
00:02:24.290 [4/267] Compiling C object lib/librte_log.a.p/log_log.c.o
00:02:24.290 [5/267] Linking static target lib/librte_log.a
00:02:24.290 [6/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:02:24.548 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:02:24.548 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:02:24.548 [9/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:02:24.548 [10/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:02:24.548 [11/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:02:24.548 [12/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:02:24.548 [13/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:02:24.548 [14/267] Compiling C object
lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:24.806 [15/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:24.806 [16/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:24.806 [17/267] Linking static target lib/librte_telemetry.a 00:02:24.806 [18/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:24.806 [19/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:25.066 [20/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:25.066 [21/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:25.066 [22/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:25.066 [23/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.066 [24/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:25.066 [25/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:25.066 [26/267] Linking target lib/librte_log.so.24.1 00:02:25.066 [27/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:25.342 [28/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:25.342 [29/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:25.342 [30/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:25.342 [31/267] Linking target lib/librte_kvargs.so.24.1 00:02:25.342 [32/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:25.342 [33/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:25.342 [34/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:25.600 [35/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:25.600 [36/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:25.600 [37/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:25.600 [38/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:25.600 [39/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:25.600 [40/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:25.600 [41/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:25.600 [42/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:25.600 [43/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.600 [44/267] Linking target lib/librte_telemetry.so.24.1 00:02:25.600 [45/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:25.857 [46/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:25.857 [47/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:25.857 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:25.857 [49/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:25.857 [50/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:25.857 [51/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:25.857 [52/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:25.858 [53/267] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:26.116 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:26.116 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:26.116 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:26.116 [57/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:26.116 [58/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:26.116 [59/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:26.116 [60/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:26.374 [61/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:26.374 [62/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:26.374 [63/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:26.374 [64/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:26.374 [65/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:26.374 [66/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:26.632 [67/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:26.632 [68/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:26.632 [69/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:26.632 [70/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:26.632 [71/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:26.632 [72/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:26.632 [73/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:26.632 [74/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:26.891 [75/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:26.891 [76/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:26.891 [77/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:26.891 [78/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:26.891 [79/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:27.150 [80/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:27.150 [81/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:27.150 [82/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:27.150 [83/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:27.150 [84/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:27.150 [85/267] Linking static target lib/librte_ring.a 00:02:27.150 [86/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:27.150 [87/267] Linking static target lib/librte_eal.a 00:02:27.408 [88/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:27.408 [89/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:27.408 [90/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:27.408 [91/267] Linking static target lib/librte_rcu.a 00:02:27.408 [92/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:27.408 [93/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:27.666 [94/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 
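The "User defined options" block in the configure summary above records every setting this DPDK build was given. A minimal sketch of reproducing that configuration by hand follows; it is an assumption that a bare meson invocation like this matches what the job did, since the CI actually drives DPDK through SPDK's configure and dpdkbuild wrappers. All option names and values are copied verbatim from the summary; only the command framing is illustrative.

  # Sketch only, run from the DPDK source checkout; values taken from the
  # "User defined options" summary above, not from the job's actual command line.
  meson setup /home/vagrant/spdk_repo/spdk/dpdk/build-tmp \
    --buildtype=debug --default-library=shared --libdir=lib \
    --prefix=/home/vagrant/spdk_repo/spdk/dpdk/build \
    -Db_sanitize=address \
    -Dc_args='-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror' \
    -Dcpu_instruction_set=native \
    -Ddisable_apps=dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test \
    -Ddisable_libs=acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table \
    -Denable_docs=false -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
    -Denable_kmods=false -Dmax_lcores=128 -Dtests=false
  ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp   # same build dir as the "Entering directory" line

Note how the disable_apps and disable_libs lists line up one-for-one with the "Content Skipped" entries in the summary: the pruning happens at configure time, so the 267 ninja targets counted here cover only the enabled libraries plus the bus/pci, bus/vdev, and mempool/ring drivers.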
00:02:27.666 [95/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:27.666 [96/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:27.666 [97/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:27.666 [98/267] Linking static target lib/librte_mempool.a 00:02:27.666 [99/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:27.925 [100/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:27.925 [101/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:27.925 [102/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:27.925 [103/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.925 [104/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:28.183 [105/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:28.183 [106/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:28.183 [107/267] Linking static target lib/librte_meter.a 00:02:28.183 [108/267] Linking static target lib/librte_mbuf.a 00:02:28.183 [109/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:28.183 [110/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:02:28.183 [111/267] Linking static target lib/librte_net.a 00:02:28.183 [112/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:28.183 [113/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:28.441 [114/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:28.441 [115/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.441 [116/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.699 [117/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:28.699 [118/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:28.699 [119/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.699 [120/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:28.958 [121/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:28.958 [122/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.958 [123/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:28.958 [124/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:28.958 [125/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:28.958 [126/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:28.958 [127/267] Linking static target lib/librte_pci.a 00:02:28.958 [128/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:29.216 [129/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:29.216 [130/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:29.216 [131/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:29.216 [132/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:29.216 [133/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.216 [134/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:29.216 [135/267] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:29.474 [136/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:29.474 [137/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:29.474 [138/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:29.474 [139/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:29.474 [140/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:29.474 [141/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:29.474 [142/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:29.474 [143/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:29.474 [144/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:29.474 [145/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:29.474 [146/267] Linking static target lib/librte_cmdline.a 00:02:29.474 [147/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:29.732 [148/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:29.732 [149/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:29.732 [150/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:29.732 [151/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:29.732 [152/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:29.990 [153/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:29.990 [154/267] Linking static target lib/librte_timer.a 00:02:29.990 [155/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:29.990 [156/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:29.990 [157/267] Linking static target lib/librte_compressdev.a 00:02:30.248 [158/267] Linking static target lib/librte_ethdev.a 00:02:30.248 [159/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:30.248 [160/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:30.248 [161/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:30.248 [162/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:30.507 [163/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:30.507 [164/267] Linking static target lib/librte_dmadev.a 00:02:30.507 [165/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.507 [166/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:30.507 [167/267] Linking static target lib/librte_hash.a 00:02:30.507 [168/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:30.507 [169/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:30.765 [170/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:30.766 [171/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:30.766 [172/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.766 [173/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:30.766 [174/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.023 [175/267] Compiling C 
object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:31.023 [176/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:31.023 [177/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:31.023 [178/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:31.023 [179/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.023 [180/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:31.023 [181/267] Linking static target lib/librte_cryptodev.a 00:02:31.023 [182/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:31.282 [183/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:31.282 [184/267] Linking static target lib/librte_power.a 00:02:31.282 [185/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.540 [186/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:31.540 [187/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:31.540 [188/267] Linking static target lib/librte_reorder.a 00:02:31.540 [189/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:31.540 [190/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:31.540 [191/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:31.540 [192/267] Linking static target lib/librte_security.a 00:02:31.799 [193/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.057 [194/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:32.057 [195/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.057 [196/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:32.315 [197/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.315 [198/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:32.315 [199/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:32.315 [200/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:32.573 [201/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:32.573 [202/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:32.573 [203/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:32.573 [204/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:32.574 [205/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:32.832 [206/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:32.832 [207/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:32.832 [208/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:32.832 [209/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:32.832 [210/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.091 [211/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:33.091 [212/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:33.091 [213/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:33.091 [214/267] Linking static target drivers/librte_bus_vdev.a 
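At this point ninja has linked the first driver archive (librte_bus_vdev.a above; librte_bus_pci.a follows shortly), and the surrounding compile lines show each driver built twice, once under a .a.p object directory and once under .so.24.1.p, i.e. a static and a shared flavor from the same sources. For anyone picking apart a result of this stage by hand, stock binutils is enough; these commands are illustrative and not part of the CI job:

  # Inspect the freshly linked static driver archive; the path comes from the
  # "ninja: Entering directory" line earlier in this build.
  cd /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
  ar t drivers/librte_bus_vdev.a                       # list the member objects that were linked in
  nm --defined-only drivers/librte_bus_vdev.a | less   # symbols each member defines

The rte_bus_vdev.pmd.c generated a few steps earlier is part of the same pattern: meson emits a small generated source per driver and compiles it into both flavors alongside the hand-written objects.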
00:02:33.091 [215/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:33.091 [216/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:33.091 [217/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:33.091 [218/267] Linking static target drivers/librte_bus_pci.a 00:02:33.091 [219/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:33.091 [220/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:33.091 [221/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:33.091 [222/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:33.091 [223/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:33.091 [224/267] Linking static target drivers/librte_mempool_ring.a 00:02:33.349 [225/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.349 [226/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.608 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:34.981 [228/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.981 [229/267] Linking target lib/librte_eal.so.24.1 00:02:34.981 [230/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:34.981 [231/267] Linking target lib/librte_pci.so.24.1 00:02:34.981 [232/267] Linking target lib/librte_meter.so.24.1 00:02:34.981 [233/267] Linking target lib/librte_ring.so.24.1 00:02:34.981 [234/267] Linking target lib/librte_timer.so.24.1 00:02:34.981 [235/267] Linking target lib/librte_dmadev.so.24.1 00:02:34.981 [236/267] Linking target drivers/librte_bus_vdev.so.24.1 00:02:34.981 [237/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:34.981 [238/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:34.981 [239/267] Linking target drivers/librte_bus_pci.so.24.1 00:02:34.981 [240/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:34.981 [241/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:34.981 [242/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:35.240 [243/267] Linking target lib/librte_rcu.so.24.1 00:02:35.240 [244/267] Linking target lib/librte_mempool.so.24.1 00:02:35.240 [245/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:35.240 [246/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:35.240 [247/267] Linking target drivers/librte_mempool_ring.so.24.1 00:02:35.240 [248/267] Linking target lib/librte_mbuf.so.24.1 00:02:35.240 [249/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:35.498 [250/267] Linking target lib/librte_cryptodev.so.24.1 00:02:35.498 [251/267] Linking target lib/librte_compressdev.so.24.1 00:02:35.498 [252/267] Linking target lib/librte_reorder.so.24.1 00:02:35.498 [253/267] Linking target lib/librte_net.so.24.1 00:02:35.498 [254/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:35.498 [255/267] Generating symbol file 
lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:35.498 [256/267] Linking target lib/librte_cmdline.so.24.1 00:02:35.498 [257/267] Linking target lib/librte_security.so.24.1 00:02:35.498 [258/267] Linking target lib/librte_hash.so.24.1 00:02:35.756 [259/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.756 [260/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:35.756 [261/267] Linking target lib/librte_ethdev.so.24.1 00:02:35.756 [262/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:36.016 [263/267] Linking target lib/librte_power.so.24.1 00:02:36.949 [264/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:36.949 [265/267] Linking static target lib/librte_vhost.a 00:02:38.359 [266/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.359 [267/267] Linking target lib/librte_vhost.so.24.1 00:02:38.359 INFO: autodetecting backend as ninja 00:02:38.359 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:50.558 CC lib/log/log_flags.o 00:02:50.558 CC lib/log/log_deprecated.o 00:02:50.558 CC lib/log/log.o 00:02:50.558 CC lib/ut_mock/mock.o 00:02:50.558 CC lib/ut/ut.o 00:02:50.816 LIB libspdk_ut_mock.a 00:02:50.816 LIB libspdk_log.a 00:02:50.816 LIB libspdk_ut.a 00:02:50.816 SO libspdk_ut_mock.so.6.0 00:02:50.816 SO libspdk_log.so.7.0 00:02:50.816 SO libspdk_ut.so.2.0 00:02:50.816 SYMLINK libspdk_log.so 00:02:50.816 SYMLINK libspdk_ut_mock.so 00:02:50.816 SYMLINK libspdk_ut.so 00:02:51.073 CXX lib/trace_parser/trace.o 00:02:51.073 CC lib/util/base64.o 00:02:51.073 CC lib/util/bit_array.o 00:02:51.073 CC lib/util/crc16.o 00:02:51.073 CC lib/util/cpuset.o 00:02:51.073 CC lib/ioat/ioat.o 00:02:51.073 CC lib/util/crc32.o 00:02:51.073 CC lib/util/crc32c.o 00:02:51.073 CC lib/dma/dma.o 00:02:51.073 CC lib/vfio_user/host/vfio_user_pci.o 00:02:51.073 CC lib/util/crc32_ieee.o 00:02:51.073 CC lib/util/crc64.o 00:02:51.073 CC lib/util/dif.o 00:02:51.073 CC lib/util/fd.o 00:02:51.331 CC lib/util/fd_group.o 00:02:51.331 CC lib/util/file.o 00:02:51.331 LIB libspdk_dma.a 00:02:51.331 CC lib/util/hexlify.o 00:02:51.331 CC lib/util/iov.o 00:02:51.331 SO libspdk_dma.so.5.0 00:02:51.331 CC lib/vfio_user/host/vfio_user.o 00:02:51.331 LIB libspdk_ioat.a 00:02:51.331 SYMLINK libspdk_dma.so 00:02:51.331 CC lib/util/math.o 00:02:51.331 SO libspdk_ioat.so.7.0 00:02:51.331 CC lib/util/net.o 00:02:51.331 CC lib/util/pipe.o 00:02:51.331 CC lib/util/strerror_tls.o 00:02:51.331 CC lib/util/string.o 00:02:51.331 SYMLINK libspdk_ioat.so 00:02:51.331 CC lib/util/uuid.o 00:02:51.331 CC lib/util/xor.o 00:02:51.331 CC lib/util/zipf.o 00:02:51.331 CC lib/util/md5.o 00:02:51.589 LIB libspdk_vfio_user.a 00:02:51.589 SO libspdk_vfio_user.so.5.0 00:02:51.589 SYMLINK libspdk_vfio_user.so 00:02:51.847 LIB libspdk_util.a 00:02:51.848 SO libspdk_util.so.10.0 00:02:51.848 LIB libspdk_trace_parser.a 00:02:51.848 SO libspdk_trace_parser.so.6.0 00:02:51.848 SYMLINK libspdk_util.so 00:02:51.848 SYMLINK libspdk_trace_parser.so 00:02:52.105 CC lib/idxd/idxd.o 00:02:52.105 CC lib/idxd/idxd_kernel.o 00:02:52.105 CC lib/idxd/idxd_user.o 00:02:52.105 CC lib/rdma_utils/rdma_utils.o 00:02:52.105 CC lib/json/json_util.o 00:02:52.106 CC lib/json/json_parse.o 00:02:52.106 CC lib/env_dpdk/env.o 00:02:52.106 CC lib/rdma_provider/common.o 00:02:52.106 CC lib/conf/conf.o 00:02:52.106 
CC lib/vmd/vmd.o 00:02:52.106 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:52.106 CC lib/json/json_write.o 00:02:52.364 CC lib/vmd/led.o 00:02:52.364 LIB libspdk_conf.a 00:02:52.364 CC lib/env_dpdk/memory.o 00:02:52.364 CC lib/env_dpdk/pci.o 00:02:52.364 SO libspdk_conf.so.6.0 00:02:52.364 LIB libspdk_rdma_utils.a 00:02:52.364 SO libspdk_rdma_utils.so.1.0 00:02:52.364 CC lib/env_dpdk/init.o 00:02:52.364 SYMLINK libspdk_conf.so 00:02:52.364 CC lib/env_dpdk/threads.o 00:02:52.364 LIB libspdk_rdma_provider.a 00:02:52.364 SYMLINK libspdk_rdma_utils.so 00:02:52.364 CC lib/env_dpdk/pci_ioat.o 00:02:52.364 SO libspdk_rdma_provider.so.6.0 00:02:52.364 SYMLINK libspdk_rdma_provider.so 00:02:52.364 CC lib/env_dpdk/pci_virtio.o 00:02:52.364 CC lib/env_dpdk/pci_vmd.o 00:02:52.622 CC lib/env_dpdk/pci_idxd.o 00:02:52.622 LIB libspdk_json.a 00:02:52.622 SO libspdk_json.so.6.0 00:02:52.622 CC lib/env_dpdk/pci_event.o 00:02:52.622 SYMLINK libspdk_json.so 00:02:52.622 CC lib/env_dpdk/sigbus_handler.o 00:02:52.622 CC lib/env_dpdk/pci_dpdk.o 00:02:52.622 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:52.622 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:52.622 LIB libspdk_idxd.a 00:02:52.622 SO libspdk_idxd.so.12.1 00:02:52.622 LIB libspdk_vmd.a 00:02:52.881 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:52.881 CC lib/jsonrpc/jsonrpc_client.o 00:02:52.881 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:52.881 CC lib/jsonrpc/jsonrpc_server.o 00:02:52.881 SYMLINK libspdk_idxd.so 00:02:52.881 SO libspdk_vmd.so.6.0 00:02:52.881 SYMLINK libspdk_vmd.so 00:02:52.881 LIB libspdk_jsonrpc.a 00:02:53.157 SO libspdk_jsonrpc.so.6.0 00:02:53.157 SYMLINK libspdk_jsonrpc.so 00:02:53.417 CC lib/rpc/rpc.o 00:02:53.417 LIB libspdk_rpc.a 00:02:53.417 LIB libspdk_env_dpdk.a 00:02:53.675 SO libspdk_rpc.so.6.0 00:02:53.675 SYMLINK libspdk_rpc.so 00:02:53.675 SO libspdk_env_dpdk.so.15.0 00:02:53.675 SYMLINK libspdk_env_dpdk.so 00:02:53.675 CC lib/trace/trace.o 00:02:53.675 CC lib/trace/trace_rpc.o 00:02:53.675 CC lib/trace/trace_flags.o 00:02:53.675 CC lib/notify/notify_rpc.o 00:02:53.675 CC lib/notify/notify.o 00:02:53.675 CC lib/keyring/keyring.o 00:02:53.675 CC lib/keyring/keyring_rpc.o 00:02:53.936 LIB libspdk_notify.a 00:02:53.936 SO libspdk_notify.so.6.0 00:02:53.936 LIB libspdk_keyring.a 00:02:53.936 SYMLINK libspdk_notify.so 00:02:53.936 LIB libspdk_trace.a 00:02:53.936 SO libspdk_keyring.so.2.0 00:02:53.936 SO libspdk_trace.so.11.0 00:02:54.198 SYMLINK libspdk_keyring.so 00:02:54.198 SYMLINK libspdk_trace.so 00:02:54.459 CC lib/sock/sock.o 00:02:54.459 CC lib/thread/thread.o 00:02:54.459 CC lib/sock/sock_rpc.o 00:02:54.459 CC lib/thread/iobuf.o 00:02:54.719 LIB libspdk_sock.a 00:02:54.719 SO libspdk_sock.so.10.0 00:02:54.979 SYMLINK libspdk_sock.so 00:02:55.238 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:55.238 CC lib/nvme/nvme_ns_cmd.o 00:02:55.238 CC lib/nvme/nvme_fabric.o 00:02:55.238 CC lib/nvme/nvme_ctrlr.o 00:02:55.238 CC lib/nvme/nvme_pcie.o 00:02:55.238 CC lib/nvme/nvme.o 00:02:55.238 CC lib/nvme/nvme_ns.o 00:02:55.238 CC lib/nvme/nvme_pcie_common.o 00:02:55.238 CC lib/nvme/nvme_qpair.o 00:02:55.804 CC lib/nvme/nvme_quirks.o 00:02:55.804 CC lib/nvme/nvme_transport.o 00:02:55.804 CC lib/nvme/nvme_discovery.o 00:02:55.804 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:55.804 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:55.804 LIB libspdk_thread.a 00:02:56.062 CC lib/nvme/nvme_tcp.o 00:02:56.062 SO libspdk_thread.so.10.1 00:02:56.062 CC lib/nvme/nvme_opal.o 00:02:56.062 SYMLINK libspdk_thread.so 00:02:56.062 CC lib/nvme/nvme_io_msg.o 00:02:56.062 CC 
lib/nvme/nvme_poll_group.o 00:02:56.322 CC lib/nvme/nvme_zns.o 00:02:56.322 CC lib/nvme/nvme_stubs.o 00:02:56.322 CC lib/nvme/nvme_auth.o 00:02:56.580 CC lib/nvme/nvme_cuse.o 00:02:56.580 CC lib/accel/accel.o 00:02:56.580 CC lib/nvme/nvme_rdma.o 00:02:56.580 CC lib/blob/blobstore.o 00:02:56.838 CC lib/accel/accel_rpc.o 00:02:56.838 CC lib/init/json_config.o 00:02:56.838 CC lib/init/subsystem.o 00:02:56.838 CC lib/init/subsystem_rpc.o 00:02:56.838 CC lib/init/rpc.o 00:02:57.096 CC lib/accel/accel_sw.o 00:02:57.096 CC lib/blob/request.o 00:02:57.096 LIB libspdk_init.a 00:02:57.096 SO libspdk_init.so.6.0 00:02:57.096 SYMLINK libspdk_init.so 00:02:57.354 CC lib/blob/zeroes.o 00:02:57.354 CC lib/blob/blob_bs_dev.o 00:02:57.354 CC lib/virtio/virtio.o 00:02:57.354 CC lib/virtio/virtio_vhost_user.o 00:02:57.354 CC lib/fsdev/fsdev.o 00:02:57.354 CC lib/event/app.o 00:02:57.354 CC lib/fsdev/fsdev_io.o 00:02:57.354 CC lib/fsdev/fsdev_rpc.o 00:02:57.354 CC lib/event/reactor.o 00:02:57.612 CC lib/virtio/virtio_vfio_user.o 00:02:57.612 CC lib/virtio/virtio_pci.o 00:02:57.612 LIB libspdk_accel.a 00:02:57.612 SO libspdk_accel.so.16.0 00:02:57.612 CC lib/event/log_rpc.o 00:02:57.612 CC lib/event/app_rpc.o 00:02:57.612 SYMLINK libspdk_accel.so 00:02:57.612 CC lib/event/scheduler_static.o 00:02:57.870 LIB libspdk_fsdev.a 00:02:57.870 SO libspdk_fsdev.so.1.0 00:02:57.870 CC lib/bdev/bdev.o 00:02:57.870 CC lib/bdev/bdev_rpc.o 00:02:57.870 CC lib/bdev/bdev_zone.o 00:02:57.870 CC lib/bdev/scsi_nvme.o 00:02:57.870 CC lib/bdev/part.o 00:02:57.870 LIB libspdk_virtio.a 00:02:57.870 LIB libspdk_event.a 00:02:57.870 LIB libspdk_nvme.a 00:02:57.870 SYMLINK libspdk_fsdev.so 00:02:57.870 SO libspdk_virtio.so.7.0 00:02:57.870 SO libspdk_event.so.14.0 00:02:57.870 SYMLINK libspdk_virtio.so 00:02:57.870 SYMLINK libspdk_event.so 00:02:58.128 SO libspdk_nvme.so.14.0 00:02:58.128 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:58.386 SYMLINK libspdk_nvme.so 00:02:58.646 LIB libspdk_fuse_dispatcher.a 00:02:58.646 SO libspdk_fuse_dispatcher.so.1.0 00:02:58.646 SYMLINK libspdk_fuse_dispatcher.so 00:03:00.022 LIB libspdk_blob.a 00:03:00.022 SO libspdk_blob.so.11.0 00:03:00.022 SYMLINK libspdk_blob.so 00:03:00.280 CC lib/lvol/lvol.o 00:03:00.280 CC lib/blobfs/blobfs.o 00:03:00.280 CC lib/blobfs/tree.o 00:03:00.538 LIB libspdk_bdev.a 00:03:00.538 SO libspdk_bdev.so.16.0 00:03:00.798 SYMLINK libspdk_bdev.so 00:03:00.798 CC lib/ftl/ftl_core.o 00:03:00.798 CC lib/ftl/ftl_init.o 00:03:00.798 CC lib/ftl/ftl_layout.o 00:03:00.798 CC lib/ftl/ftl_debug.o 00:03:00.798 CC lib/nvmf/ctrlr.o 00:03:00.798 CC lib/ublk/ublk.o 00:03:00.798 CC lib/nbd/nbd.o 00:03:00.798 CC lib/scsi/dev.o 00:03:01.056 CC lib/scsi/lun.o 00:03:01.056 LIB libspdk_lvol.a 00:03:01.056 CC lib/nbd/nbd_rpc.o 00:03:01.056 SO libspdk_lvol.so.10.0 00:03:01.056 CC lib/scsi/port.o 00:03:01.056 SYMLINK libspdk_lvol.so 00:03:01.056 CC lib/scsi/scsi.o 00:03:01.056 CC lib/scsi/scsi_bdev.o 00:03:01.056 LIB libspdk_blobfs.a 00:03:01.314 CC lib/ftl/ftl_io.o 00:03:01.314 SO libspdk_blobfs.so.10.0 00:03:01.314 CC lib/scsi/scsi_pr.o 00:03:01.314 CC lib/nvmf/ctrlr_discovery.o 00:03:01.314 CC lib/ftl/ftl_sb.o 00:03:01.314 SYMLINK libspdk_blobfs.so 00:03:01.314 LIB libspdk_nbd.a 00:03:01.314 CC lib/ftl/ftl_l2p.o 00:03:01.314 SO libspdk_nbd.so.7.0 00:03:01.314 CC lib/ftl/ftl_l2p_flat.o 00:03:01.314 SYMLINK libspdk_nbd.so 00:03:01.314 CC lib/scsi/scsi_rpc.o 00:03:01.314 CC lib/scsi/task.o 00:03:01.314 CC lib/ftl/ftl_nv_cache.o 00:03:01.628 CC lib/ftl/ftl_band.o 00:03:01.628 CC 
lib/ftl/ftl_band_ops.o 00:03:01.629 CC lib/ublk/ublk_rpc.o 00:03:01.629 CC lib/ftl/ftl_writer.o 00:03:01.629 CC lib/ftl/ftl_rq.o 00:03:01.629 CC lib/nvmf/ctrlr_bdev.o 00:03:01.629 LIB libspdk_scsi.a 00:03:01.629 SO libspdk_scsi.so.9.0 00:03:01.629 LIB libspdk_ublk.a 00:03:01.629 SO libspdk_ublk.so.3.0 00:03:01.629 CC lib/nvmf/subsystem.o 00:03:01.629 CC lib/ftl/ftl_reloc.o 00:03:01.629 SYMLINK libspdk_scsi.so 00:03:01.629 CC lib/nvmf/nvmf.o 00:03:01.629 SYMLINK libspdk_ublk.so 00:03:01.629 CC lib/ftl/ftl_l2p_cache.o 00:03:01.904 CC lib/ftl/ftl_p2l.o 00:03:01.904 CC lib/nvmf/nvmf_rpc.o 00:03:01.904 CC lib/ftl/ftl_p2l_log.o 00:03:02.163 CC lib/ftl/mngt/ftl_mngt.o 00:03:02.163 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:02.163 CC lib/iscsi/conn.o 00:03:02.163 CC lib/vhost/vhost.o 00:03:02.421 CC lib/vhost/vhost_rpc.o 00:03:02.421 CC lib/iscsi/init_grp.o 00:03:02.421 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:02.421 CC lib/iscsi/iscsi.o 00:03:02.679 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:02.679 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:02.679 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:02.679 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:02.679 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:02.679 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:02.679 CC lib/vhost/vhost_scsi.o 00:03:02.679 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:02.938 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:02.938 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:02.938 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:02.938 CC lib/ftl/utils/ftl_conf.o 00:03:02.938 CC lib/nvmf/transport.o 00:03:02.938 CC lib/vhost/vhost_blk.o 00:03:02.938 CC lib/vhost/rte_vhost_user.o 00:03:02.938 CC lib/iscsi/param.o 00:03:02.938 CC lib/ftl/utils/ftl_md.o 00:03:02.938 CC lib/ftl/utils/ftl_mempool.o 00:03:03.196 CC lib/nvmf/tcp.o 00:03:03.196 CC lib/ftl/utils/ftl_bitmap.o 00:03:03.196 CC lib/nvmf/stubs.o 00:03:03.196 CC lib/nvmf/mdns_server.o 00:03:03.454 CC lib/nvmf/rdma.o 00:03:03.454 CC lib/ftl/utils/ftl_property.o 00:03:03.454 CC lib/nvmf/auth.o 00:03:03.454 CC lib/iscsi/portal_grp.o 00:03:03.454 CC lib/iscsi/tgt_node.o 00:03:03.712 CC lib/iscsi/iscsi_subsystem.o 00:03:03.712 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:03.712 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:03.712 CC lib/iscsi/iscsi_rpc.o 00:03:03.971 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:03.971 CC lib/iscsi/task.o 00:03:03.971 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:03.971 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:03.971 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:03.971 LIB libspdk_vhost.a 00:03:03.971 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:03.971 SO libspdk_vhost.so.8.0 00:03:03.971 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:03.971 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:03.971 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:03.971 LIB libspdk_iscsi.a 00:03:04.229 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:04.229 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:04.229 SYMLINK libspdk_vhost.so 00:03:04.229 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:04.229 CC lib/ftl/base/ftl_base_dev.o 00:03:04.229 SO libspdk_iscsi.so.8.0 00:03:04.229 CC lib/ftl/base/ftl_base_bdev.o 00:03:04.229 CC lib/ftl/ftl_trace.o 00:03:04.229 SYMLINK libspdk_iscsi.so 00:03:04.488 LIB libspdk_ftl.a 00:03:04.746 SO libspdk_ftl.so.9.0 00:03:05.003 SYMLINK libspdk_ftl.so 00:03:05.260 LIB libspdk_nvmf.a 00:03:05.260 SO libspdk_nvmf.so.19.0 00:03:05.519 SYMLINK libspdk_nvmf.so 00:03:05.777 CC module/env_dpdk/env_dpdk_rpc.o 00:03:05.777 CC module/keyring/file/keyring.o 00:03:05.777 CC module/keyring/linux/keyring.o 00:03:05.777 CC module/scheduler/gscheduler/gscheduler.o 
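The repeating pattern in this stretch of SPDK's make output is: CC lines compile objects, LIB lines archive them into libspdk_*.a static libraries, SO lines link the versioned shared object, and SYMLINK lines drop the unversioned development name next to it. A hedged sketch of what one SO/SYMLINK pair amounts to, using a hypothetical libspdk_foo at version 7.0; the soname scheme and linker flags shown are assumptions, since the log prints only the quiet rule tags, not the underlying commands:

  # Hypothetical example; SPDK's make rules drive this, these are not its literal commands.
  cc -shared -Wl,-soname,libspdk_foo.so.7 \
     -Wl,--whole-archive libspdk_foo.a -Wl,--no-whole-archive \
     -o libspdk_foo.so.7.0                   # the "SO" step: versioned shared object
  ln -sf libspdk_foo.so.7.0 libspdk_foo.so   # the "SYMLINK" step: unversioned link-time name

The version numbers in the SO lines differ from library to library, which is what a per-library ABI versioning scheme looks like: consumers link against the unversioned symlink, while the runtime loader resolves the versioned name.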
00:03:05.777 CC module/blob/bdev/blob_bdev.o 00:03:05.777 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:05.777 CC module/accel/error/accel_error.o 00:03:05.777 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:05.777 CC module/sock/posix/posix.o 00:03:05.777 CC module/fsdev/aio/fsdev_aio.o 00:03:05.777 LIB libspdk_env_dpdk_rpc.a 00:03:05.777 SO libspdk_env_dpdk_rpc.so.6.0 00:03:06.035 CC module/keyring/linux/keyring_rpc.o 00:03:06.035 CC module/keyring/file/keyring_rpc.o 00:03:06.035 LIB libspdk_scheduler_gscheduler.a 00:03:06.035 LIB libspdk_scheduler_dpdk_governor.a 00:03:06.035 SO libspdk_scheduler_gscheduler.so.4.0 00:03:06.035 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:06.035 SYMLINK libspdk_env_dpdk_rpc.so 00:03:06.035 CC module/accel/error/accel_error_rpc.o 00:03:06.035 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:06.035 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:06.035 SYMLINK libspdk_scheduler_gscheduler.so 00:03:06.035 CC module/fsdev/aio/linux_aio_mgr.o 00:03:06.035 LIB libspdk_keyring_linux.a 00:03:06.035 LIB libspdk_keyring_file.a 00:03:06.035 LIB libspdk_scheduler_dynamic.a 00:03:06.035 SO libspdk_keyring_linux.so.1.0 00:03:06.035 LIB libspdk_blob_bdev.a 00:03:06.035 SO libspdk_keyring_file.so.2.0 00:03:06.035 SO libspdk_blob_bdev.so.11.0 00:03:06.035 SO libspdk_scheduler_dynamic.so.4.0 00:03:06.035 SYMLINK libspdk_keyring_linux.so 00:03:06.035 SYMLINK libspdk_keyring_file.so 00:03:06.035 LIB libspdk_accel_error.a 00:03:06.035 SYMLINK libspdk_blob_bdev.so 00:03:06.035 SYMLINK libspdk_scheduler_dynamic.so 00:03:06.035 CC module/accel/ioat/accel_ioat.o 00:03:06.035 CC module/accel/ioat/accel_ioat_rpc.o 00:03:06.035 SO libspdk_accel_error.so.2.0 00:03:06.035 SYMLINK libspdk_accel_error.so 00:03:06.293 CC module/accel/iaa/accel_iaa.o 00:03:06.293 CC module/accel/dsa/accel_dsa.o 00:03:06.293 LIB libspdk_accel_ioat.a 00:03:06.293 SO libspdk_accel_ioat.so.6.0 00:03:06.293 CC module/bdev/delay/vbdev_delay.o 00:03:06.294 CC module/bdev/gpt/gpt.o 00:03:06.294 CC module/blobfs/bdev/blobfs_bdev.o 00:03:06.294 CC module/bdev/error/vbdev_error.o 00:03:06.294 SYMLINK libspdk_accel_ioat.so 00:03:06.294 CC module/bdev/error/vbdev_error_rpc.o 00:03:06.294 CC module/bdev/lvol/vbdev_lvol.o 00:03:06.294 LIB libspdk_fsdev_aio.a 00:03:06.294 CC module/accel/iaa/accel_iaa_rpc.o 00:03:06.294 SO libspdk_fsdev_aio.so.1.0 00:03:06.552 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:06.552 CC module/bdev/gpt/vbdev_gpt.o 00:03:06.552 LIB libspdk_sock_posix.a 00:03:06.552 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:06.552 SYMLINK libspdk_fsdev_aio.so 00:03:06.552 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:06.552 SO libspdk_sock_posix.so.6.0 00:03:06.552 LIB libspdk_accel_iaa.a 00:03:06.552 CC module/accel/dsa/accel_dsa_rpc.o 00:03:06.552 SO libspdk_accel_iaa.so.3.0 00:03:06.552 SYMLINK libspdk_sock_posix.so 00:03:06.552 LIB libspdk_bdev_error.a 00:03:06.552 SYMLINK libspdk_accel_iaa.so 00:03:06.552 LIB libspdk_blobfs_bdev.a 00:03:06.552 SO libspdk_bdev_error.so.6.0 00:03:06.552 SO libspdk_blobfs_bdev.so.6.0 00:03:06.552 LIB libspdk_bdev_delay.a 00:03:06.552 LIB libspdk_accel_dsa.a 00:03:06.552 SYMLINK libspdk_bdev_error.so 00:03:06.552 SO libspdk_accel_dsa.so.5.0 00:03:06.552 SO libspdk_bdev_delay.so.6.0 00:03:06.552 LIB libspdk_bdev_gpt.a 00:03:06.552 SYMLINK libspdk_blobfs_bdev.so 00:03:06.552 SO libspdk_bdev_gpt.so.6.0 00:03:06.552 SYMLINK libspdk_bdev_delay.so 00:03:06.552 CC module/bdev/malloc/bdev_malloc.o 00:03:06.552 SYMLINK libspdk_accel_dsa.so 00:03:06.552 CC 
module/bdev/malloc/bdev_malloc_rpc.o 00:03:06.810 CC module/bdev/null/bdev_null.o 00:03:06.810 SYMLINK libspdk_bdev_gpt.so 00:03:06.810 CC module/bdev/null/bdev_null_rpc.o 00:03:06.810 CC module/bdev/nvme/bdev_nvme.o 00:03:06.810 CC module/bdev/passthru/vbdev_passthru.o 00:03:06.810 LIB libspdk_bdev_lvol.a 00:03:06.810 CC module/bdev/raid/bdev_raid.o 00:03:06.810 CC module/bdev/split/vbdev_split.o 00:03:06.810 SO libspdk_bdev_lvol.so.6.0 00:03:06.810 CC module/bdev/split/vbdev_split_rpc.o 00:03:06.810 SYMLINK libspdk_bdev_lvol.so 00:03:06.810 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:06.810 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:06.810 LIB libspdk_bdev_null.a 00:03:06.810 SO libspdk_bdev_null.so.6.0 00:03:07.068 SYMLINK libspdk_bdev_null.so 00:03:07.068 LIB libspdk_bdev_split.a 00:03:07.068 CC module/bdev/xnvme/bdev_xnvme.o 00:03:07.068 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:03:07.068 LIB libspdk_bdev_passthru.a 00:03:07.068 SO libspdk_bdev_split.so.6.0 00:03:07.068 SO libspdk_bdev_passthru.so.6.0 00:03:07.068 LIB libspdk_bdev_malloc.a 00:03:07.068 SYMLINK libspdk_bdev_split.so 00:03:07.068 SO libspdk_bdev_malloc.so.6.0 00:03:07.068 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:07.068 CC module/bdev/ftl/bdev_ftl.o 00:03:07.068 SYMLINK libspdk_bdev_passthru.so 00:03:07.068 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:07.068 CC module/bdev/aio/bdev_aio.o 00:03:07.068 SYMLINK libspdk_bdev_malloc.so 00:03:07.068 CC module/bdev/aio/bdev_aio_rpc.o 00:03:07.068 CC module/bdev/raid/bdev_raid_rpc.o 00:03:07.068 LIB libspdk_bdev_xnvme.a 00:03:07.068 SO libspdk_bdev_xnvme.so.3.0 00:03:07.327 LIB libspdk_bdev_zone_block.a 00:03:07.327 SYMLINK libspdk_bdev_xnvme.so 00:03:07.327 CC module/bdev/raid/bdev_raid_sb.o 00:03:07.327 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:07.327 SO libspdk_bdev_zone_block.so.6.0 00:03:07.327 SYMLINK libspdk_bdev_zone_block.so 00:03:07.327 LIB libspdk_bdev_ftl.a 00:03:07.327 CC module/bdev/raid/raid0.o 00:03:07.327 CC module/bdev/nvme/nvme_rpc.o 00:03:07.327 SO libspdk_bdev_ftl.so.6.0 00:03:07.327 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:07.327 CC module/bdev/iscsi/bdev_iscsi.o 00:03:07.327 LIB libspdk_bdev_aio.a 00:03:07.327 SYMLINK libspdk_bdev_ftl.so 00:03:07.327 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:07.327 SO libspdk_bdev_aio.so.6.0 00:03:07.586 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:07.586 SYMLINK libspdk_bdev_aio.so 00:03:07.586 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:07.586 CC module/bdev/raid/raid1.o 00:03:07.586 CC module/bdev/nvme/bdev_mdns_client.o 00:03:07.586 CC module/bdev/raid/concat.o 00:03:07.586 CC module/bdev/nvme/vbdev_opal.o 00:03:07.586 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:07.586 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:07.845 LIB libspdk_bdev_iscsi.a 00:03:07.845 SO libspdk_bdev_iscsi.so.6.0 00:03:07.845 LIB libspdk_bdev_raid.a 00:03:07.845 SYMLINK libspdk_bdev_iscsi.so 00:03:07.845 LIB libspdk_bdev_virtio.a 00:03:07.845 SO libspdk_bdev_raid.so.6.0 00:03:07.845 SO libspdk_bdev_virtio.so.6.0 00:03:07.845 SYMLINK libspdk_bdev_raid.so 00:03:08.104 SYMLINK libspdk_bdev_virtio.so 00:03:09.038 LIB libspdk_bdev_nvme.a 00:03:09.038 SO libspdk_bdev_nvme.so.7.0 00:03:09.296 SYMLINK libspdk_bdev_nvme.so 00:03:09.554 CC module/event/subsystems/vmd/vmd.o 00:03:09.554 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:09.554 CC module/event/subsystems/sock/sock.o 00:03:09.554 CC module/event/subsystems/fsdev/fsdev.o 00:03:09.554 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:09.554 CC 
module/event/subsystems/iobuf/iobuf_rpc.o 00:03:09.554 CC module/event/subsystems/iobuf/iobuf.o 00:03:09.554 CC module/event/subsystems/scheduler/scheduler.o 00:03:09.554 CC module/event/subsystems/keyring/keyring.o 00:03:09.812 LIB libspdk_event_sock.a 00:03:09.812 LIB libspdk_event_fsdev.a 00:03:09.812 LIB libspdk_event_scheduler.a 00:03:09.812 LIB libspdk_event_vhost_blk.a 00:03:09.812 SO libspdk_event_sock.so.5.0 00:03:09.812 LIB libspdk_event_vmd.a 00:03:09.812 SO libspdk_event_fsdev.so.1.0 00:03:09.812 SO libspdk_event_vhost_blk.so.3.0 00:03:09.812 LIB libspdk_event_iobuf.a 00:03:09.812 LIB libspdk_event_keyring.a 00:03:09.812 SO libspdk_event_scheduler.so.4.0 00:03:09.812 SO libspdk_event_vmd.so.6.0 00:03:09.812 SO libspdk_event_keyring.so.1.0 00:03:09.812 SO libspdk_event_iobuf.so.3.0 00:03:09.812 SYMLINK libspdk_event_sock.so 00:03:09.812 SYMLINK libspdk_event_fsdev.so 00:03:09.812 SYMLINK libspdk_event_vhost_blk.so 00:03:09.812 SYMLINK libspdk_event_scheduler.so 00:03:09.812 SYMLINK libspdk_event_vmd.so 00:03:09.812 SYMLINK libspdk_event_keyring.so 00:03:09.812 SYMLINK libspdk_event_iobuf.so 00:03:10.069 CC module/event/subsystems/accel/accel.o 00:03:10.069 LIB libspdk_event_accel.a 00:03:10.329 SO libspdk_event_accel.so.6.0 00:03:10.329 SYMLINK libspdk_event_accel.so 00:03:10.588 CC module/event/subsystems/bdev/bdev.o 00:03:10.588 LIB libspdk_event_bdev.a 00:03:10.588 SO libspdk_event_bdev.so.6.0 00:03:10.845 SYMLINK libspdk_event_bdev.so 00:03:10.845 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:10.845 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:10.845 CC module/event/subsystems/nbd/nbd.o 00:03:10.845 CC module/event/subsystems/scsi/scsi.o 00:03:10.845 CC module/event/subsystems/ublk/ublk.o 00:03:11.102 LIB libspdk_event_nbd.a 00:03:11.102 LIB libspdk_event_ublk.a 00:03:11.102 LIB libspdk_event_scsi.a 00:03:11.102 SO libspdk_event_nbd.so.6.0 00:03:11.102 SO libspdk_event_ublk.so.3.0 00:03:11.102 SO libspdk_event_scsi.so.6.0 00:03:11.102 SYMLINK libspdk_event_nbd.so 00:03:11.102 SYMLINK libspdk_event_ublk.so 00:03:11.102 LIB libspdk_event_nvmf.a 00:03:11.102 SYMLINK libspdk_event_scsi.so 00:03:11.102 SO libspdk_event_nvmf.so.6.0 00:03:11.102 SYMLINK libspdk_event_nvmf.so 00:03:11.359 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:11.359 CC module/event/subsystems/iscsi/iscsi.o 00:03:11.359 LIB libspdk_event_vhost_scsi.a 00:03:11.359 SO libspdk_event_vhost_scsi.so.3.0 00:03:11.359 LIB libspdk_event_iscsi.a 00:03:11.359 SO libspdk_event_iscsi.so.6.0 00:03:11.359 SYMLINK libspdk_event_vhost_scsi.so 00:03:11.617 SYMLINK libspdk_event_iscsi.so 00:03:11.617 SO libspdk.so.6.0 00:03:11.617 SYMLINK libspdk.so 00:03:11.875 CC app/trace_record/trace_record.o 00:03:11.875 CXX app/trace/trace.o 00:03:11.875 CC app/spdk_lspci/spdk_lspci.o 00:03:11.875 CC app/spdk_nvme_perf/perf.o 00:03:11.875 CC app/spdk_nvme_identify/identify.o 00:03:11.875 CC app/nvmf_tgt/nvmf_main.o 00:03:11.875 CC app/iscsi_tgt/iscsi_tgt.o 00:03:11.875 CC app/spdk_tgt/spdk_tgt.o 00:03:11.875 CC examples/util/zipf/zipf.o 00:03:11.875 CC test/thread/poller_perf/poller_perf.o 00:03:11.875 LINK spdk_lspci 00:03:12.133 LINK nvmf_tgt 00:03:12.133 LINK zipf 00:03:12.133 LINK iscsi_tgt 00:03:12.133 LINK poller_perf 00:03:12.133 LINK spdk_trace_record 00:03:12.133 LINK spdk_tgt 00:03:12.133 LINK spdk_trace 00:03:12.133 CC app/spdk_nvme_discover/discovery_aer.o 00:03:12.133 CC app/spdk_top/spdk_top.o 00:03:12.391 TEST_HEADER include/spdk/accel.h 00:03:12.391 TEST_HEADER include/spdk/accel_module.h 00:03:12.391 
TEST_HEADER include/spdk/assert.h 00:03:12.391 TEST_HEADER include/spdk/barrier.h 00:03:12.391 TEST_HEADER include/spdk/base64.h 00:03:12.391 TEST_HEADER include/spdk/bdev.h 00:03:12.391 TEST_HEADER include/spdk/bdev_module.h 00:03:12.391 TEST_HEADER include/spdk/bdev_zone.h 00:03:12.391 TEST_HEADER include/spdk/bit_array.h 00:03:12.391 TEST_HEADER include/spdk/bit_pool.h 00:03:12.391 TEST_HEADER include/spdk/blob_bdev.h 00:03:12.391 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:12.391 TEST_HEADER include/spdk/blobfs.h 00:03:12.391 TEST_HEADER include/spdk/blob.h 00:03:12.391 TEST_HEADER include/spdk/conf.h 00:03:12.391 TEST_HEADER include/spdk/config.h 00:03:12.391 CC examples/ioat/perf/perf.o 00:03:12.391 TEST_HEADER include/spdk/cpuset.h 00:03:12.391 TEST_HEADER include/spdk/crc16.h 00:03:12.391 TEST_HEADER include/spdk/crc32.h 00:03:12.391 TEST_HEADER include/spdk/crc64.h 00:03:12.391 TEST_HEADER include/spdk/dif.h 00:03:12.391 TEST_HEADER include/spdk/dma.h 00:03:12.391 TEST_HEADER include/spdk/endian.h 00:03:12.391 TEST_HEADER include/spdk/env_dpdk.h 00:03:12.391 TEST_HEADER include/spdk/env.h 00:03:12.391 TEST_HEADER include/spdk/event.h 00:03:12.391 TEST_HEADER include/spdk/fd_group.h 00:03:12.391 TEST_HEADER include/spdk/fd.h 00:03:12.391 TEST_HEADER include/spdk/file.h 00:03:12.391 TEST_HEADER include/spdk/fsdev.h 00:03:12.391 TEST_HEADER include/spdk/fsdev_module.h 00:03:12.391 LINK spdk_nvme_discover 00:03:12.391 TEST_HEADER include/spdk/ftl.h 00:03:12.391 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:12.391 TEST_HEADER include/spdk/gpt_spec.h 00:03:12.391 TEST_HEADER include/spdk/hexlify.h 00:03:12.391 TEST_HEADER include/spdk/histogram_data.h 00:03:12.391 CC test/dma/test_dma/test_dma.o 00:03:12.391 TEST_HEADER include/spdk/idxd.h 00:03:12.391 TEST_HEADER include/spdk/idxd_spec.h 00:03:12.391 TEST_HEADER include/spdk/init.h 00:03:12.391 TEST_HEADER include/spdk/ioat.h 00:03:12.391 TEST_HEADER include/spdk/ioat_spec.h 00:03:12.391 TEST_HEADER include/spdk/iscsi_spec.h 00:03:12.391 TEST_HEADER include/spdk/json.h 00:03:12.391 TEST_HEADER include/spdk/jsonrpc.h 00:03:12.391 TEST_HEADER include/spdk/keyring.h 00:03:12.391 TEST_HEADER include/spdk/keyring_module.h 00:03:12.391 TEST_HEADER include/spdk/likely.h 00:03:12.391 TEST_HEADER include/spdk/log.h 00:03:12.391 TEST_HEADER include/spdk/lvol.h 00:03:12.391 TEST_HEADER include/spdk/md5.h 00:03:12.391 TEST_HEADER include/spdk/memory.h 00:03:12.391 TEST_HEADER include/spdk/mmio.h 00:03:12.391 TEST_HEADER include/spdk/nbd.h 00:03:12.391 TEST_HEADER include/spdk/net.h 00:03:12.391 TEST_HEADER include/spdk/notify.h 00:03:12.391 TEST_HEADER include/spdk/nvme.h 00:03:12.391 TEST_HEADER include/spdk/nvme_intel.h 00:03:12.391 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:12.391 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:12.391 TEST_HEADER include/spdk/nvme_spec.h 00:03:12.391 TEST_HEADER include/spdk/nvme_zns.h 00:03:12.391 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:12.391 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:12.391 CC examples/vmd/lsvmd/lsvmd.o 00:03:12.391 TEST_HEADER include/spdk/nvmf.h 00:03:12.391 TEST_HEADER include/spdk/nvmf_spec.h 00:03:12.391 TEST_HEADER include/spdk/nvmf_transport.h 00:03:12.391 TEST_HEADER include/spdk/opal.h 00:03:12.391 TEST_HEADER include/spdk/opal_spec.h 00:03:12.391 TEST_HEADER include/spdk/pci_ids.h 00:03:12.391 TEST_HEADER include/spdk/pipe.h 00:03:12.391 CC test/app/bdev_svc/bdev_svc.o 00:03:12.391 TEST_HEADER include/spdk/queue.h 00:03:12.391 TEST_HEADER include/spdk/reduce.h 
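The LIB/SO/SYMLINK triples earlier in this build output follow the conventional versioned shared-library layout: a static archive, a versioned shared object, and an unversioned symlink used at link time. A minimal sketch of the equivalent shell steps, using libspdk_bdev_malloc as an illustrative module; the object lists, version number, and compiler flags are assumptions for illustration, not SPDK's actual make rules:

    # LIB: bundle the module's objects into a static archive
    ar rcs libspdk_bdev_malloc.a bdev_malloc.o bdev_malloc_rpc.o
    # SO: produce the versioned shared object (objects built with -fPIC)
    gcc -shared -o libspdk_bdev_malloc.so.6.0 bdev_malloc.o bdev_malloc_rpc.o
    # SYMLINK: unversioned name so -lspdk_bdev_malloc resolves at link time
    ln -sf libspdk_bdev_malloc.so.6.0 libspdk_bdev_malloc.so

The .so.6.0 suffix in the sketch matches the version that the SO/SYMLINK lines above report for each module.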
00:03:12.391 TEST_HEADER include/spdk/rpc.h 00:03:12.391 TEST_HEADER include/spdk/scheduler.h 00:03:12.391 CC examples/idxd/perf/perf.o 00:03:12.391 TEST_HEADER include/spdk/scsi.h 00:03:12.391 TEST_HEADER include/spdk/scsi_spec.h 00:03:12.392 TEST_HEADER include/spdk/sock.h 00:03:12.392 TEST_HEADER include/spdk/stdinc.h 00:03:12.392 TEST_HEADER include/spdk/string.h 00:03:12.392 TEST_HEADER include/spdk/thread.h 00:03:12.392 TEST_HEADER include/spdk/trace.h 00:03:12.392 TEST_HEADER include/spdk/trace_parser.h 00:03:12.392 TEST_HEADER include/spdk/tree.h 00:03:12.392 TEST_HEADER include/spdk/ublk.h 00:03:12.392 TEST_HEADER include/spdk/util.h 00:03:12.392 TEST_HEADER include/spdk/uuid.h 00:03:12.392 TEST_HEADER include/spdk/version.h 00:03:12.392 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:12.392 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:12.392 TEST_HEADER include/spdk/vhost.h 00:03:12.392 TEST_HEADER include/spdk/vmd.h 00:03:12.392 TEST_HEADER include/spdk/xor.h 00:03:12.392 TEST_HEADER include/spdk/zipf.h 00:03:12.392 CXX test/cpp_headers/accel.o 00:03:12.392 CXX test/cpp_headers/accel_module.o 00:03:12.392 LINK lsvmd 00:03:12.649 LINK ioat_perf 00:03:12.649 LINK bdev_svc 00:03:12.649 CXX test/cpp_headers/assert.o 00:03:12.649 LINK spdk_nvme_identify 00:03:12.649 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:12.649 CC examples/ioat/verify/verify.o 00:03:12.649 CC examples/vmd/led/led.o 00:03:12.649 LINK spdk_nvme_perf 00:03:12.649 LINK idxd_perf 00:03:12.906 CXX test/cpp_headers/barrier.o 00:03:12.906 CXX test/cpp_headers/base64.o 00:03:12.906 LINK led 00:03:12.906 CXX test/cpp_headers/bdev.o 00:03:12.906 LINK interrupt_tgt 00:03:12.906 LINK test_dma 00:03:12.906 LINK verify 00:03:12.906 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:12.906 CC test/app/histogram_perf/histogram_perf.o 00:03:13.164 CXX test/cpp_headers/bdev_module.o 00:03:13.164 CC test/app/jsoncat/jsoncat.o 00:03:13.164 CC test/env/vtophys/vtophys.o 00:03:13.164 CC test/event/event_perf/event_perf.o 00:03:13.164 LINK histogram_perf 00:03:13.164 CC test/env/mem_callbacks/mem_callbacks.o 00:03:13.164 CC test/event/reactor/reactor.o 00:03:13.164 LINK jsoncat 00:03:13.164 LINK spdk_top 00:03:13.164 CXX test/cpp_headers/bdev_zone.o 00:03:13.164 LINK vtophys 00:03:13.164 CC examples/thread/thread/thread_ex.o 00:03:13.164 LINK event_perf 00:03:13.164 CXX test/cpp_headers/bit_array.o 00:03:13.422 LINK reactor 00:03:13.422 CXX test/cpp_headers/bit_pool.o 00:03:13.422 LINK nvme_fuzz 00:03:13.422 CC test/rpc_client/rpc_client_test.o 00:03:13.422 CC app/spdk_dd/spdk_dd.o 00:03:13.422 CXX test/cpp_headers/blob_bdev.o 00:03:13.422 LINK thread 00:03:13.422 CC test/event/reactor_perf/reactor_perf.o 00:03:13.422 CC examples/sock/hello_world/hello_sock.o 00:03:13.681 CC test/accel/dif/dif.o 00:03:13.681 CC test/blobfs/mkfs/mkfs.o 00:03:13.681 LINK rpc_client_test 00:03:13.681 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:13.681 CXX test/cpp_headers/blobfs_bdev.o 00:03:13.681 LINK reactor_perf 00:03:13.681 LINK mem_callbacks 00:03:13.681 CXX test/cpp_headers/blobfs.o 00:03:13.681 LINK mkfs 00:03:13.681 LINK hello_sock 00:03:13.681 LINK spdk_dd 00:03:13.939 CC test/event/app_repeat/app_repeat.o 00:03:13.939 CXX test/cpp_headers/blob.o 00:03:13.939 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:13.939 CC test/lvol/esnap/esnap.o 00:03:13.939 CC test/event/scheduler/scheduler.o 00:03:13.939 CXX test/cpp_headers/conf.o 00:03:13.939 LINK app_repeat 00:03:13.939 CC examples/accel/perf/accel_perf.o 00:03:13.939 LINK 
env_dpdk_post_init 00:03:13.939 CXX test/cpp_headers/config.o 00:03:14.198 CXX test/cpp_headers/cpuset.o 00:03:14.198 LINK scheduler 00:03:14.198 CC app/fio/nvme/fio_plugin.o 00:03:14.198 CC test/nvme/aer/aer.o 00:03:14.198 CXX test/cpp_headers/crc16.o 00:03:14.198 CC test/app/stub/stub.o 00:03:14.198 CC test/env/memory/memory_ut.o 00:03:14.198 LINK dif 00:03:14.456 CXX test/cpp_headers/crc32.o 00:03:14.456 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:14.456 LINK stub 00:03:14.456 LINK aer 00:03:14.456 CXX test/cpp_headers/crc64.o 00:03:14.456 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:14.456 LINK accel_perf 00:03:14.456 CXX test/cpp_headers/dif.o 00:03:14.456 CC test/nvme/reset/reset.o 00:03:14.715 CC test/nvme/sgl/sgl.o 00:03:14.715 CXX test/cpp_headers/dma.o 00:03:14.715 LINK spdk_nvme 00:03:14.715 CXX test/cpp_headers/endian.o 00:03:14.715 LINK reset 00:03:14.715 CC examples/blob/hello_world/hello_blob.o 00:03:14.972 LINK sgl 00:03:14.972 LINK vhost_fuzz 00:03:14.972 CXX test/cpp_headers/env_dpdk.o 00:03:14.972 CC test/bdev/bdevio/bdevio.o 00:03:14.972 CC app/fio/bdev/fio_plugin.o 00:03:14.972 CXX test/cpp_headers/env.o 00:03:14.972 LINK hello_blob 00:03:14.972 CXX test/cpp_headers/event.o 00:03:14.972 CC test/nvme/e2edp/nvme_dp.o 00:03:14.972 CC examples/blob/cli/blobcli.o 00:03:14.972 CC test/nvme/overhead/overhead.o 00:03:15.230 CXX test/cpp_headers/fd_group.o 00:03:15.230 CXX test/cpp_headers/fd.o 00:03:15.230 LINK bdevio 00:03:15.230 LINK iscsi_fuzz 00:03:15.230 LINK nvme_dp 00:03:15.230 LINK memory_ut 00:03:15.230 CXX test/cpp_headers/file.o 00:03:15.230 LINK spdk_bdev 00:03:15.488 CXX test/cpp_headers/fsdev.o 00:03:15.488 LINK overhead 00:03:15.488 CC test/nvme/err_injection/err_injection.o 00:03:15.488 CXX test/cpp_headers/fsdev_module.o 00:03:15.488 CXX test/cpp_headers/ftl.o 00:03:15.488 CXX test/cpp_headers/fuse_dispatcher.o 00:03:15.488 CC test/env/pci/pci_ut.o 00:03:15.488 CC test/nvme/startup/startup.o 00:03:15.488 CC app/vhost/vhost.o 00:03:15.488 LINK blobcli 00:03:15.488 LINK err_injection 00:03:15.488 CC test/nvme/reserve/reserve.o 00:03:15.488 CXX test/cpp_headers/gpt_spec.o 00:03:15.756 CC test/nvme/simple_copy/simple_copy.o 00:03:15.756 CC test/nvme/connect_stress/connect_stress.o 00:03:15.756 LINK startup 00:03:15.756 LINK vhost 00:03:15.756 CXX test/cpp_headers/hexlify.o 00:03:15.756 CC test/nvme/boot_partition/boot_partition.o 00:03:15.756 LINK reserve 00:03:15.756 CXX test/cpp_headers/histogram_data.o 00:03:15.756 LINK simple_copy 00:03:15.756 LINK connect_stress 00:03:15.756 CC examples/nvme/hello_world/hello_world.o 00:03:16.012 LINK pci_ut 00:03:16.012 CXX test/cpp_headers/idxd.o 00:03:16.012 LINK boot_partition 00:03:16.012 CXX test/cpp_headers/idxd_spec.o 00:03:16.012 CC test/nvme/compliance/nvme_compliance.o 00:03:16.012 CC test/nvme/fused_ordering/fused_ordering.o 00:03:16.012 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:16.012 CXX test/cpp_headers/init.o 00:03:16.012 LINK hello_world 00:03:16.012 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:16.012 CC examples/bdev/hello_world/hello_bdev.o 00:03:16.272 CC examples/nvme/reconnect/reconnect.o 00:03:16.272 CC examples/bdev/bdevperf/bdevperf.o 00:03:16.272 LINK fused_ordering 00:03:16.272 CXX test/cpp_headers/ioat.o 00:03:16.272 CXX test/cpp_headers/ioat_spec.o 00:03:16.272 LINK nvme_compliance 00:03:16.272 CXX test/cpp_headers/iscsi_spec.o 00:03:16.272 LINK doorbell_aers 00:03:16.272 LINK hello_fsdev 00:03:16.272 CXX test/cpp_headers/json.o 00:03:16.272 CXX test/cpp_headers/jsonrpc.o 
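The TEST_HEADER list above, together with the CXX test/cpp_headers/*.o compiles running through this part of the log, builds one tiny C++ translation unit per public SPDK header. A header that only compiles when other headers happen to precede it, or that lacks proper extern "C" guards, fails here rather than in a consumer's build. A sketch of the idea, assuming a hypothetical driver loop; the include path and compiler flags are illustrative, not the harness's exact invocation:

    # Compile a translation unit that includes exactly one public header;
    # failure flags a header that is not self-contained or not C++-safe.
    for hdr in include/spdk/*.h; do
        name=$(basename "$hdr" .h)
        echo "#include <spdk/${name}.h>" |
            g++ -x c++ -std=c++11 -Iinclude -c -o "/tmp/${name}.o" - ||
            echo "not self-contained: $hdr"
    done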
00:03:16.272 LINK hello_bdev 00:03:16.272 CXX test/cpp_headers/keyring.o 00:03:16.532 CC test/nvme/fdp/fdp.o 00:03:16.532 CXX test/cpp_headers/keyring_module.o 00:03:16.532 CXX test/cpp_headers/likely.o 00:03:16.532 LINK reconnect 00:03:16.532 CXX test/cpp_headers/log.o 00:03:16.532 CC test/nvme/cuse/cuse.o 00:03:16.532 CXX test/cpp_headers/lvol.o 00:03:16.532 CXX test/cpp_headers/md5.o 00:03:16.532 CXX test/cpp_headers/memory.o 00:03:16.532 CXX test/cpp_headers/mmio.o 00:03:16.532 CXX test/cpp_headers/nbd.o 00:03:16.532 CXX test/cpp_headers/net.o 00:03:16.532 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:16.532 CXX test/cpp_headers/notify.o 00:03:16.790 CXX test/cpp_headers/nvme.o 00:03:16.790 LINK fdp 00:03:16.790 CXX test/cpp_headers/nvme_intel.o 00:03:16.790 CXX test/cpp_headers/nvme_ocssd.o 00:03:16.791 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:16.791 CXX test/cpp_headers/nvme_spec.o 00:03:16.791 CXX test/cpp_headers/nvme_zns.o 00:03:16.791 CXX test/cpp_headers/nvmf_cmd.o 00:03:16.791 CC examples/nvme/arbitration/arbitration.o 00:03:16.791 LINK bdevperf 00:03:17.048 CC examples/nvme/hotplug/hotplug.o 00:03:17.048 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:17.048 CC examples/nvme/abort/abort.o 00:03:17.048 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:17.048 LINK nvme_manage 00:03:17.048 CXX test/cpp_headers/nvmf.o 00:03:17.048 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:17.048 LINK cmb_copy 00:03:17.048 LINK arbitration 00:03:17.048 CXX test/cpp_headers/nvmf_spec.o 00:03:17.406 LINK hotplug 00:03:17.406 CXX test/cpp_headers/nvmf_transport.o 00:03:17.406 CXX test/cpp_headers/opal.o 00:03:17.406 CXX test/cpp_headers/opal_spec.o 00:03:17.406 CXX test/cpp_headers/pci_ids.o 00:03:17.406 LINK pmr_persistence 00:03:17.406 CXX test/cpp_headers/pipe.o 00:03:17.406 CXX test/cpp_headers/queue.o 00:03:17.406 CXX test/cpp_headers/reduce.o 00:03:17.406 CXX test/cpp_headers/rpc.o 00:03:17.406 CXX test/cpp_headers/scheduler.o 00:03:17.406 CXX test/cpp_headers/scsi.o 00:03:17.406 LINK abort 00:03:17.406 CXX test/cpp_headers/scsi_spec.o 00:03:17.406 CXX test/cpp_headers/sock.o 00:03:17.664 CXX test/cpp_headers/stdinc.o 00:03:17.664 CXX test/cpp_headers/string.o 00:03:17.664 CXX test/cpp_headers/thread.o 00:03:17.664 LINK cuse 00:03:17.664 CXX test/cpp_headers/trace.o 00:03:17.664 CXX test/cpp_headers/trace_parser.o 00:03:17.664 CXX test/cpp_headers/tree.o 00:03:17.664 CXX test/cpp_headers/ublk.o 00:03:17.664 CXX test/cpp_headers/util.o 00:03:17.664 CXX test/cpp_headers/uuid.o 00:03:17.664 CXX test/cpp_headers/version.o 00:03:17.664 CXX test/cpp_headers/vfio_user_pci.o 00:03:17.664 CXX test/cpp_headers/vfio_user_spec.o 00:03:17.664 CXX test/cpp_headers/vhost.o 00:03:17.664 CXX test/cpp_headers/vmd.o 00:03:17.664 CC examples/nvmf/nvmf/nvmf.o 00:03:17.664 CXX test/cpp_headers/xor.o 00:03:17.664 CXX test/cpp_headers/zipf.o 00:03:17.925 LINK nvmf 00:03:18.184 LINK esnap 00:03:18.442 00:03:18.442 real 1m5.543s 00:03:18.442 user 6m11.696s 00:03:18.442 sys 1m9.295s 00:03:18.442 23:24:06 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:18.442 ************************************ 00:03:18.442 END TEST make 00:03:18.442 ************************************ 00:03:18.442 23:24:06 make -- common/autotest_common.sh@10 -- $ set +x 00:03:18.700 23:24:06 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:18.700 23:24:06 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:18.700 23:24:06 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:18.700 23:24:06 -- pm/common@42 
-- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:18.700 23:24:06 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:18.700 23:24:06 -- pm/common@44 -- $ pid=5075 00:03:18.700 23:24:06 -- pm/common@50 -- $ kill -TERM 5075 00:03:18.700 23:24:06 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:18.700 23:24:06 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:18.700 23:24:06 -- pm/common@44 -- $ pid=5076 00:03:18.700 23:24:06 -- pm/common@50 -- $ kill -TERM 5076 00:03:18.700 23:24:06 -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:03:18.700 23:24:06 -- common/autotest_common.sh@1681 -- # lcov --version 00:03:18.700 23:24:06 -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:03:18.700 23:24:06 -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:03:18.700 23:24:06 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:18.700 23:24:06 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:18.700 23:24:06 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:18.700 23:24:06 -- scripts/common.sh@336 -- # IFS=.-: 00:03:18.700 23:24:06 -- scripts/common.sh@336 -- # read -ra ver1 00:03:18.700 23:24:06 -- scripts/common.sh@337 -- # IFS=.-: 00:03:18.700 23:24:06 -- scripts/common.sh@337 -- # read -ra ver2 00:03:18.700 23:24:06 -- scripts/common.sh@338 -- # local 'op=<' 00:03:18.700 23:24:06 -- scripts/common.sh@340 -- # ver1_l=2 00:03:18.700 23:24:06 -- scripts/common.sh@341 -- # ver2_l=1 00:03:18.701 23:24:06 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:18.701 23:24:06 -- scripts/common.sh@344 -- # case "$op" in 00:03:18.701 23:24:06 -- scripts/common.sh@345 -- # : 1 00:03:18.701 23:24:06 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:18.701 23:24:06 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:18.701 23:24:06 -- scripts/common.sh@365 -- # decimal 1 00:03:18.701 23:24:06 -- scripts/common.sh@353 -- # local d=1 00:03:18.701 23:24:06 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:18.701 23:24:06 -- scripts/common.sh@355 -- # echo 1 00:03:18.701 23:24:06 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:18.701 23:24:06 -- scripts/common.sh@366 -- # decimal 2 00:03:18.701 23:24:06 -- scripts/common.sh@353 -- # local d=2 00:03:18.701 23:24:06 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:18.701 23:24:06 -- scripts/common.sh@355 -- # echo 2 00:03:18.701 23:24:06 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:18.701 23:24:06 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:18.701 23:24:06 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:18.701 23:24:06 -- scripts/common.sh@368 -- # return 0 00:03:18.701 23:24:06 -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:18.701 23:24:06 -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:03:18.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:18.701 --rc genhtml_branch_coverage=1 00:03:18.701 --rc genhtml_function_coverage=1 00:03:18.701 --rc genhtml_legend=1 00:03:18.701 --rc geninfo_all_blocks=1 00:03:18.701 --rc geninfo_unexecuted_blocks=1 00:03:18.701 00:03:18.701 ' 00:03:18.701 23:24:06 -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:03:18.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:18.701 --rc genhtml_branch_coverage=1 00:03:18.701 --rc genhtml_function_coverage=1 00:03:18.701 --rc genhtml_legend=1 00:03:18.701 --rc geninfo_all_blocks=1 00:03:18.701 --rc geninfo_unexecuted_blocks=1 00:03:18.701 00:03:18.701 ' 00:03:18.701 23:24:06 -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:03:18.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:18.701 --rc genhtml_branch_coverage=1 00:03:18.701 --rc genhtml_function_coverage=1 00:03:18.701 --rc genhtml_legend=1 00:03:18.701 --rc geninfo_all_blocks=1 00:03:18.701 --rc geninfo_unexecuted_blocks=1 00:03:18.701 00:03:18.701 ' 00:03:18.701 23:24:06 -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:03:18.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:18.701 --rc genhtml_branch_coverage=1 00:03:18.701 --rc genhtml_function_coverage=1 00:03:18.701 --rc genhtml_legend=1 00:03:18.701 --rc geninfo_all_blocks=1 00:03:18.701 --rc geninfo_unexecuted_blocks=1 00:03:18.701 00:03:18.701 ' 00:03:18.701 23:24:06 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:18.701 23:24:06 -- nvmf/common.sh@7 -- # uname -s 00:03:18.701 23:24:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:18.701 23:24:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:18.701 23:24:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:18.701 23:24:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:18.701 23:24:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:18.701 23:24:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:18.701 23:24:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:18.701 23:24:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:18.701 23:24:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:18.701 23:24:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:18.701 23:24:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5ac6952c-8883-403a-8f1d-45bf473106db 00:03:18.701 
23:24:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=5ac6952c-8883-403a-8f1d-45bf473106db 00:03:18.701 23:24:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:18.701 23:24:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:18.701 23:24:06 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:18.701 23:24:06 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:18.701 23:24:06 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:18.701 23:24:06 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:18.701 23:24:06 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:18.701 23:24:06 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:18.701 23:24:06 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:18.701 23:24:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:18.701 23:24:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:18.701 23:24:06 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:18.701 23:24:06 -- paths/export.sh@5 -- # export PATH 00:03:18.701 23:24:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:18.701 23:24:06 -- nvmf/common.sh@51 -- # : 0 00:03:18.701 23:24:06 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:18.701 23:24:06 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:18.701 23:24:06 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:18.701 23:24:06 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:18.701 23:24:06 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:18.701 23:24:06 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:18.701 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:18.701 23:24:06 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:18.701 23:24:06 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:18.701 23:24:06 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:18.701 23:24:06 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:18.701 23:24:06 -- spdk/autotest.sh@32 -- # uname -s 00:03:18.701 23:24:06 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:18.701 23:24:06 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:18.701 23:24:06 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:18.701 23:24:06 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:18.701 23:24:06 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:18.701 23:24:06 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:18.701 23:24:06 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:18.701 23:24:06 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:18.701 23:24:06 -- spdk/autotest.sh@48 -- # udevadm_pid=54643 00:03:18.701 23:24:06 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:18.701 23:24:06 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:18.701 23:24:06 -- pm/common@17 -- # local monitor 00:03:18.701 23:24:06 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:18.701 23:24:06 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:18.701 23:24:06 -- pm/common@25 -- # sleep 1 00:03:18.701 23:24:06 -- pm/common@21 -- # date +%s 00:03:18.701 23:24:06 -- pm/common@21 -- # date +%s 00:03:18.701 23:24:06 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1727565846 00:03:18.701 23:24:06 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1727565846 00:03:18.701 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1727565846_collect-cpu-load.pm.log 00:03:18.701 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1727565846_collect-vmstat.pm.log 00:03:20.075 23:24:07 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:20.075 23:24:07 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:20.075 23:24:07 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:20.075 23:24:07 -- common/autotest_common.sh@10 -- # set +x 00:03:20.075 23:24:07 -- spdk/autotest.sh@59 -- # create_test_list 00:03:20.075 23:24:07 -- common/autotest_common.sh@748 -- # xtrace_disable 00:03:20.075 23:24:07 -- common/autotest_common.sh@10 -- # set +x 00:03:20.075 23:24:07 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:20.075 23:24:07 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:20.075 23:24:07 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:20.075 23:24:07 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:20.075 23:24:07 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:20.075 23:24:07 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:20.075 23:24:07 -- common/autotest_common.sh@1455 -- # uname 00:03:20.075 23:24:07 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:20.075 23:24:07 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:20.075 23:24:07 -- common/autotest_common.sh@1475 -- # uname 00:03:20.075 23:24:07 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:20.075 23:24:07 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:20.075 23:24:07 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:20.075 lcov: LCOV version 1.15 00:03:20.075 23:24:07 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:34.948 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:34.948 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:03:47.168 23:24:35 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:47.168 23:24:35 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:47.168 23:24:35 -- common/autotest_common.sh@10 -- # set +x 00:03:47.168 23:24:35 -- spdk/autotest.sh@78 -- # rm -f 00:03:47.168 23:24:35 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:47.740 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:48.312 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:03:48.312 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:03:48.312 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:03:48.312 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:03:48.312 23:24:36 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:03:48.312 23:24:36 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:03:48.312 23:24:36 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:03:48.312 23:24:36 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:03:48.312 23:24:36 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:48.312 23:24:36 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:03:48.312 23:24:36 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:03:48.312 23:24:36 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:48.312 23:24:36 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:48.312 23:24:36 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:48.312 23:24:36 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:03:48.312 23:24:36 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:03:48.312 23:24:36 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:48.312 23:24:36 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:48.312 23:24:36 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:48.312 23:24:36 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:03:48.312 23:24:36 -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:03:48.312 23:24:36 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:03:48.312 23:24:36 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:48.312 23:24:36 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:48.312 23:24:36 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:03:48.312 23:24:36 -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:03:48.312 23:24:36 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:03:48.312 23:24:36 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:48.312 23:24:36 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:48.312 23:24:36 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:03:48.312 23:24:36 -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:03:48.312 23:24:36 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:03:48.312 23:24:36 
-- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:48.312 23:24:36 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:48.312 23:24:36 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:03:48.312 23:24:36 -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:03:48.312 23:24:36 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:03:48.312 23:24:36 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:48.312 23:24:36 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:48.312 23:24:36 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:03:48.312 23:24:36 -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:03:48.312 23:24:36 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:03:48.312 23:24:36 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:48.312 23:24:36 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:48.312 23:24:36 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:48.312 23:24:36 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:48.312 23:24:36 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:48.312 23:24:36 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:48.312 23:24:36 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:48.312 No valid GPT data, bailing 00:03:48.312 23:24:36 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:48.312 23:24:36 -- scripts/common.sh@394 -- # pt= 00:03:48.312 23:24:36 -- scripts/common.sh@395 -- # return 1 00:03:48.312 23:24:36 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:48.312 1+0 records in 00:03:48.312 1+0 records out 00:03:48.312 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0266579 s, 39.3 MB/s 00:03:48.312 23:24:36 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:48.312 23:24:36 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:48.312 23:24:36 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:03:48.312 23:24:36 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:03:48.312 23:24:36 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:03:48.312 No valid GPT data, bailing 00:03:48.312 23:24:36 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:48.312 23:24:36 -- scripts/common.sh@394 -- # pt= 00:03:48.312 23:24:36 -- scripts/common.sh@395 -- # return 1 00:03:48.312 23:24:36 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:03:48.312 1+0 records in 00:03:48.312 1+0 records out 00:03:48.312 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00605816 s, 173 MB/s 00:03:48.312 23:24:36 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:48.312 23:24:36 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:48.312 23:24:36 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:03:48.312 23:24:36 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:03:48.312 23:24:36 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:03:48.573 No valid GPT data, bailing 00:03:48.573 23:24:36 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:03:48.573 23:24:36 -- scripts/common.sh@394 -- # pt= 00:03:48.573 23:24:36 -- scripts/common.sh@395 -- # return 1 00:03:48.573 23:24:36 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:03:48.573 1+0 
records in 00:03:48.573 1+0 records out 00:03:48.573 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00450149 s, 233 MB/s 00:03:48.573 23:24:36 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:48.573 23:24:36 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:48.573 23:24:36 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n2 00:03:48.573 23:24:36 -- scripts/common.sh@381 -- # local block=/dev/nvme2n2 pt 00:03:48.573 23:24:36 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:03:48.573 No valid GPT data, bailing 00:03:48.573 23:24:36 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:03:48.573 23:24:36 -- scripts/common.sh@394 -- # pt= 00:03:48.573 23:24:36 -- scripts/common.sh@395 -- # return 1 00:03:48.573 23:24:36 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:03:48.573 1+0 records in 00:03:48.573 1+0 records out 00:03:48.573 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00621309 s, 169 MB/s 00:03:48.573 23:24:36 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:48.573 23:24:36 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:48.573 23:24:36 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n3 00:03:48.573 23:24:36 -- scripts/common.sh@381 -- # local block=/dev/nvme2n3 pt 00:03:48.573 23:24:36 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:03:48.573 No valid GPT data, bailing 00:03:48.573 23:24:36 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:03:48.573 23:24:36 -- scripts/common.sh@394 -- # pt= 00:03:48.573 23:24:36 -- scripts/common.sh@395 -- # return 1 00:03:48.573 23:24:36 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:03:48.573 1+0 records in 00:03:48.573 1+0 records out 00:03:48.574 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00680158 s, 154 MB/s 00:03:48.574 23:24:36 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:48.574 23:24:36 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:48.574 23:24:36 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:03:48.574 23:24:36 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:03:48.574 23:24:36 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:03:48.835 No valid GPT data, bailing 00:03:48.835 23:24:36 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:03:48.835 23:24:36 -- scripts/common.sh@394 -- # pt= 00:03:48.835 23:24:36 -- scripts/common.sh@395 -- # return 1 00:03:48.835 23:24:36 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:03:48.835 1+0 records in 00:03:48.835 1+0 records out 00:03:48.835 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00618666 s, 169 MB/s 00:03:48.835 23:24:36 -- spdk/autotest.sh@105 -- # sync 00:03:48.835 23:24:36 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:48.835 23:24:36 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:48.835 23:24:36 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:50.750 23:24:38 -- spdk/autotest.sh@111 -- # uname -s 00:03:50.750 23:24:38 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:50.750 23:24:38 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:03:50.750 23:24:38 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:51.022 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:51.596 
Hugepages 00:03:51.596 node hugesize free / total 00:03:51.596 node0 1048576kB 0 / 0 00:03:51.596 node0 2048kB 0 / 0 00:03:51.596 00:03:51.596 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:51.596 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:03:51.858 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:03:51.858 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:03:51.858 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:03:51.858 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:03:51.858 23:24:40 -- spdk/autotest.sh@117 -- # uname -s 00:03:51.858 23:24:40 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:03:51.858 23:24:40 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:03:51.858 23:24:40 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:52.430 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:53.003 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:03:53.003 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:03:53.003 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:03:53.264 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:03:53.264 23:24:41 -- common/autotest_common.sh@1515 -- # sleep 1 00:03:54.208 23:24:42 -- common/autotest_common.sh@1516 -- # bdfs=() 00:03:54.208 23:24:42 -- common/autotest_common.sh@1516 -- # local bdfs 00:03:54.208 23:24:42 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:03:54.208 23:24:42 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:03:54.208 23:24:42 -- common/autotest_common.sh@1496 -- # bdfs=() 00:03:54.208 23:24:42 -- common/autotest_common.sh@1496 -- # local bdfs 00:03:54.208 23:24:42 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:54.208 23:24:42 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:03:54.208 23:24:42 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:03:54.208 23:24:42 -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:03:54.208 23:24:42 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:03:54.208 23:24:42 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:54.470 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:54.732 Waiting for block devices as requested 00:03:54.732 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:03:54.993 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:03:54.994 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:03:54.994 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:04:00.286 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:04:00.286 23:24:48 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:00.286 23:24:48 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:00.286 23:24:48 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:00.286 23:24:48 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:04:00.286 23:24:48 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:00.286 23:24:48 -- common/autotest_common.sh@1486 -- # 
[[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:00.286 23:24:48 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:00.286 23:24:48 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme1 00:04:00.286 23:24:48 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 00:04:00.286 23:24:48 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:04:00.286 23:24:48 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:04:00.286 23:24:48 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:00.286 23:24:48 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:00.286 23:24:48 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:04:00.286 23:24:48 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:00.286 23:24:48 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:00.286 23:24:48 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:00.286 23:24:48 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:04:00.286 23:24:48 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:00.286 23:24:48 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:00.286 23:24:48 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:04:00.286 23:24:48 -- common/autotest_common.sh@1541 -- # continue 00:04:00.286 23:24:48 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:00.286 23:24:48 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:00.286 23:24:48 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:00.286 23:24:48 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:04:00.286 23:24:48 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:00.286 23:24:48 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:00.286 23:24:48 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:00.286 23:24:48 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:04:00.286 23:24:48 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:04:00.286 23:24:48 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:04:00.286 23:24:48 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:04:00.286 23:24:48 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:00.286 23:24:48 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:00.286 23:24:48 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:04:00.286 23:24:48 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:00.286 23:24:48 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:00.286 23:24:48 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:00.286 23:24:48 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:04:00.286 23:24:48 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:00.286 23:24:48 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:00.286 23:24:48 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:04:00.286 23:24:48 -- common/autotest_common.sh@1541 -- # continue 00:04:00.286 23:24:48 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:00.286 23:24:48 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:04:00.286 23:24:48 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 
/sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:00.286 23:24:48 -- common/autotest_common.sh@1485 -- # grep 0000:00:12.0/nvme/nvme 00:04:00.286 23:24:48 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:04:00.286 23:24:48 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:04:00.286 23:24:48 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:04:00.286 23:24:48 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme2 00:04:00.286 23:24:48 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme2 00:04:00.286 23:24:48 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme2 ]] 00:04:00.286 23:24:48 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme2 00:04:00.286 23:24:48 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:00.286 23:24:48 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:00.286 23:24:48 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:04:00.286 23:24:48 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:00.286 23:24:48 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:00.286 23:24:48 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:00.286 23:24:48 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme2 00:04:00.286 23:24:48 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:00.286 23:24:48 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:00.286 23:24:48 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:04:00.286 23:24:48 -- common/autotest_common.sh@1541 -- # continue 00:04:00.286 23:24:48 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:00.286 23:24:48 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:04:00.286 23:24:48 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:00.286 23:24:48 -- common/autotest_common.sh@1485 -- # grep 0000:00:13.0/nvme/nvme 00:04:00.286 23:24:48 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:04:00.286 23:24:48 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:04:00.286 23:24:48 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:04:00.286 23:24:48 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme3 00:04:00.286 23:24:48 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme3 00:04:00.286 23:24:48 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme3 ]] 00:04:00.286 23:24:48 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme3 00:04:00.286 23:24:48 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:00.286 23:24:48 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:00.286 23:24:48 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:04:00.286 23:24:48 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:00.287 23:24:48 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:00.287 23:24:48 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:00.287 23:24:48 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme3 00:04:00.287 23:24:48 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:00.287 23:24:48 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:00.287 23:24:48 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 
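The pre_cleanup pass traced through here (completing just below) runs the same sanity check against each controller: read OACS from nvme id-ctrl, confirm namespace management is supported, and check that no unallocated capacity (unvmcap) is left over from earlier runs. A condensed sketch of the traced logic; the 0x8 mask reflects OACS bit 3 (Namespace Management) in the NVMe spec, and the cleanup branch is an assumption about what the harness would do on a non-zero unvmcap:

    for ctrlr in /dev/nvme0 /dev/nvme1 /dev/nvme2 /dev/nvme3; do
        oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)  # e.g. 0x12a
        (( oacs & 0x8 )) || continue    # skip: no namespace management
        unvmcap=$(nvme id-ctrl "$ctrlr" | grep unvmcap | cut -d: -f2)
        (( unvmcap == 0 )) && continue  # clean: nothing unallocated
        # a non-zero unvmcap would trigger a namespace revert/cleanup here
    done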
00:04:00.287 23:24:48 -- common/autotest_common.sh@1541 -- # continue 00:04:00.287 23:24:48 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:00.287 23:24:48 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:00.287 23:24:48 -- common/autotest_common.sh@10 -- # set +x 00:04:00.287 23:24:48 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:00.287 23:24:48 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:00.287 23:24:48 -- common/autotest_common.sh@10 -- # set +x 00:04:00.287 23:24:48 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:00.858 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:01.428 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:01.428 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:01.428 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:04:01.428 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:04:01.428 23:24:49 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:01.428 23:24:49 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:01.428 23:24:49 -- common/autotest_common.sh@10 -- # set +x 00:04:01.428 23:24:49 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:01.428 23:24:49 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:04:01.428 23:24:49 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:04:01.428 23:24:49 -- common/autotest_common.sh@1561 -- # bdfs=() 00:04:01.428 23:24:49 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:04:01.428 23:24:49 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:04:01.428 23:24:49 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:04:01.428 23:24:49 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:04:01.428 23:24:49 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:01.428 23:24:49 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:01.428 23:24:49 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:01.428 23:24:49 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:04:01.428 23:24:49 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:01.428 23:24:49 -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:04:01.428 23:24:49 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:04:01.428 23:24:49 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:01.428 23:24:49 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:01.428 23:24:49 -- common/autotest_common.sh@1564 -- # device=0x0010 00:04:01.428 23:24:49 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:01.428 23:24:49 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:01.428 23:24:49 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:01.429 23:24:49 -- common/autotest_common.sh@1564 -- # device=0x0010 00:04:01.429 23:24:49 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:01.429 23:24:49 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:01.429 23:24:49 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:04:01.429 23:24:49 -- common/autotest_common.sh@1564 -- # device=0x0010 00:04:01.429 23:24:49 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 
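opal_revert_cleanup, traced above and continuing below, filters the detected NVMe bdfs by PCI device id, keeping only 0x0a54 parts; the QEMU-emulated controllers in this VM all report 0x0010, so the list stays empty and no Opal revert is issued. A sketch of that filter; identifying 0x0a54 as the Intel DC P4500/P4600-family device id is an assumption based on common PCI ids, not something the log states:

    opal_bdfs=()
    for bdf in 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0; do
        device=$(cat "/sys/bus/pci/devices/$bdf/device")
        [[ $device == 0x0a54 ]] && opal_bdfs+=("$bdf")  # physical DC drive
    done
    (( ${#opal_bdfs[@]} )) || echo "no Opal-capable drives; skipping revert"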
00:04:01.429 23:24:49 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:01.429 23:24:49 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:04:01.429 23:24:49 -- common/autotest_common.sh@1564 -- # device=0x0010 00:04:01.429 23:24:49 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:01.429 23:24:49 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:04:01.429 23:24:49 -- common/autotest_common.sh@1570 -- # return 0 00:04:01.429 23:24:49 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:04:01.429 23:24:49 -- common/autotest_common.sh@1578 -- # return 0 00:04:01.429 23:24:49 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:01.429 23:24:49 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:01.429 23:24:49 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:01.429 23:24:49 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:01.429 23:24:49 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:01.429 23:24:49 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:01.429 23:24:49 -- common/autotest_common.sh@10 -- # set +x 00:04:01.429 23:24:49 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:01.429 23:24:49 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:01.429 23:24:49 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:01.429 23:24:49 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:01.429 23:24:49 -- common/autotest_common.sh@10 -- # set +x 00:04:01.429 ************************************ 00:04:01.429 START TEST env 00:04:01.429 ************************************ 00:04:01.429 23:24:49 env -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:01.690 * Looking for test storage... 00:04:01.690 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:01.690 23:24:49 env -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:01.690 23:24:49 env -- common/autotest_common.sh@1681 -- # lcov --version 00:04:01.690 23:24:49 env -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:01.690 23:24:49 env -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:01.690 23:24:49 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:01.690 23:24:49 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:01.690 23:24:49 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:01.690 23:24:49 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:01.690 23:24:49 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:01.690 23:24:49 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:01.690 23:24:49 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:01.690 23:24:49 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:01.690 23:24:49 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:01.690 23:24:49 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:01.690 23:24:49 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:01.690 23:24:49 env -- scripts/common.sh@344 -- # case "$op" in 00:04:01.690 23:24:49 env -- scripts/common.sh@345 -- # : 1 00:04:01.690 23:24:49 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:01.690 23:24:49 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:01.690 23:24:49 env -- scripts/common.sh@365 -- # decimal 1 00:04:01.690 23:24:49 env -- scripts/common.sh@353 -- # local d=1 00:04:01.690 23:24:49 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:01.690 23:24:49 env -- scripts/common.sh@355 -- # echo 1 00:04:01.690 23:24:49 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:01.690 23:24:49 env -- scripts/common.sh@366 -- # decimal 2 00:04:01.690 23:24:49 env -- scripts/common.sh@353 -- # local d=2 00:04:01.690 23:24:49 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:01.690 23:24:49 env -- scripts/common.sh@355 -- # echo 2 00:04:01.690 23:24:49 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:01.690 23:24:49 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:01.690 23:24:49 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:01.690 23:24:49 env -- scripts/common.sh@368 -- # return 0 00:04:01.690 23:24:49 env -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:01.690 23:24:49 env -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:01.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:01.690 --rc genhtml_branch_coverage=1 00:04:01.690 --rc genhtml_function_coverage=1 00:04:01.690 --rc genhtml_legend=1 00:04:01.690 --rc geninfo_all_blocks=1 00:04:01.690 --rc geninfo_unexecuted_blocks=1 00:04:01.690 00:04:01.690 ' 00:04:01.690 23:24:49 env -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:01.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:01.690 --rc genhtml_branch_coverage=1 00:04:01.690 --rc genhtml_function_coverage=1 00:04:01.690 --rc genhtml_legend=1 00:04:01.690 --rc geninfo_all_blocks=1 00:04:01.690 --rc geninfo_unexecuted_blocks=1 00:04:01.690 00:04:01.690 ' 00:04:01.690 23:24:49 env -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:01.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:01.690 --rc genhtml_branch_coverage=1 00:04:01.690 --rc genhtml_function_coverage=1 00:04:01.690 --rc genhtml_legend=1 00:04:01.690 --rc geninfo_all_blocks=1 00:04:01.690 --rc geninfo_unexecuted_blocks=1 00:04:01.690 00:04:01.690 ' 00:04:01.690 23:24:49 env -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:01.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:01.690 --rc genhtml_branch_coverage=1 00:04:01.690 --rc genhtml_function_coverage=1 00:04:01.690 --rc genhtml_legend=1 00:04:01.690 --rc geninfo_all_blocks=1 00:04:01.690 --rc geninfo_unexecuted_blocks=1 00:04:01.690 00:04:01.690 ' 00:04:01.690 23:24:49 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:01.690 23:24:49 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:01.690 23:24:49 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:01.690 23:24:49 env -- common/autotest_common.sh@10 -- # set +x 00:04:01.690 ************************************ 00:04:01.690 START TEST env_memory 00:04:01.690 ************************************ 00:04:01.690 23:24:49 env.env_memory -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:01.690 00:04:01.690 00:04:01.690 CUnit - A unit testing framework for C - Version 2.1-3 00:04:01.690 http://cunit.sourceforge.net/ 00:04:01.690 00:04:01.690 00:04:01.690 Suite: memory 00:04:01.690 Test: alloc and free memory map ...[2024-09-28 23:24:49.774063] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:01.690 passed 00:04:01.690 Test: mem map translation ...[2024-09-28 23:24:49.812659] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:01.690 [2024-09-28 23:24:49.812689] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:01.690 [2024-09-28 23:24:49.812746] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:01.690 [2024-09-28 23:24:49.812760] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:01.951 passed 00:04:01.951 Test: mem map registration ...[2024-09-28 23:24:49.880819] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:01.951 [2024-09-28 23:24:49.880857] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:01.952 passed 00:04:01.952 Test: mem map adjacent registrations ...passed 00:04:01.952 00:04:01.952 Run Summary: Type Total Ran Passed Failed Inactive 00:04:01.952 suites 1 1 n/a 0 0 00:04:01.952 tests 4 4 4 0 0 00:04:01.952 asserts 152 152 152 0 n/a 00:04:01.952 00:04:01.952 Elapsed time = 0.232 seconds 00:04:01.952 00:04:01.952 real 0m0.267s 00:04:01.952 user 0m0.237s 00:04:01.952 sys 0m0.021s 00:04:01.952 23:24:49 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:01.952 23:24:49 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:01.952 ************************************ 00:04:01.952 END TEST env_memory 00:04:01.952 ************************************ 00:04:01.952 23:24:50 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:01.952 23:24:50 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:01.952 23:24:50 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:01.952 23:24:50 env -- common/autotest_common.sh@10 -- # set +x 00:04:01.952 ************************************ 00:04:01.952 START TEST env_vtophys 00:04:01.952 ************************************ 00:04:01.952 23:24:50 env.env_vtophys -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:01.952 EAL: lib.eal log level changed from notice to debug 00:04:01.952 EAL: Detected lcore 0 as core 0 on socket 0 00:04:01.952 EAL: Detected lcore 1 as core 0 on socket 0 00:04:01.952 EAL: Detected lcore 2 as core 0 on socket 0 00:04:01.952 EAL: Detected lcore 3 as core 0 on socket 0 00:04:01.952 EAL: Detected lcore 4 as core 0 on socket 0 00:04:01.952 EAL: Detected lcore 5 as core 0 on socket 0 00:04:01.952 EAL: Detected lcore 6 as core 0 on socket 0 00:04:01.952 EAL: Detected lcore 7 as core 0 on socket 0 00:04:01.952 EAL: Detected lcore 8 as core 0 on socket 0 00:04:01.952 EAL: Detected lcore 9 as core 0 on socket 0 00:04:01.952 EAL: Maximum logical cores by configuration: 128 00:04:01.952 EAL: Detected CPU lcores: 10 00:04:01.952 EAL: Detected NUMA nodes: 1 00:04:01.952 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:01.952 EAL: Detected shared linkage of DPDK 00:04:01.952 EAL: No 
shared files mode enabled, IPC will be disabled 00:04:01.952 EAL: Selected IOVA mode 'PA' 00:04:01.952 EAL: Probing VFIO support... 00:04:01.952 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:01.952 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:01.952 EAL: Ask a virtual area of 0x2e000 bytes 00:04:01.952 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:01.952 EAL: Setting up physically contiguous memory... 00:04:01.952 EAL: Setting maximum number of open files to 524288 00:04:01.952 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:01.952 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:01.952 EAL: Ask a virtual area of 0x61000 bytes 00:04:01.952 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:01.952 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:01.952 EAL: Ask a virtual area of 0x400000000 bytes 00:04:01.952 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:01.952 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:01.952 EAL: Ask a virtual area of 0x61000 bytes 00:04:01.952 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:01.952 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:01.952 EAL: Ask a virtual area of 0x400000000 bytes 00:04:01.952 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:01.952 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:01.952 EAL: Ask a virtual area of 0x61000 bytes 00:04:01.952 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:01.952 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:01.952 EAL: Ask a virtual area of 0x400000000 bytes 00:04:01.952 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:01.952 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:01.952 EAL: Ask a virtual area of 0x61000 bytes 00:04:01.952 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:01.952 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:01.952 EAL: Ask a virtual area of 0x400000000 bytes 00:04:01.952 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:01.952 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:01.952 EAL: Hugepages will be freed exactly as allocated. 00:04:01.952 EAL: No shared files mode enabled, IPC is disabled 00:04:01.952 EAL: No shared files mode enabled, IPC is disabled 00:04:02.213 EAL: TSC frequency is ~2600000 KHz 00:04:02.213 EAL: Main lcore 0 is ready (tid=7f7f00e42a40;cpuset=[0]) 00:04:02.213 EAL: Trying to obtain current memory policy. 00:04:02.213 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:02.213 EAL: Restoring previous memory policy: 0 00:04:02.213 EAL: request: mp_malloc_sync 00:04:02.213 EAL: No shared files mode enabled, IPC is disabled 00:04:02.213 EAL: Heap on socket 0 was expanded by 2MB 00:04:02.213 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:02.213 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:02.213 EAL: Mem event callback 'spdk:(nil)' registered 00:04:02.213 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:04:02.213 00:04:02.213 00:04:02.213 CUnit - A unit testing framework for C - Version 2.1-3 00:04:02.213 http://cunit.sourceforge.net/ 00:04:02.213 00:04:02.213 00:04:02.213 Suite: components_suite 00:04:02.473 Test: vtophys_malloc_test ...passed 00:04:02.474 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:02.474 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:02.474 EAL: Restoring previous memory policy: 4 00:04:02.474 EAL: Calling mem event callback 'spdk:(nil)' 00:04:02.474 EAL: request: mp_malloc_sync 00:04:02.474 EAL: No shared files mode enabled, IPC is disabled 00:04:02.474 EAL: Heap on socket 0 was expanded by 4MB 00:04:02.474 EAL: Calling mem event callback 'spdk:(nil)' 00:04:02.474 EAL: request: mp_malloc_sync 00:04:02.474 EAL: No shared files mode enabled, IPC is disabled 00:04:02.474 EAL: Heap on socket 0 was shrunk by 4MB 00:04:02.474 EAL: Trying to obtain current memory policy. 00:04:02.474 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:02.474 EAL: Restoring previous memory policy: 4 00:04:02.474 EAL: Calling mem event callback 'spdk:(nil)' 00:04:02.474 EAL: request: mp_malloc_sync 00:04:02.474 EAL: No shared files mode enabled, IPC is disabled 00:04:02.474 EAL: Heap on socket 0 was expanded by 6MB 00:04:02.474 EAL: Calling mem event callback 'spdk:(nil)' 00:04:02.474 EAL: request: mp_malloc_sync 00:04:02.474 EAL: No shared files mode enabled, IPC is disabled 00:04:02.474 EAL: Heap on socket 0 was shrunk by 6MB 00:04:02.474 EAL: Trying to obtain current memory policy. 00:04:02.474 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:02.474 EAL: Restoring previous memory policy: 4 00:04:02.474 EAL: Calling mem event callback 'spdk:(nil)' 00:04:02.474 EAL: request: mp_malloc_sync 00:04:02.474 EAL: No shared files mode enabled, IPC is disabled 00:04:02.474 EAL: Heap on socket 0 was expanded by 10MB 00:04:02.474 EAL: Calling mem event callback 'spdk:(nil)' 00:04:02.474 EAL: request: mp_malloc_sync 00:04:02.474 EAL: No shared files mode enabled, IPC is disabled 00:04:02.474 EAL: Heap on socket 0 was shrunk by 10MB 00:04:02.474 EAL: Trying to obtain current memory policy. 00:04:02.474 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:02.474 EAL: Restoring previous memory policy: 4 00:04:02.474 EAL: Calling mem event callback 'spdk:(nil)' 00:04:02.474 EAL: request: mp_malloc_sync 00:04:02.474 EAL: No shared files mode enabled, IPC is disabled 00:04:02.474 EAL: Heap on socket 0 was expanded by 18MB 00:04:02.474 EAL: Calling mem event callback 'spdk:(nil)' 00:04:02.474 EAL: request: mp_malloc_sync 00:04:02.474 EAL: No shared files mode enabled, IPC is disabled 00:04:02.474 EAL: Heap on socket 0 was shrunk by 18MB 00:04:02.474 EAL: Trying to obtain current memory policy. 00:04:02.474 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:02.474 EAL: Restoring previous memory policy: 4 00:04:02.474 EAL: Calling mem event callback 'spdk:(nil)' 00:04:02.474 EAL: request: mp_malloc_sync 00:04:02.474 EAL: No shared files mode enabled, IPC is disabled 00:04:02.474 EAL: Heap on socket 0 was expanded by 34MB 00:04:02.474 EAL: Calling mem event callback 'spdk:(nil)' 00:04:02.474 EAL: request: mp_malloc_sync 00:04:02.474 EAL: No shared files mode enabled, IPC is disabled 00:04:02.474 EAL: Heap on socket 0 was shrunk by 34MB 00:04:02.734 EAL: Trying to obtain current memory policy. 
00:04:02.734 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:02.734 EAL: Restoring previous memory policy: 4 00:04:02.734 EAL: Calling mem event callback 'spdk:(nil)' 00:04:02.734 EAL: request: mp_malloc_sync 00:04:02.734 EAL: No shared files mode enabled, IPC is disabled 00:04:02.734 EAL: Heap on socket 0 was expanded by 66MB 00:04:02.734 EAL: Calling mem event callback 'spdk:(nil)' 00:04:02.734 EAL: request: mp_malloc_sync 00:04:02.734 EAL: No shared files mode enabled, IPC is disabled 00:04:02.734 EAL: Heap on socket 0 was shrunk by 66MB 00:04:02.734 EAL: Trying to obtain current memory policy. 00:04:02.734 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:02.734 EAL: Restoring previous memory policy: 4 00:04:02.734 EAL: Calling mem event callback 'spdk:(nil)' 00:04:02.734 EAL: request: mp_malloc_sync 00:04:02.734 EAL: No shared files mode enabled, IPC is disabled 00:04:02.734 EAL: Heap on socket 0 was expanded by 130MB 00:04:02.995 EAL: Calling mem event callback 'spdk:(nil)' 00:04:02.995 EAL: request: mp_malloc_sync 00:04:02.995 EAL: No shared files mode enabled, IPC is disabled 00:04:02.995 EAL: Heap on socket 0 was shrunk by 130MB 00:04:02.995 EAL: Trying to obtain current memory policy. 00:04:02.995 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:02.995 EAL: Restoring previous memory policy: 4 00:04:02.995 EAL: Calling mem event callback 'spdk:(nil)' 00:04:02.995 EAL: request: mp_malloc_sync 00:04:02.995 EAL: No shared files mode enabled, IPC is disabled 00:04:02.995 EAL: Heap on socket 0 was expanded by 258MB 00:04:03.256 EAL: Calling mem event callback 'spdk:(nil)' 00:04:03.516 EAL: request: mp_malloc_sync 00:04:03.516 EAL: No shared files mode enabled, IPC is disabled 00:04:03.516 EAL: Heap on socket 0 was shrunk by 258MB 00:04:03.776 EAL: Trying to obtain current memory policy. 00:04:03.776 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:03.776 EAL: Restoring previous memory policy: 4 00:04:03.776 EAL: Calling mem event callback 'spdk:(nil)' 00:04:03.776 EAL: request: mp_malloc_sync 00:04:03.776 EAL: No shared files mode enabled, IPC is disabled 00:04:03.776 EAL: Heap on socket 0 was expanded by 514MB 00:04:04.347 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.347 EAL: request: mp_malloc_sync 00:04:04.347 EAL: No shared files mode enabled, IPC is disabled 00:04:04.347 EAL: Heap on socket 0 was shrunk by 514MB 00:04:04.918 EAL: Trying to obtain current memory policy. 
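The expand/shrink pairs in vtophys_spdk_malloc_test follow a simple pattern: each round allocates roughly double the previous size (4, 6, 10, 18, 34, 66, 130, 258, 514 MB so far, i.e. 2^n + 2), and each allocation fires the registered 'spdk:' mem event callback, which is why EAL logs the heap growing and then shrinking in matching steps. A one-liner reproducing the observed size sequence (an illustration of the arithmetic only, not the test code):

  # Illustration: reproduces the sizes logged above, (2^n + 2) MB for n = 1..10.
  for n in $(seq 1 10); do echo "$(( (1 << n) + 2 ))MB"; done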
00:04:04.918 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:04.918 EAL: Restoring previous memory policy: 4 00:04:04.918 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.918 EAL: request: mp_malloc_sync 00:04:04.918 EAL: No shared files mode enabled, IPC is disabled 00:04:04.918 EAL: Heap on socket 0 was expanded by 1026MB 00:04:06.297 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.297 EAL: request: mp_malloc_sync 00:04:06.297 EAL: No shared files mode enabled, IPC is disabled 00:04:06.297 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:07.238 passed 00:04:07.238 00:04:07.238 Run Summary: Type Total Ran Passed Failed Inactive 00:04:07.238 suites 1 1 n/a 0 0 00:04:07.238 tests 2 2 2 0 0 00:04:07.238 asserts 5663 5663 5663 0 n/a 00:04:07.238 00:04:07.238 Elapsed time = 4.881 seconds 00:04:07.238 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.238 EAL: request: mp_malloc_sync 00:04:07.238 EAL: No shared files mode enabled, IPC is disabled 00:04:07.238 EAL: Heap on socket 0 was shrunk by 2MB 00:04:07.238 EAL: No shared files mode enabled, IPC is disabled 00:04:07.238 EAL: No shared files mode enabled, IPC is disabled 00:04:07.238 EAL: No shared files mode enabled, IPC is disabled 00:04:07.238 ************************************ 00:04:07.238 END TEST env_vtophys 00:04:07.238 ************************************ 00:04:07.238 00:04:07.238 real 0m5.131s 00:04:07.238 user 0m4.381s 00:04:07.238 sys 0m0.603s 00:04:07.238 23:24:55 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:07.238 23:24:55 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:07.238 23:24:55 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:07.238 23:24:55 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:07.238 23:24:55 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:07.238 23:24:55 env -- common/autotest_common.sh@10 -- # set +x 00:04:07.238 ************************************ 00:04:07.238 START TEST env_pci 00:04:07.238 ************************************ 00:04:07.238 23:24:55 env.env_pci -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:07.238 00:04:07.238 00:04:07.238 CUnit - A unit testing framework for C - Version 2.1-3 00:04:07.238 http://cunit.sourceforge.net/ 00:04:07.238 00:04:07.238 00:04:07.238 Suite: pci 00:04:07.238 Test: pci_hook ...[2024-09-28 23:24:55.227537] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 57403 has claimed it 00:04:07.238 passed 00:04:07.238 00:04:07.239 Run Summary: Type Total Ran Passed Failed Inactive 00:04:07.239 suites 1 1 n/a 0 0 00:04:07.239 tests 1 1 1 0 0 00:04:07.239 asserts 25 25 25 0 n/a 00:04:07.239 00:04:07.239 Elapsed time = 0.004 seconds 00:04:07.239 EAL: Cannot find device (10000:00:01.0) 00:04:07.239 EAL: Failed to attach device on primary process 00:04:07.239 00:04:07.239 real 0m0.060s 00:04:07.239 user 0m0.032s 00:04:07.239 sys 0m0.026s 00:04:07.239 23:24:55 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:07.239 23:24:55 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:07.239 ************************************ 00:04:07.239 END TEST env_pci 00:04:07.239 ************************************ 00:04:07.239 23:24:55 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:07.239 23:24:55 env -- env/env.sh@15 -- # uname 00:04:07.239 23:24:55 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:07.239 23:24:55 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:07.239 23:24:55 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:07.239 23:24:55 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:04:07.239 23:24:55 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:07.239 23:24:55 env -- common/autotest_common.sh@10 -- # set +x 00:04:07.239 ************************************ 00:04:07.239 START TEST env_dpdk_post_init 00:04:07.239 ************************************ 00:04:07.239 23:24:55 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:07.239 EAL: Detected CPU lcores: 10 00:04:07.239 EAL: Detected NUMA nodes: 1 00:04:07.239 EAL: Detected shared linkage of DPDK 00:04:07.239 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:07.239 EAL: Selected IOVA mode 'PA' 00:04:07.500 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:07.500 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:07.500 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:07.500 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:04:07.500 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:04:07.500 Starting DPDK initialization... 00:04:07.500 Starting SPDK post initialization... 00:04:07.500 SPDK NVMe probe 00:04:07.500 Attaching to 0000:00:10.0 00:04:07.500 Attaching to 0000:00:11.0 00:04:07.500 Attaching to 0000:00:12.0 00:04:07.500 Attaching to 0000:00:13.0 00:04:07.500 Attached to 0000:00:10.0 00:04:07.500 Attached to 0000:00:11.0 00:04:07.500 Attached to 0000:00:13.0 00:04:07.500 Attached to 0000:00:12.0 00:04:07.500 Cleaning up... 
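env_dpdk_post_init probes the same four QEMU NVMe controllers (1b36:0010) that setup.sh rebound from the kernel nvme driver to uio_pci_generic at the top of this stage, which is what lets the userspace spdk_nvme driver attach to them. A hedged way to check which driver currently claims each BDF (a sketch assuming the standard sysfs layout, not part of the test scripts):

  # Sketch: show the driver bound to each BDF the test attaches to
  # (prints nvme, uio_pci_generic, or nothing if unbound).
  for bdf in 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0; do
      drv=$(readlink -e "/sys/bus/pci/devices/$bdf/driver" 2>/dev/null)
      echo "$bdf -> ${drv##*/}"
  done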
00:04:07.500 ************************************ 00:04:07.500 END TEST env_dpdk_post_init 00:04:07.500 ************************************ 00:04:07.500 00:04:07.500 real 0m0.224s 00:04:07.500 user 0m0.063s 00:04:07.500 sys 0m0.063s 00:04:07.500 23:24:55 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:07.500 23:24:55 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:07.500 23:24:55 env -- env/env.sh@26 -- # uname 00:04:07.500 23:24:55 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:07.500 23:24:55 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:07.500 23:24:55 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:07.500 23:24:55 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:07.500 23:24:55 env -- common/autotest_common.sh@10 -- # set +x 00:04:07.500 ************************************ 00:04:07.500 START TEST env_mem_callbacks 00:04:07.500 ************************************ 00:04:07.500 23:24:55 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:07.500 EAL: Detected CPU lcores: 10 00:04:07.500 EAL: Detected NUMA nodes: 1 00:04:07.500 EAL: Detected shared linkage of DPDK 00:04:07.500 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:07.500 EAL: Selected IOVA mode 'PA' 00:04:07.761 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:07.761 00:04:07.761 00:04:07.761 CUnit - A unit testing framework for C - Version 2.1-3 00:04:07.761 http://cunit.sourceforge.net/ 00:04:07.761 00:04:07.761 00:04:07.761 Suite: memory 00:04:07.761 Test: test ... 00:04:07.761 register 0x200000200000 2097152 00:04:07.761 malloc 3145728 00:04:07.761 register 0x200000400000 4194304 00:04:07.761 buf 0x2000004fffc0 len 3145728 PASSED 00:04:07.761 malloc 64 00:04:07.761 buf 0x2000004ffec0 len 64 PASSED 00:04:07.761 malloc 4194304 00:04:07.761 register 0x200000800000 6291456 00:04:07.761 buf 0x2000009fffc0 len 4194304 PASSED 00:04:07.761 free 0x2000004fffc0 3145728 00:04:07.761 free 0x2000004ffec0 64 00:04:07.761 unregister 0x200000400000 4194304 PASSED 00:04:07.761 free 0x2000009fffc0 4194304 00:04:07.761 unregister 0x200000800000 6291456 PASSED 00:04:07.761 malloc 8388608 00:04:07.761 register 0x200000400000 10485760 00:04:07.761 buf 0x2000005fffc0 len 8388608 PASSED 00:04:07.761 free 0x2000005fffc0 8388608 00:04:07.761 unregister 0x200000400000 10485760 PASSED 00:04:07.761 passed 00:04:07.761 00:04:07.761 Run Summary: Type Total Ran Passed Failed Inactive 00:04:07.761 suites 1 1 n/a 0 0 00:04:07.761 tests 1 1 1 0 0 00:04:07.761 asserts 15 15 15 0 n/a 00:04:07.761 00:04:07.761 Elapsed time = 0.039 seconds 00:04:07.761 00:04:07.761 real 0m0.205s 00:04:07.761 user 0m0.056s 00:04:07.761 sys 0m0.047s 00:04:07.761 23:24:55 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:07.761 23:24:55 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:07.761 ************************************ 00:04:07.761 END TEST env_mem_callbacks 00:04:07.761 ************************************ 00:04:07.761 ************************************ 00:04:07.761 END TEST env 00:04:07.761 ************************************ 00:04:07.761 00:04:07.761 real 0m6.233s 00:04:07.761 user 0m4.916s 00:04:07.761 sys 0m0.958s 00:04:07.761 23:24:55 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:07.761 23:24:55 env -- 
common/autotest_common.sh@10 -- # set +x 00:04:07.761 23:24:55 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:07.761 23:24:55 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:07.761 23:24:55 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:07.761 23:24:55 -- common/autotest_common.sh@10 -- # set +x 00:04:07.761 ************************************ 00:04:07.761 START TEST rpc 00:04:07.761 ************************************ 00:04:07.761 23:24:55 rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:07.761 * Looking for test storage... 00:04:07.761 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:07.761 23:24:55 rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:07.761 23:24:55 rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:04:07.762 23:24:55 rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:08.023 23:24:55 rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:08.023 23:24:55 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:08.023 23:24:55 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:08.023 23:24:55 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:08.023 23:24:55 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:08.023 23:24:55 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:08.023 23:24:55 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:08.023 23:24:55 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:08.023 23:24:55 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:08.023 23:24:55 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:08.023 23:24:55 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:08.023 23:24:55 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:08.023 23:24:55 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:08.023 23:24:55 rpc -- scripts/common.sh@345 -- # : 1 00:04:08.023 23:24:55 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:08.023 23:24:55 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:08.023 23:24:55 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:08.023 23:24:55 rpc -- scripts/common.sh@353 -- # local d=1 00:04:08.023 23:24:55 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:08.023 23:24:55 rpc -- scripts/common.sh@355 -- # echo 1 00:04:08.023 23:24:55 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:08.023 23:24:55 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:08.023 23:24:55 rpc -- scripts/common.sh@353 -- # local d=2 00:04:08.023 23:24:55 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:08.023 23:24:55 rpc -- scripts/common.sh@355 -- # echo 2 00:04:08.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
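The xtrace above is the scripts/common.sh version check ("lt 1.15 2") that gates the lcov coverage options: both version strings are split on the characters ".-:" into arrays and compared component by component as decimals. A compact reimplementation of the same idea (a sketch, not the actual scripts/common.sh code; it assumes purely numeric components, which the decimal helper in the trace normalizes):

  # Sketch: returns 0 when version $1 sorts before version $2.
  version_lt() {
      local -a ver1 ver2
      IFS=.-: read -ra ver1 <<< "$1"
      IFS=.-: read -ra ver2 <<< "$2"
      local v n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( v = 0; v < n; v++ )); do
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
      done
      return 1
  }
  version_lt 1.15 2 && echo "1.15 < 2"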
00:04:08.023 23:24:55 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:08.023 23:24:55 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:08.024 23:24:55 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:08.024 23:24:55 rpc -- scripts/common.sh@368 -- # return 0 00:04:08.024 23:24:55 rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:08.024 23:24:55 rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:08.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.024 --rc genhtml_branch_coverage=1 00:04:08.024 --rc genhtml_function_coverage=1 00:04:08.024 --rc genhtml_legend=1 00:04:08.024 --rc geninfo_all_blocks=1 00:04:08.024 --rc geninfo_unexecuted_blocks=1 00:04:08.024 00:04:08.024 ' 00:04:08.024 23:24:55 rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:08.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.024 --rc genhtml_branch_coverage=1 00:04:08.024 --rc genhtml_function_coverage=1 00:04:08.024 --rc genhtml_legend=1 00:04:08.024 --rc geninfo_all_blocks=1 00:04:08.024 --rc geninfo_unexecuted_blocks=1 00:04:08.024 00:04:08.024 ' 00:04:08.024 23:24:55 rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:08.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.024 --rc genhtml_branch_coverage=1 00:04:08.024 --rc genhtml_function_coverage=1 00:04:08.024 --rc genhtml_legend=1 00:04:08.024 --rc geninfo_all_blocks=1 00:04:08.024 --rc geninfo_unexecuted_blocks=1 00:04:08.024 00:04:08.024 ' 00:04:08.024 23:24:55 rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:08.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.024 --rc genhtml_branch_coverage=1 00:04:08.024 --rc genhtml_function_coverage=1 00:04:08.024 --rc genhtml_legend=1 00:04:08.024 --rc geninfo_all_blocks=1 00:04:08.024 --rc geninfo_unexecuted_blocks=1 00:04:08.024 00:04:08.024 ' 00:04:08.024 23:24:55 rpc -- rpc/rpc.sh@65 -- # spdk_pid=57530 00:04:08.024 23:24:55 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:08.024 23:24:55 rpc -- rpc/rpc.sh@67 -- # waitforlisten 57530 00:04:08.024 23:24:55 rpc -- common/autotest_common.sh@831 -- # '[' -z 57530 ']' 00:04:08.024 23:24:55 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:08.024 23:24:55 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:08.024 23:24:55 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:08.024 23:24:55 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:08.024 23:24:55 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:08.024 23:24:55 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:08.024 [2024-09-28 23:24:56.048442] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:04:08.024 [2024-09-28 23:24:56.048713] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57530 ] 00:04:08.285 [2024-09-28 23:24:56.197749] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:08.285 [2024-09-28 23:24:56.337985] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 
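Here the rpc suite launches spdk_tgt (pid 57530) and waitforlisten blocks until the target is up and its RPC socket /var/tmp/spdk.sock accepts commands. A minimal sketch of that wait loop (a hypothetical helper, not the real waitforlisten, which also retries RPC calls against the socket):

  # Hypothetical sketch: poll until the target's UNIX-domain RPC socket
  # appears, bailing out early if the process dies.
  wait_for_rpc_sock() {
      local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
      for (( i = 0; i < 100; i++ )); do
          kill -0 "$pid" 2>/dev/null || return 1   # target exited
          [[ -S $sock ]] && return 0               # socket is listening
          sleep 0.1
      done
      return 1
  }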
00:04:08.285 [2024-09-28 23:24:56.338028] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 57530' to capture a snapshot of events at runtime. 00:04:08.285 [2024-09-28 23:24:56.338036] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:08.285 [2024-09-28 23:24:56.338043] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:08.285 [2024-09-28 23:24:56.338050] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid57530 for offline analysis/debug. 00:04:08.285 [2024-09-28 23:24:56.338079] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:08.859 23:24:56 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:08.859 23:24:56 rpc -- common/autotest_common.sh@864 -- # return 0 00:04:08.859 23:24:56 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:08.859 23:24:56 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:08.859 23:24:56 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:08.859 23:24:56 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:08.859 23:24:56 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:08.859 23:24:56 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:08.859 23:24:56 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:08.859 ************************************ 00:04:08.859 START TEST rpc_integrity 00:04:08.859 ************************************ 00:04:08.859 23:24:56 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:08.859 23:24:56 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:08.859 23:24:56 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:08.859 23:24:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:08.859 23:24:56 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:08.859 23:24:56 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:08.859 23:24:56 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:08.859 23:24:56 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:08.859 23:24:56 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:08.859 23:24:56 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:08.859 23:24:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:08.859 23:24:56 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:08.859 23:24:56 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:08.859 23:24:56 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:08.859 23:24:56 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:08.859 23:24:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:08.859 23:24:56 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:08.859 23:24:56 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:08.859 { 00:04:08.859 "name": "Malloc0", 00:04:08.859 "aliases": [ 00:04:08.859 "9aa4d6fe-7b88-4254-be92-bef305f4650c" 00:04:08.859 ], 
00:04:08.859 "product_name": "Malloc disk", 00:04:08.859 "block_size": 512, 00:04:08.859 "num_blocks": 16384, 00:04:08.859 "uuid": "9aa4d6fe-7b88-4254-be92-bef305f4650c", 00:04:08.859 "assigned_rate_limits": { 00:04:08.859 "rw_ios_per_sec": 0, 00:04:08.859 "rw_mbytes_per_sec": 0, 00:04:08.859 "r_mbytes_per_sec": 0, 00:04:08.859 "w_mbytes_per_sec": 0 00:04:08.859 }, 00:04:08.859 "claimed": false, 00:04:08.859 "zoned": false, 00:04:08.859 "supported_io_types": { 00:04:08.859 "read": true, 00:04:08.859 "write": true, 00:04:08.859 "unmap": true, 00:04:08.859 "flush": true, 00:04:08.859 "reset": true, 00:04:08.859 "nvme_admin": false, 00:04:08.859 "nvme_io": false, 00:04:08.859 "nvme_io_md": false, 00:04:08.859 "write_zeroes": true, 00:04:08.859 "zcopy": true, 00:04:08.859 "get_zone_info": false, 00:04:08.859 "zone_management": false, 00:04:08.859 "zone_append": false, 00:04:08.859 "compare": false, 00:04:08.859 "compare_and_write": false, 00:04:08.859 "abort": true, 00:04:08.859 "seek_hole": false, 00:04:08.859 "seek_data": false, 00:04:08.859 "copy": true, 00:04:08.859 "nvme_iov_md": false 00:04:08.859 }, 00:04:08.859 "memory_domains": [ 00:04:08.859 { 00:04:08.859 "dma_device_id": "system", 00:04:08.859 "dma_device_type": 1 00:04:08.859 }, 00:04:08.859 { 00:04:08.859 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:08.859 "dma_device_type": 2 00:04:08.859 } 00:04:08.859 ], 00:04:08.859 "driver_specific": {} 00:04:08.859 } 00:04:08.859 ]' 00:04:08.859 23:24:56 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:08.859 23:24:56 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:08.859 23:24:56 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:08.859 23:24:56 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:08.859 23:24:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:08.859 [2024-09-28 23:24:56.983262] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:08.859 [2024-09-28 23:24:56.983308] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:08.859 [2024-09-28 23:24:56.983330] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:04:08.859 [2024-09-28 23:24:56.983340] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:08.859 [2024-09-28 23:24:56.985059] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:08.859 [2024-09-28 23:24:56.985092] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:08.859 Passthru0 00:04:08.859 23:24:56 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:08.859 23:24:56 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:08.859 23:24:56 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:08.859 23:24:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:08.859 23:24:57 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:08.859 23:24:57 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:08.859 { 00:04:08.859 "name": "Malloc0", 00:04:08.859 "aliases": [ 00:04:08.859 "9aa4d6fe-7b88-4254-be92-bef305f4650c" 00:04:08.859 ], 00:04:08.859 "product_name": "Malloc disk", 00:04:08.859 "block_size": 512, 00:04:08.859 "num_blocks": 16384, 00:04:08.859 "uuid": "9aa4d6fe-7b88-4254-be92-bef305f4650c", 00:04:08.859 "assigned_rate_limits": { 00:04:08.859 "rw_ios_per_sec": 0, 
00:04:08.859 "rw_mbytes_per_sec": 0, 00:04:08.859 "r_mbytes_per_sec": 0, 00:04:08.859 "w_mbytes_per_sec": 0 00:04:08.859 }, 00:04:08.859 "claimed": true, 00:04:08.859 "claim_type": "exclusive_write", 00:04:08.859 "zoned": false, 00:04:08.859 "supported_io_types": { 00:04:08.859 "read": true, 00:04:08.859 "write": true, 00:04:08.859 "unmap": true, 00:04:08.859 "flush": true, 00:04:08.859 "reset": true, 00:04:08.859 "nvme_admin": false, 00:04:08.859 "nvme_io": false, 00:04:08.859 "nvme_io_md": false, 00:04:08.859 "write_zeroes": true, 00:04:08.859 "zcopy": true, 00:04:08.859 "get_zone_info": false, 00:04:08.859 "zone_management": false, 00:04:08.859 "zone_append": false, 00:04:08.859 "compare": false, 00:04:08.859 "compare_and_write": false, 00:04:08.860 "abort": true, 00:04:08.860 "seek_hole": false, 00:04:08.860 "seek_data": false, 00:04:08.860 "copy": true, 00:04:08.860 "nvme_iov_md": false 00:04:08.860 }, 00:04:08.860 "memory_domains": [ 00:04:08.860 { 00:04:08.860 "dma_device_id": "system", 00:04:08.860 "dma_device_type": 1 00:04:08.860 }, 00:04:08.860 { 00:04:08.860 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:08.860 "dma_device_type": 2 00:04:08.860 } 00:04:08.860 ], 00:04:08.860 "driver_specific": {} 00:04:08.860 }, 00:04:08.860 { 00:04:08.860 "name": "Passthru0", 00:04:08.860 "aliases": [ 00:04:08.860 "e40e99e8-6b28-534f-8153-10f9d6ea01ef" 00:04:08.860 ], 00:04:08.860 "product_name": "passthru", 00:04:08.860 "block_size": 512, 00:04:08.860 "num_blocks": 16384, 00:04:08.860 "uuid": "e40e99e8-6b28-534f-8153-10f9d6ea01ef", 00:04:08.860 "assigned_rate_limits": { 00:04:08.860 "rw_ios_per_sec": 0, 00:04:08.860 "rw_mbytes_per_sec": 0, 00:04:08.860 "r_mbytes_per_sec": 0, 00:04:08.860 "w_mbytes_per_sec": 0 00:04:08.860 }, 00:04:08.860 "claimed": false, 00:04:08.860 "zoned": false, 00:04:08.860 "supported_io_types": { 00:04:08.860 "read": true, 00:04:08.860 "write": true, 00:04:08.860 "unmap": true, 00:04:08.860 "flush": true, 00:04:08.860 "reset": true, 00:04:08.860 "nvme_admin": false, 00:04:08.860 "nvme_io": false, 00:04:08.860 "nvme_io_md": false, 00:04:08.860 "write_zeroes": true, 00:04:08.860 "zcopy": true, 00:04:08.860 "get_zone_info": false, 00:04:08.860 "zone_management": false, 00:04:08.860 "zone_append": false, 00:04:08.860 "compare": false, 00:04:08.860 "compare_and_write": false, 00:04:08.860 "abort": true, 00:04:08.860 "seek_hole": false, 00:04:08.860 "seek_data": false, 00:04:08.860 "copy": true, 00:04:08.860 "nvme_iov_md": false 00:04:08.860 }, 00:04:08.860 "memory_domains": [ 00:04:08.860 { 00:04:08.860 "dma_device_id": "system", 00:04:08.860 "dma_device_type": 1 00:04:08.860 }, 00:04:08.860 { 00:04:08.860 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:08.860 "dma_device_type": 2 00:04:08.860 } 00:04:08.860 ], 00:04:08.860 "driver_specific": { 00:04:08.860 "passthru": { 00:04:08.860 "name": "Passthru0", 00:04:08.860 "base_bdev_name": "Malloc0" 00:04:08.860 } 00:04:08.860 } 00:04:08.860 } 00:04:08.860 ]' 00:04:08.860 23:24:57 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:09.121 23:24:57 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:09.121 23:24:57 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:09.121 23:24:57 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:09.122 23:24:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:09.122 23:24:57 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:09.122 23:24:57 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # 
rpc_cmd bdev_malloc_delete Malloc0 00:04:09.122 23:24:57 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:09.122 23:24:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:09.122 23:24:57 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:09.122 23:24:57 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:09.122 23:24:57 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:09.122 23:24:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:09.122 23:24:57 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:09.122 23:24:57 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:09.122 23:24:57 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:09.122 ************************************ 00:04:09.122 END TEST rpc_integrity 00:04:09.122 ************************************ 00:04:09.122 23:24:57 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:09.122 00:04:09.122 real 0m0.229s 00:04:09.122 user 0m0.123s 00:04:09.122 sys 0m0.032s 00:04:09.122 23:24:57 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:09.122 23:24:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:09.122 23:24:57 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:09.122 23:24:57 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:09.122 23:24:57 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:09.122 23:24:57 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:09.122 ************************************ 00:04:09.122 START TEST rpc_plugins 00:04:09.122 ************************************ 00:04:09.122 23:24:57 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:04:09.122 23:24:57 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:09.122 23:24:57 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:09.122 23:24:57 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:09.122 23:24:57 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:09.122 23:24:57 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:09.122 23:24:57 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:09.122 23:24:57 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:09.122 23:24:57 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:09.122 23:24:57 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:09.122 23:24:57 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:09.122 { 00:04:09.122 "name": "Malloc1", 00:04:09.122 "aliases": [ 00:04:09.122 "357dbd2d-eb66-4ee3-9472-4d28ba44c772" 00:04:09.122 ], 00:04:09.122 "product_name": "Malloc disk", 00:04:09.122 "block_size": 4096, 00:04:09.122 "num_blocks": 256, 00:04:09.122 "uuid": "357dbd2d-eb66-4ee3-9472-4d28ba44c772", 00:04:09.122 "assigned_rate_limits": { 00:04:09.122 "rw_ios_per_sec": 0, 00:04:09.122 "rw_mbytes_per_sec": 0, 00:04:09.122 "r_mbytes_per_sec": 0, 00:04:09.122 "w_mbytes_per_sec": 0 00:04:09.122 }, 00:04:09.122 "claimed": false, 00:04:09.122 "zoned": false, 00:04:09.122 "supported_io_types": { 00:04:09.122 "read": true, 00:04:09.122 "write": true, 00:04:09.122 "unmap": true, 00:04:09.122 "flush": true, 00:04:09.122 "reset": true, 00:04:09.122 "nvme_admin": false, 00:04:09.122 "nvme_io": false, 00:04:09.122 "nvme_io_md": false, 00:04:09.122 "write_zeroes": true, 
00:04:09.122 "zcopy": true, 00:04:09.122 "get_zone_info": false, 00:04:09.122 "zone_management": false, 00:04:09.122 "zone_append": false, 00:04:09.122 "compare": false, 00:04:09.122 "compare_and_write": false, 00:04:09.122 "abort": true, 00:04:09.122 "seek_hole": false, 00:04:09.122 "seek_data": false, 00:04:09.122 "copy": true, 00:04:09.122 "nvme_iov_md": false 00:04:09.122 }, 00:04:09.122 "memory_domains": [ 00:04:09.122 { 00:04:09.122 "dma_device_id": "system", 00:04:09.122 "dma_device_type": 1 00:04:09.122 }, 00:04:09.122 { 00:04:09.122 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:09.122 "dma_device_type": 2 00:04:09.122 } 00:04:09.122 ], 00:04:09.122 "driver_specific": {} 00:04:09.122 } 00:04:09.122 ]' 00:04:09.122 23:24:57 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:09.122 23:24:57 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:09.122 23:24:57 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:09.122 23:24:57 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:09.122 23:24:57 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:09.122 23:24:57 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:09.122 23:24:57 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:09.122 23:24:57 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:09.122 23:24:57 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:09.122 23:24:57 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:09.122 23:24:57 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:09.122 23:24:57 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:09.122 ************************************ 00:04:09.122 END TEST rpc_plugins 00:04:09.122 ************************************ 00:04:09.122 23:24:57 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:09.122 00:04:09.122 real 0m0.109s 00:04:09.122 user 0m0.064s 00:04:09.122 sys 0m0.017s 00:04:09.122 23:24:57 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:09.122 23:24:57 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:09.122 23:24:57 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:09.122 23:24:57 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:09.122 23:24:57 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:09.122 23:24:57 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:09.384 ************************************ 00:04:09.384 START TEST rpc_trace_cmd_test 00:04:09.384 ************************************ 00:04:09.384 23:24:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:04:09.384 23:24:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:09.384 23:24:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:09.384 23:24:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:09.384 23:24:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:09.384 23:24:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:09.384 23:24:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:09.384 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid57530", 00:04:09.384 "tpoint_group_mask": "0x8", 00:04:09.384 "iscsi_conn": { 00:04:09.384 "mask": "0x2", 00:04:09.384 "tpoint_mask": "0x0" 00:04:09.384 }, 00:04:09.384 "scsi": { 00:04:09.384 
"mask": "0x4", 00:04:09.384 "tpoint_mask": "0x0" 00:04:09.384 }, 00:04:09.384 "bdev": { 00:04:09.384 "mask": "0x8", 00:04:09.384 "tpoint_mask": "0xffffffffffffffff" 00:04:09.384 }, 00:04:09.384 "nvmf_rdma": { 00:04:09.384 "mask": "0x10", 00:04:09.384 "tpoint_mask": "0x0" 00:04:09.384 }, 00:04:09.384 "nvmf_tcp": { 00:04:09.384 "mask": "0x20", 00:04:09.384 "tpoint_mask": "0x0" 00:04:09.384 }, 00:04:09.384 "ftl": { 00:04:09.384 "mask": "0x40", 00:04:09.384 "tpoint_mask": "0x0" 00:04:09.384 }, 00:04:09.384 "blobfs": { 00:04:09.384 "mask": "0x80", 00:04:09.384 "tpoint_mask": "0x0" 00:04:09.384 }, 00:04:09.384 "dsa": { 00:04:09.384 "mask": "0x200", 00:04:09.384 "tpoint_mask": "0x0" 00:04:09.384 }, 00:04:09.384 "thread": { 00:04:09.384 "mask": "0x400", 00:04:09.384 "tpoint_mask": "0x0" 00:04:09.384 }, 00:04:09.384 "nvme_pcie": { 00:04:09.384 "mask": "0x800", 00:04:09.384 "tpoint_mask": "0x0" 00:04:09.384 }, 00:04:09.384 "iaa": { 00:04:09.384 "mask": "0x1000", 00:04:09.384 "tpoint_mask": "0x0" 00:04:09.384 }, 00:04:09.384 "nvme_tcp": { 00:04:09.384 "mask": "0x2000", 00:04:09.384 "tpoint_mask": "0x0" 00:04:09.384 }, 00:04:09.384 "bdev_nvme": { 00:04:09.384 "mask": "0x4000", 00:04:09.384 "tpoint_mask": "0x0" 00:04:09.384 }, 00:04:09.384 "sock": { 00:04:09.384 "mask": "0x8000", 00:04:09.384 "tpoint_mask": "0x0" 00:04:09.384 }, 00:04:09.384 "blob": { 00:04:09.384 "mask": "0x10000", 00:04:09.384 "tpoint_mask": "0x0" 00:04:09.384 }, 00:04:09.384 "bdev_raid": { 00:04:09.384 "mask": "0x20000", 00:04:09.384 "tpoint_mask": "0x0" 00:04:09.384 } 00:04:09.384 }' 00:04:09.384 23:24:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:09.384 23:24:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 18 -gt 2 ']' 00:04:09.384 23:24:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:09.384 23:24:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:09.384 23:24:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:09.384 23:24:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:09.384 23:24:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:09.384 23:24:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:09.384 23:24:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:09.384 ************************************ 00:04:09.384 END TEST rpc_trace_cmd_test 00:04:09.384 ************************************ 00:04:09.384 23:24:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:09.384 00:04:09.384 real 0m0.176s 00:04:09.384 user 0m0.137s 00:04:09.384 sys 0m0.028s 00:04:09.384 23:24:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:09.384 23:24:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:09.384 23:24:57 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:09.384 23:24:57 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:09.384 23:24:57 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:09.384 23:24:57 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:09.384 23:24:57 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:09.384 23:24:57 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:09.384 ************************************ 00:04:09.384 START TEST rpc_daemon_integrity 00:04:09.384 ************************************ 00:04:09.384 23:24:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 
-- # rpc_integrity 00:04:09.384 23:24:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:09.384 23:24:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:09.384 23:24:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:09.384 23:24:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:09.384 23:24:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:09.384 23:24:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:09.384 23:24:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:09.384 23:24:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:09.384 23:24:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:09.384 23:24:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:09.646 23:24:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:09.646 23:24:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:09.646 23:24:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:09.646 23:24:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:09.647 23:24:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:09.647 23:24:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:09.647 23:24:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:09.647 { 00:04:09.647 "name": "Malloc2", 00:04:09.647 "aliases": [ 00:04:09.647 "af190307-d1c1-4189-96d3-45e372e784ca" 00:04:09.647 ], 00:04:09.647 "product_name": "Malloc disk", 00:04:09.647 "block_size": 512, 00:04:09.647 "num_blocks": 16384, 00:04:09.647 "uuid": "af190307-d1c1-4189-96d3-45e372e784ca", 00:04:09.647 "assigned_rate_limits": { 00:04:09.647 "rw_ios_per_sec": 0, 00:04:09.647 "rw_mbytes_per_sec": 0, 00:04:09.647 "r_mbytes_per_sec": 0, 00:04:09.647 "w_mbytes_per_sec": 0 00:04:09.647 }, 00:04:09.647 "claimed": false, 00:04:09.647 "zoned": false, 00:04:09.647 "supported_io_types": { 00:04:09.647 "read": true, 00:04:09.647 "write": true, 00:04:09.647 "unmap": true, 00:04:09.647 "flush": true, 00:04:09.647 "reset": true, 00:04:09.647 "nvme_admin": false, 00:04:09.647 "nvme_io": false, 00:04:09.647 "nvme_io_md": false, 00:04:09.647 "write_zeroes": true, 00:04:09.647 "zcopy": true, 00:04:09.647 "get_zone_info": false, 00:04:09.647 "zone_management": false, 00:04:09.647 "zone_append": false, 00:04:09.647 "compare": false, 00:04:09.647 "compare_and_write": false, 00:04:09.647 "abort": true, 00:04:09.647 "seek_hole": false, 00:04:09.647 "seek_data": false, 00:04:09.647 "copy": true, 00:04:09.647 "nvme_iov_md": false 00:04:09.647 }, 00:04:09.647 "memory_domains": [ 00:04:09.647 { 00:04:09.647 "dma_device_id": "system", 00:04:09.647 "dma_device_type": 1 00:04:09.647 }, 00:04:09.647 { 00:04:09.647 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:09.647 "dma_device_type": 2 00:04:09.647 } 00:04:09.647 ], 00:04:09.647 "driver_specific": {} 00:04:09.647 } 00:04:09.647 ]' 00:04:09.647 23:24:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:09.647 23:24:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:09.647 23:24:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:09.647 23:24:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:04:09.647 23:24:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:09.647 [2024-09-28 23:24:57.603455] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:09.647 [2024-09-28 23:24:57.603501] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:09.647 [2024-09-28 23:24:57.603524] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:04:09.647 [2024-09-28 23:24:57.603533] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:09.647 [2024-09-28 23:24:57.605234] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:09.647 [2024-09-28 23:24:57.605265] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:09.647 Passthru0 00:04:09.647 23:24:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:09.647 23:24:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:09.647 23:24:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:09.647 23:24:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:09.647 23:24:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:09.647 23:24:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:09.647 { 00:04:09.647 "name": "Malloc2", 00:04:09.647 "aliases": [ 00:04:09.647 "af190307-d1c1-4189-96d3-45e372e784ca" 00:04:09.647 ], 00:04:09.647 "product_name": "Malloc disk", 00:04:09.647 "block_size": 512, 00:04:09.647 "num_blocks": 16384, 00:04:09.647 "uuid": "af190307-d1c1-4189-96d3-45e372e784ca", 00:04:09.647 "assigned_rate_limits": { 00:04:09.647 "rw_ios_per_sec": 0, 00:04:09.647 "rw_mbytes_per_sec": 0, 00:04:09.647 "r_mbytes_per_sec": 0, 00:04:09.647 "w_mbytes_per_sec": 0 00:04:09.647 }, 00:04:09.647 "claimed": true, 00:04:09.647 "claim_type": "exclusive_write", 00:04:09.647 "zoned": false, 00:04:09.647 "supported_io_types": { 00:04:09.647 "read": true, 00:04:09.647 "write": true, 00:04:09.647 "unmap": true, 00:04:09.647 "flush": true, 00:04:09.647 "reset": true, 00:04:09.647 "nvme_admin": false, 00:04:09.647 "nvme_io": false, 00:04:09.647 "nvme_io_md": false, 00:04:09.647 "write_zeroes": true, 00:04:09.647 "zcopy": true, 00:04:09.647 "get_zone_info": false, 00:04:09.647 "zone_management": false, 00:04:09.647 "zone_append": false, 00:04:09.647 "compare": false, 00:04:09.647 "compare_and_write": false, 00:04:09.647 "abort": true, 00:04:09.647 "seek_hole": false, 00:04:09.647 "seek_data": false, 00:04:09.647 "copy": true, 00:04:09.647 "nvme_iov_md": false 00:04:09.647 }, 00:04:09.647 "memory_domains": [ 00:04:09.647 { 00:04:09.647 "dma_device_id": "system", 00:04:09.647 "dma_device_type": 1 00:04:09.647 }, 00:04:09.647 { 00:04:09.647 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:09.647 "dma_device_type": 2 00:04:09.647 } 00:04:09.647 ], 00:04:09.647 "driver_specific": {} 00:04:09.647 }, 00:04:09.647 { 00:04:09.647 "name": "Passthru0", 00:04:09.647 "aliases": [ 00:04:09.647 "7154c4fa-7605-5122-8750-e6abf47ab009" 00:04:09.647 ], 00:04:09.647 "product_name": "passthru", 00:04:09.647 "block_size": 512, 00:04:09.647 "num_blocks": 16384, 00:04:09.647 "uuid": "7154c4fa-7605-5122-8750-e6abf47ab009", 00:04:09.647 "assigned_rate_limits": { 00:04:09.647 "rw_ios_per_sec": 0, 00:04:09.647 "rw_mbytes_per_sec": 0, 00:04:09.647 "r_mbytes_per_sec": 0, 00:04:09.647 "w_mbytes_per_sec": 0 00:04:09.647 }, 
00:04:09.647 "claimed": false, 00:04:09.647 "zoned": false, 00:04:09.647 "supported_io_types": { 00:04:09.647 "read": true, 00:04:09.647 "write": true, 00:04:09.647 "unmap": true, 00:04:09.647 "flush": true, 00:04:09.647 "reset": true, 00:04:09.647 "nvme_admin": false, 00:04:09.647 "nvme_io": false, 00:04:09.647 "nvme_io_md": false, 00:04:09.647 "write_zeroes": true, 00:04:09.647 "zcopy": true, 00:04:09.647 "get_zone_info": false, 00:04:09.647 "zone_management": false, 00:04:09.647 "zone_append": false, 00:04:09.647 "compare": false, 00:04:09.647 "compare_and_write": false, 00:04:09.647 "abort": true, 00:04:09.647 "seek_hole": false, 00:04:09.647 "seek_data": false, 00:04:09.647 "copy": true, 00:04:09.647 "nvme_iov_md": false 00:04:09.647 }, 00:04:09.647 "memory_domains": [ 00:04:09.647 { 00:04:09.647 "dma_device_id": "system", 00:04:09.647 "dma_device_type": 1 00:04:09.647 }, 00:04:09.647 { 00:04:09.647 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:09.647 "dma_device_type": 2 00:04:09.647 } 00:04:09.647 ], 00:04:09.647 "driver_specific": { 00:04:09.647 "passthru": { 00:04:09.647 "name": "Passthru0", 00:04:09.647 "base_bdev_name": "Malloc2" 00:04:09.647 } 00:04:09.647 } 00:04:09.647 } 00:04:09.647 ]' 00:04:09.647 23:24:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:09.647 23:24:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:09.647 23:24:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:09.647 23:24:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:09.647 23:24:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:09.647 23:24:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:09.647 23:24:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:09.647 23:24:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:09.647 23:24:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:09.647 23:24:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:09.647 23:24:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:09.647 23:24:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:09.648 23:24:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:09.648 23:24:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:09.648 23:24:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:09.648 23:24:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:09.648 ************************************ 00:04:09.648 END TEST rpc_daemon_integrity 00:04:09.648 ************************************ 00:04:09.648 23:24:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:09.648 00:04:09.648 real 0m0.223s 00:04:09.648 user 0m0.118s 00:04:09.648 sys 0m0.039s 00:04:09.648 23:24:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:09.648 23:24:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:09.648 23:24:57 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:09.648 23:24:57 rpc -- rpc/rpc.sh@84 -- # killprocess 57530 00:04:09.648 23:24:57 rpc -- common/autotest_common.sh@950 -- # '[' -z 57530 ']' 00:04:09.648 23:24:57 rpc -- common/autotest_common.sh@954 -- # kill -0 57530 00:04:09.648 23:24:57 rpc 
-- common/autotest_common.sh@955 -- # uname 00:04:09.648 23:24:57 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:09.648 23:24:57 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57530 00:04:09.648 23:24:57 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:09.648 killing process with pid 57530 00:04:09.648 23:24:57 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:09.648 23:24:57 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57530' 00:04:09.648 23:24:57 rpc -- common/autotest_common.sh@969 -- # kill 57530 00:04:09.648 23:24:57 rpc -- common/autotest_common.sh@974 -- # wait 57530 00:04:11.036 00:04:11.036 real 0m3.196s 00:04:11.036 user 0m3.625s 00:04:11.036 sys 0m0.561s 00:04:11.036 ************************************ 00:04:11.036 END TEST rpc 00:04:11.036 ************************************ 00:04:11.036 23:24:59 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:11.036 23:24:59 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:11.036 23:24:59 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:11.036 23:24:59 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:11.036 23:24:59 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:11.036 23:24:59 -- common/autotest_common.sh@10 -- # set +x 00:04:11.036 ************************************ 00:04:11.036 START TEST skip_rpc 00:04:11.036 ************************************ 00:04:11.036 23:24:59 skip_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:11.036 * Looking for test storage... 00:04:11.036 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:11.036 23:24:59 skip_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:11.036 23:24:59 skip_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:04:11.036 23:24:59 skip_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:11.036 23:24:59 skip_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:11.036 23:24:59 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:11.036 23:24:59 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:11.036 23:24:59 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:11.036 23:24:59 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:11.037 23:24:59 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:11.037 23:24:59 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:11.037 23:24:59 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:11.037 23:24:59 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:11.037 23:24:59 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:11.037 23:24:59 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:11.037 23:24:59 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:11.037 23:24:59 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:11.037 23:24:59 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:11.037 23:24:59 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:11.037 23:24:59 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:11.037 23:24:59 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:11.037 23:24:59 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:11.037 23:24:59 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:11.037 23:24:59 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:11.037 23:24:59 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:11.301 23:24:59 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:11.301 23:24:59 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:11.301 23:24:59 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:11.301 23:24:59 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:11.301 23:24:59 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:11.301 23:24:59 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:11.301 23:24:59 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:11.301 23:24:59 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:11.301 23:24:59 skip_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:11.301 23:24:59 skip_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:11.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:11.301 --rc genhtml_branch_coverage=1 00:04:11.301 --rc genhtml_function_coverage=1 00:04:11.301 --rc genhtml_legend=1 00:04:11.301 --rc geninfo_all_blocks=1 00:04:11.301 --rc geninfo_unexecuted_blocks=1 00:04:11.301 00:04:11.301 ' 00:04:11.301 23:24:59 skip_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:11.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:11.301 --rc genhtml_branch_coverage=1 00:04:11.301 --rc genhtml_function_coverage=1 00:04:11.301 --rc genhtml_legend=1 00:04:11.301 --rc geninfo_all_blocks=1 00:04:11.301 --rc geninfo_unexecuted_blocks=1 00:04:11.301 00:04:11.301 ' 00:04:11.301 23:24:59 skip_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:11.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:11.301 --rc genhtml_branch_coverage=1 00:04:11.301 --rc genhtml_function_coverage=1 00:04:11.301 --rc genhtml_legend=1 00:04:11.301 --rc geninfo_all_blocks=1 00:04:11.301 --rc geninfo_unexecuted_blocks=1 00:04:11.301 00:04:11.301 ' 00:04:11.301 23:24:59 skip_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:11.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:11.301 --rc genhtml_branch_coverage=1 00:04:11.301 --rc genhtml_function_coverage=1 00:04:11.301 --rc genhtml_legend=1 00:04:11.301 --rc geninfo_all_blocks=1 00:04:11.301 --rc geninfo_unexecuted_blocks=1 00:04:11.301 00:04:11.301 ' 00:04:11.301 23:24:59 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:11.301 23:24:59 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:11.301 23:24:59 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:11.301 23:24:59 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:11.301 23:24:59 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:11.301 23:24:59 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:11.301 ************************************ 00:04:11.301 START TEST skip_rpc 00:04:11.301 ************************************ 00:04:11.301 23:24:59 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:04:11.301 23:24:59 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@16 -- # local spdk_pid=57737 00:04:11.301 23:24:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:11.301 23:24:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:11.301 23:24:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:11.301 [2024-09-28 23:24:59.275359] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:04:11.301 [2024-09-28 23:24:59.275450] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57737 ] 00:04:11.301 [2024-09-28 23:24:59.418202] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:11.562 [2024-09-28 23:24:59.597220] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:16.854 23:25:04 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:16.854 23:25:04 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:04:16.854 23:25:04 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:16.854 23:25:04 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:04:16.854 23:25:04 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:16.855 23:25:04 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:04:16.855 23:25:04 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:16.855 23:25:04 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:04:16.855 23:25:04 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:16.855 23:25:04 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:16.855 23:25:04 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:16.855 23:25:04 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:04:16.855 23:25:04 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:16.855 23:25:04 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:16.855 23:25:04 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:16.855 23:25:04 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:16.855 23:25:04 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57737 00:04:16.855 23:25:04 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 57737 ']' 00:04:16.855 23:25:04 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 57737 00:04:16.855 23:25:04 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:04:16.855 23:25:04 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:16.855 23:25:04 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57737 00:04:16.855 23:25:04 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:16.855 killing process with pid 57737 00:04:16.855 23:25:04 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:16.855 23:25:04 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57737' 00:04:16.855 23:25:04 skip_rpc.skip_rpc -- common/autotest_common.sh@969 
-- # kill 57737 00:04:16.855 23:25:04 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 57737 00:04:17.428 00:04:17.428 real 0m6.257s 00:04:17.428 user 0m5.894s 00:04:17.428 sys 0m0.258s 00:04:17.428 ************************************ 00:04:17.428 END TEST skip_rpc 00:04:17.428 ************************************ 00:04:17.428 23:25:05 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:17.428 23:25:05 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:17.428 23:25:05 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:17.428 23:25:05 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:17.428 23:25:05 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:17.428 23:25:05 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:17.428 ************************************ 00:04:17.428 START TEST skip_rpc_with_json 00:04:17.428 ************************************ 00:04:17.428 23:25:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:04:17.428 23:25:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:17.428 23:25:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57834 00:04:17.428 23:25:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:17.428 23:25:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57834 00:04:17.428 23:25:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 57834 ']' 00:04:17.428 23:25:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:17.428 23:25:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:17.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:17.428 23:25:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:17.428 23:25:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:17.428 23:25:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:17.428 23:25:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:17.428 [2024-09-28 23:25:05.586214] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
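The skip_rpc case that just finished reduces to a start/assert/kill pattern: launch spdk_tgt with --no-rpc-server, confirm that an RPC call fails, then tear the target down. A minimal stand-alone sketch of that pattern, assuming spdk_tgt and SPDK's scripts/rpc.py are on PATH (the fixed sleep mirrors the test's delay; nothing here is copied verbatim from common/autotest_common.sh):

/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 &
spdk_pid=$!
sleep 5                                  # same fixed startup delay as the test above
if rpc.py spdk_get_version; then         # must fail: no RPC server was started
    echo "RPC unexpectedly answered" >&2
    kill "$spdk_pid"
    exit 1
fi
kill "$spdk_pid"
wait "$spdk_pid" || true                 # reap; non-zero status after the kill is expected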
00:04:17.428 [2024-09-28 23:25:05.586336] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57834 ] 00:04:17.690 [2024-09-28 23:25:05.736417] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:17.955 [2024-09-28 23:25:05.913885] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:18.560 23:25:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:18.560 23:25:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:04:18.560 23:25:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:18.560 23:25:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:18.560 23:25:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:18.560 [2024-09-28 23:25:06.511026] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:18.560 request: 00:04:18.560 { 00:04:18.560 "trtype": "tcp", 00:04:18.560 "method": "nvmf_get_transports", 00:04:18.560 "req_id": 1 00:04:18.560 } 00:04:18.560 Got JSON-RPC error response 00:04:18.560 response: 00:04:18.560 { 00:04:18.560 "code": -19, 00:04:18.560 "message": "No such device" 00:04:18.560 } 00:04:18.560 23:25:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:18.560 23:25:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:18.560 23:25:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:18.560 23:25:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:18.560 [2024-09-28 23:25:06.519123] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:18.560 23:25:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:18.560 23:25:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:18.560 23:25:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:18.560 23:25:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:18.560 23:25:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:18.560 23:25:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:18.560 { 00:04:18.560 "subsystems": [ 00:04:18.560 { 00:04:18.560 "subsystem": "fsdev", 00:04:18.560 "config": [ 00:04:18.560 { 00:04:18.560 "method": "fsdev_set_opts", 00:04:18.560 "params": { 00:04:18.560 "fsdev_io_pool_size": 65535, 00:04:18.560 "fsdev_io_cache_size": 256 00:04:18.560 } 00:04:18.560 } 00:04:18.560 ] 00:04:18.560 }, 00:04:18.560 { 00:04:18.560 "subsystem": "keyring", 00:04:18.560 "config": [] 00:04:18.560 }, 00:04:18.560 { 00:04:18.560 "subsystem": "iobuf", 00:04:18.560 "config": [ 00:04:18.560 { 00:04:18.560 "method": "iobuf_set_options", 00:04:18.560 "params": { 00:04:18.560 "small_pool_count": 8192, 00:04:18.560 "large_pool_count": 1024, 00:04:18.560 "small_bufsize": 8192, 00:04:18.560 "large_bufsize": 135168 00:04:18.560 } 00:04:18.560 } 00:04:18.560 ] 00:04:18.560 }, 00:04:18.560 { 00:04:18.560 "subsystem": "sock", 00:04:18.560 "config": [ 00:04:18.560 { 00:04:18.560 "method": 
"sock_set_default_impl", 00:04:18.560 "params": { 00:04:18.560 "impl_name": "posix" 00:04:18.560 } 00:04:18.560 }, 00:04:18.560 { 00:04:18.560 "method": "sock_impl_set_options", 00:04:18.560 "params": { 00:04:18.560 "impl_name": "ssl", 00:04:18.560 "recv_buf_size": 4096, 00:04:18.560 "send_buf_size": 4096, 00:04:18.560 "enable_recv_pipe": true, 00:04:18.560 "enable_quickack": false, 00:04:18.560 "enable_placement_id": 0, 00:04:18.560 "enable_zerocopy_send_server": true, 00:04:18.560 "enable_zerocopy_send_client": false, 00:04:18.560 "zerocopy_threshold": 0, 00:04:18.560 "tls_version": 0, 00:04:18.560 "enable_ktls": false 00:04:18.560 } 00:04:18.560 }, 00:04:18.560 { 00:04:18.560 "method": "sock_impl_set_options", 00:04:18.561 "params": { 00:04:18.561 "impl_name": "posix", 00:04:18.561 "recv_buf_size": 2097152, 00:04:18.561 "send_buf_size": 2097152, 00:04:18.561 "enable_recv_pipe": true, 00:04:18.561 "enable_quickack": false, 00:04:18.561 "enable_placement_id": 0, 00:04:18.561 "enable_zerocopy_send_server": true, 00:04:18.561 "enable_zerocopy_send_client": false, 00:04:18.561 "zerocopy_threshold": 0, 00:04:18.561 "tls_version": 0, 00:04:18.561 "enable_ktls": false 00:04:18.561 } 00:04:18.561 } 00:04:18.561 ] 00:04:18.561 }, 00:04:18.561 { 00:04:18.561 "subsystem": "vmd", 00:04:18.561 "config": [] 00:04:18.561 }, 00:04:18.561 { 00:04:18.561 "subsystem": "accel", 00:04:18.561 "config": [ 00:04:18.561 { 00:04:18.561 "method": "accel_set_options", 00:04:18.561 "params": { 00:04:18.561 "small_cache_size": 128, 00:04:18.561 "large_cache_size": 16, 00:04:18.561 "task_count": 2048, 00:04:18.561 "sequence_count": 2048, 00:04:18.561 "buf_count": 2048 00:04:18.561 } 00:04:18.561 } 00:04:18.561 ] 00:04:18.561 }, 00:04:18.561 { 00:04:18.561 "subsystem": "bdev", 00:04:18.561 "config": [ 00:04:18.561 { 00:04:18.561 "method": "bdev_set_options", 00:04:18.561 "params": { 00:04:18.561 "bdev_io_pool_size": 65535, 00:04:18.561 "bdev_io_cache_size": 256, 00:04:18.561 "bdev_auto_examine": true, 00:04:18.561 "iobuf_small_cache_size": 128, 00:04:18.561 "iobuf_large_cache_size": 16 00:04:18.561 } 00:04:18.561 }, 00:04:18.561 { 00:04:18.561 "method": "bdev_raid_set_options", 00:04:18.561 "params": { 00:04:18.561 "process_window_size_kb": 1024, 00:04:18.561 "process_max_bandwidth_mb_sec": 0 00:04:18.561 } 00:04:18.561 }, 00:04:18.561 { 00:04:18.561 "method": "bdev_iscsi_set_options", 00:04:18.561 "params": { 00:04:18.561 "timeout_sec": 30 00:04:18.561 } 00:04:18.561 }, 00:04:18.561 { 00:04:18.561 "method": "bdev_nvme_set_options", 00:04:18.561 "params": { 00:04:18.561 "action_on_timeout": "none", 00:04:18.561 "timeout_us": 0, 00:04:18.561 "timeout_admin_us": 0, 00:04:18.561 "keep_alive_timeout_ms": 10000, 00:04:18.561 "arbitration_burst": 0, 00:04:18.561 "low_priority_weight": 0, 00:04:18.561 "medium_priority_weight": 0, 00:04:18.561 "high_priority_weight": 0, 00:04:18.561 "nvme_adminq_poll_period_us": 10000, 00:04:18.561 "nvme_ioq_poll_period_us": 0, 00:04:18.561 "io_queue_requests": 0, 00:04:18.561 "delay_cmd_submit": true, 00:04:18.561 "transport_retry_count": 4, 00:04:18.561 "bdev_retry_count": 3, 00:04:18.561 "transport_ack_timeout": 0, 00:04:18.561 "ctrlr_loss_timeout_sec": 0, 00:04:18.561 "reconnect_delay_sec": 0, 00:04:18.561 "fast_io_fail_timeout_sec": 0, 00:04:18.561 "disable_auto_failback": false, 00:04:18.561 "generate_uuids": false, 00:04:18.561 "transport_tos": 0, 00:04:18.561 "nvme_error_stat": false, 00:04:18.561 "rdma_srq_size": 0, 00:04:18.561 "io_path_stat": false, 00:04:18.561 
"allow_accel_sequence": false, 00:04:18.561 "rdma_max_cq_size": 0, 00:04:18.561 "rdma_cm_event_timeout_ms": 0, 00:04:18.561 "dhchap_digests": [ 00:04:18.561 "sha256", 00:04:18.561 "sha384", 00:04:18.561 "sha512" 00:04:18.561 ], 00:04:18.561 "dhchap_dhgroups": [ 00:04:18.561 "null", 00:04:18.561 "ffdhe2048", 00:04:18.561 "ffdhe3072", 00:04:18.561 "ffdhe4096", 00:04:18.561 "ffdhe6144", 00:04:18.561 "ffdhe8192" 00:04:18.561 ] 00:04:18.561 } 00:04:18.561 }, 00:04:18.561 { 00:04:18.561 "method": "bdev_nvme_set_hotplug", 00:04:18.561 "params": { 00:04:18.561 "period_us": 100000, 00:04:18.561 "enable": false 00:04:18.561 } 00:04:18.561 }, 00:04:18.561 { 00:04:18.561 "method": "bdev_wait_for_examine" 00:04:18.561 } 00:04:18.561 ] 00:04:18.561 }, 00:04:18.561 { 00:04:18.561 "subsystem": "scsi", 00:04:18.561 "config": null 00:04:18.561 }, 00:04:18.561 { 00:04:18.561 "subsystem": "scheduler", 00:04:18.561 "config": [ 00:04:18.561 { 00:04:18.561 "method": "framework_set_scheduler", 00:04:18.561 "params": { 00:04:18.561 "name": "static" 00:04:18.561 } 00:04:18.561 } 00:04:18.561 ] 00:04:18.561 }, 00:04:18.561 { 00:04:18.561 "subsystem": "vhost_scsi", 00:04:18.561 "config": [] 00:04:18.561 }, 00:04:18.561 { 00:04:18.561 "subsystem": "vhost_blk", 00:04:18.561 "config": [] 00:04:18.561 }, 00:04:18.561 { 00:04:18.561 "subsystem": "ublk", 00:04:18.561 "config": [] 00:04:18.561 }, 00:04:18.561 { 00:04:18.561 "subsystem": "nbd", 00:04:18.561 "config": [] 00:04:18.561 }, 00:04:18.561 { 00:04:18.561 "subsystem": "nvmf", 00:04:18.561 "config": [ 00:04:18.561 { 00:04:18.561 "method": "nvmf_set_config", 00:04:18.561 "params": { 00:04:18.561 "discovery_filter": "match_any", 00:04:18.561 "admin_cmd_passthru": { 00:04:18.561 "identify_ctrlr": false 00:04:18.561 }, 00:04:18.561 "dhchap_digests": [ 00:04:18.561 "sha256", 00:04:18.561 "sha384", 00:04:18.561 "sha512" 00:04:18.561 ], 00:04:18.561 "dhchap_dhgroups": [ 00:04:18.561 "null", 00:04:18.561 "ffdhe2048", 00:04:18.561 "ffdhe3072", 00:04:18.561 "ffdhe4096", 00:04:18.561 "ffdhe6144", 00:04:18.561 "ffdhe8192" 00:04:18.561 ] 00:04:18.561 } 00:04:18.561 }, 00:04:18.561 { 00:04:18.561 "method": "nvmf_set_max_subsystems", 00:04:18.561 "params": { 00:04:18.561 "max_subsystems": 1024 00:04:18.561 } 00:04:18.561 }, 00:04:18.561 { 00:04:18.561 "method": "nvmf_set_crdt", 00:04:18.561 "params": { 00:04:18.561 "crdt1": 0, 00:04:18.561 "crdt2": 0, 00:04:18.561 "crdt3": 0 00:04:18.561 } 00:04:18.561 }, 00:04:18.561 { 00:04:18.561 "method": "nvmf_create_transport", 00:04:18.561 "params": { 00:04:18.561 "trtype": "TCP", 00:04:18.561 "max_queue_depth": 128, 00:04:18.561 "max_io_qpairs_per_ctrlr": 127, 00:04:18.561 "in_capsule_data_size": 4096, 00:04:18.561 "max_io_size": 131072, 00:04:18.561 "io_unit_size": 131072, 00:04:18.561 "max_aq_depth": 128, 00:04:18.561 "num_shared_buffers": 511, 00:04:18.561 "buf_cache_size": 4294967295, 00:04:18.561 "dif_insert_or_strip": false, 00:04:18.561 "zcopy": false, 00:04:18.561 "c2h_success": true, 00:04:18.561 "sock_priority": 0, 00:04:18.561 "abort_timeout_sec": 1, 00:04:18.561 "ack_timeout": 0, 00:04:18.561 "data_wr_pool_size": 0 00:04:18.561 } 00:04:18.561 } 00:04:18.561 ] 00:04:18.561 }, 00:04:18.561 { 00:04:18.561 "subsystem": "iscsi", 00:04:18.561 "config": [ 00:04:18.561 { 00:04:18.561 "method": "iscsi_set_options", 00:04:18.561 "params": { 00:04:18.561 "node_base": "iqn.2016-06.io.spdk", 00:04:18.561 "max_sessions": 128, 00:04:18.561 "max_connections_per_session": 2, 00:04:18.561 "max_queue_depth": 64, 00:04:18.561 "default_time2wait": 2, 
00:04:18.561 "default_time2retain": 20, 00:04:18.561 "first_burst_length": 8192, 00:04:18.561 "immediate_data": true, 00:04:18.561 "allow_duplicated_isid": false, 00:04:18.561 "error_recovery_level": 0, 00:04:18.561 "nop_timeout": 60, 00:04:18.561 "nop_in_interval": 30, 00:04:18.561 "disable_chap": false, 00:04:18.561 "require_chap": false, 00:04:18.561 "mutual_chap": false, 00:04:18.561 "chap_group": 0, 00:04:18.561 "max_large_datain_per_connection": 64, 00:04:18.561 "max_r2t_per_connection": 4, 00:04:18.561 "pdu_pool_size": 36864, 00:04:18.561 "immediate_data_pool_size": 16384, 00:04:18.561 "data_out_pool_size": 2048 00:04:18.561 } 00:04:18.561 } 00:04:18.561 ] 00:04:18.561 } 00:04:18.561 ] 00:04:18.561 } 00:04:18.561 23:25:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:18.561 23:25:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57834 00:04:18.561 23:25:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 57834 ']' 00:04:18.561 23:25:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 57834 00:04:18.561 23:25:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:04:18.561 23:25:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:18.561 23:25:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57834 00:04:18.561 23:25:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:18.561 23:25:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:18.561 killing process with pid 57834 00:04:18.561 23:25:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57834' 00:04:18.561 23:25:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 57834 00:04:18.561 23:25:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 57834 00:04:19.948 23:25:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57869 00:04:19.948 23:25:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:19.948 23:25:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:25.241 23:25:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57869 00:04:25.241 23:25:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 57869 ']' 00:04:25.241 23:25:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 57869 00:04:25.241 23:25:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:04:25.241 23:25:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:25.241 23:25:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57869 00:04:25.241 23:25:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:25.241 23:25:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:25.241 killing process with pid 57869 00:04:25.241 23:25:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57869' 00:04:25.241 23:25:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 57869 
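The sequence above is a configuration round trip: skip_rpc_with_json builds state over RPC (nvmf_create_transport -t tcp), dumps it with save_config into test/rpc/config.json, then restarts the target from that JSON with no RPC server at all. A condensed sketch of the same round trip, assuming spdk_tgt and rpc.py are on PATH (the /tmp paths are illustrative; the run above uses the repo's test/rpc/config.json and log.txt):

rpc.py nvmf_create_transport -t tcp                 # mutate the live configuration
rpc.py save_config > /tmp/config.json               # serialize every subsystem's config
kill "$spdk_pid"; wait "$spdk_pid" || true
spdk_tgt --no-rpc-server -m 0x1 --json /tmp/config.json > /tmp/log.txt 2>&1 &
sleep 5
grep -q 'TCP Transport Init' /tmp/log.txt           # transport was rebuilt purely from JSON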
00:04:25.241 23:25:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 57869 00:04:26.185 23:25:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:26.185 23:25:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:26.185 00:04:26.185 real 0m8.803s 00:04:26.185 user 0m8.403s 00:04:26.185 sys 0m0.610s 00:04:26.185 23:25:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:26.185 ************************************ 00:04:26.185 END TEST skip_rpc_with_json 00:04:26.185 ************************************ 00:04:26.185 23:25:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:26.446 23:25:14 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:26.446 23:25:14 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:26.446 23:25:14 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:26.446 23:25:14 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:26.446 ************************************ 00:04:26.446 START TEST skip_rpc_with_delay 00:04:26.446 ************************************ 00:04:26.446 23:25:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:04:26.446 23:25:14 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:26.446 23:25:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:04:26.446 23:25:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:26.446 23:25:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:26.446 23:25:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:26.446 23:25:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:26.446 23:25:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:26.446 23:25:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:26.446 23:25:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:26.446 23:25:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:26.446 23:25:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:26.446 23:25:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:26.446 [2024-09-28 23:25:14.448118] app.c: 840:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
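skip_rpc_with_delay asserts a startup failure rather than a success: spdk_tgt must refuse '--wait-for-rpc' once '--no-rpc-server' removes the RPC server, and the NOT/valid_exec_arg machinery traced above exists only to invert the exit code. A reduced sketch of that inversion (the helper name matches the log, but this body is a simplification of the real common/autotest_common.sh implementation):

NOT() {                      # succeed only when the wrapped command fails
    if "$@"; then
        return 1
    fi
    return 0
}
NOT spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc   # the *ERROR* above makes this pass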
00:04:26.446 [2024-09-28 23:25:14.448232] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:26.446 23:25:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:04:26.446 23:25:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:26.446 23:25:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:26.446 23:25:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:26.446 00:04:26.446 real 0m0.126s 00:04:26.446 user 0m0.064s 00:04:26.446 sys 0m0.061s 00:04:26.446 23:25:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:26.446 ************************************ 00:04:26.446 END TEST skip_rpc_with_delay 00:04:26.446 ************************************ 00:04:26.446 23:25:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:26.446 23:25:14 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:26.446 23:25:14 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:26.446 23:25:14 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:26.446 23:25:14 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:26.446 23:25:14 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:26.446 23:25:14 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:26.446 ************************************ 00:04:26.446 START TEST exit_on_failed_rpc_init 00:04:26.446 ************************************ 00:04:26.446 23:25:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:04:26.446 23:25:14 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57992 00:04:26.446 23:25:14 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57992 00:04:26.446 23:25:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 57992 ']' 00:04:26.446 23:25:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:26.446 23:25:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:26.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:26.446 23:25:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:26.446 23:25:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:26.446 23:25:14 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:26.446 23:25:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:26.708 [2024-09-28 23:25:14.633004] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
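Where the earlier tests slept for a fixed five seconds, exit_on_failed_rpc_init calls waitforlisten 57992, which polls until the target's UNIX-domain RPC socket answers or the process dies. A simplified sketch of such a poll loop, not the exact helper from common/autotest_common.sh (the retry count and interval are assumptions; rpc_get_methods is a standard SPDK RPC):

waitforlisten() {
    local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
    for ((i = 0; i < 100; i++)); do
        rpc.py -s "$sock" rpc_get_methods &>/dev/null && return 0
        kill -0 "$pid" 2>/dev/null || return 1   # target died before it listened
        sleep 0.1
    done
    return 1                                     # timed out waiting for the socket
}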
00:04:26.708 [2024-09-28 23:25:14.633141] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57992 ] 00:04:26.708 [2024-09-28 23:25:14.784012] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:26.968 [2024-09-28 23:25:14.939325] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:27.540 23:25:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:27.540 23:25:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:04:27.540 23:25:15 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:27.540 23:25:15 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:27.541 23:25:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:04:27.541 23:25:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:27.541 23:25:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:27.541 23:25:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:27.541 23:25:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:27.541 23:25:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:27.541 23:25:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:27.541 23:25:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:27.541 23:25:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:27.541 23:25:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:27.541 23:25:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:27.541 [2024-09-28 23:25:15.543350] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:04:27.541 [2024-09-28 23:25:15.543472] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58010 ] 00:04:27.541 [2024-09-28 23:25:15.693221] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:28.113 [2024-09-28 23:25:15.976793] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:04:28.113 [2024-09-28 23:25:15.976879] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
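The 'socket in use' error above is the point of the test: a second spdk_tgt (pid 58010, started with -m 0x2) tried to bind the same default /var/tmp/spdk.sock that pid 57992 still holds, so rpc.c refuses it and the child exits non-zero. Running two targets side by side needs distinct sockets via the standard -r/--rpc-socket option; a brief sketch (both socket paths are illustrative):

spdk_tgt -m 0x1 -r /var/tmp/spdk0.sock &   # first target on its own socket
spdk_tgt -m 0x2 -r /var/tmp/spdk1.sock &   # second target, no rpc.c collision
# Hugepage state does not collide here: SPDK already derives a per-process DPDK
# file-prefix from the pid (spdk_pid57992 / spdk_pid58010 in the EAL parameters above).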
00:04:28.113 [2024-09-28 23:25:15.976892] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:28.113 [2024-09-28 23:25:15.976902] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:28.113 23:25:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:04:28.113 23:25:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:28.113 23:25:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:04:28.113 23:25:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:04:28.113 23:25:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:04:28.113 23:25:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:28.113 23:25:16 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:28.113 23:25:16 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57992 00:04:28.113 23:25:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 57992 ']' 00:04:28.113 23:25:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 57992 00:04:28.114 23:25:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:04:28.114 23:25:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:28.114 23:25:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57992 00:04:28.374 23:25:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:28.374 killing process with pid 57992 00:04:28.374 23:25:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:28.374 23:25:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57992' 00:04:28.374 23:25:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 57992 00:04:28.374 23:25:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 57992 00:04:29.761 00:04:29.761 real 0m2.996s 00:04:29.761 user 0m3.568s 00:04:29.761 sys 0m0.428s 00:04:29.761 23:25:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:29.761 ************************************ 00:04:29.761 END TEST exit_on_failed_rpc_init 00:04:29.761 ************************************ 00:04:29.761 23:25:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:29.761 23:25:17 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:29.761 00:04:29.761 real 0m18.515s 00:04:29.761 user 0m18.068s 00:04:29.761 sys 0m1.530s 00:04:29.761 23:25:17 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:29.761 23:25:17 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:29.761 ************************************ 00:04:29.761 END TEST skip_rpc 00:04:29.761 ************************************ 00:04:29.761 23:25:17 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:29.761 23:25:17 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:29.761 23:25:17 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:29.761 23:25:17 -- common/autotest_common.sh@10 -- # set +x 00:04:29.761 
************************************ 00:04:29.761 START TEST rpc_client 00:04:29.761 ************************************ 00:04:29.761 23:25:17 rpc_client -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:29.761 * Looking for test storage... 00:04:29.761 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:29.761 23:25:17 rpc_client -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:29.761 23:25:17 rpc_client -- common/autotest_common.sh@1681 -- # lcov --version 00:04:29.761 23:25:17 rpc_client -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:29.761 23:25:17 rpc_client -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:29.761 23:25:17 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:29.761 23:25:17 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:29.761 23:25:17 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:29.761 23:25:17 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:29.761 23:25:17 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:29.761 23:25:17 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:29.761 23:25:17 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:29.761 23:25:17 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:29.761 23:25:17 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:29.761 23:25:17 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:29.761 23:25:17 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:29.761 23:25:17 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:29.761 23:25:17 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:29.761 23:25:17 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:29.761 23:25:17 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:29.761 23:25:17 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:29.761 23:25:17 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:29.761 23:25:17 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:29.761 23:25:17 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:29.761 23:25:17 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:29.761 23:25:17 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:29.761 23:25:17 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:29.761 23:25:17 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:29.761 23:25:17 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:29.761 23:25:17 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:29.761 23:25:17 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:29.761 23:25:17 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:29.761 23:25:17 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:29.761 23:25:17 rpc_client -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:29.761 23:25:17 rpc_client -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:29.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.761 --rc genhtml_branch_coverage=1 00:04:29.761 --rc genhtml_function_coverage=1 00:04:29.761 --rc genhtml_legend=1 00:04:29.761 --rc geninfo_all_blocks=1 00:04:29.761 --rc geninfo_unexecuted_blocks=1 00:04:29.761 00:04:29.761 ' 00:04:29.761 23:25:17 rpc_client -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:29.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.761 --rc genhtml_branch_coverage=1 00:04:29.761 --rc genhtml_function_coverage=1 00:04:29.761 --rc genhtml_legend=1 00:04:29.761 --rc geninfo_all_blocks=1 00:04:29.761 --rc geninfo_unexecuted_blocks=1 00:04:29.761 00:04:29.761 ' 00:04:29.761 23:25:17 rpc_client -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:29.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.761 --rc genhtml_branch_coverage=1 00:04:29.761 --rc genhtml_function_coverage=1 00:04:29.761 --rc genhtml_legend=1 00:04:29.761 --rc geninfo_all_blocks=1 00:04:29.761 --rc geninfo_unexecuted_blocks=1 00:04:29.761 00:04:29.761 ' 00:04:29.761 23:25:17 rpc_client -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:29.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.761 --rc genhtml_branch_coverage=1 00:04:29.761 --rc genhtml_function_coverage=1 00:04:29.761 --rc genhtml_legend=1 00:04:29.761 --rc geninfo_all_blocks=1 00:04:29.761 --rc geninfo_unexecuted_blocks=1 00:04:29.761 00:04:29.761 ' 00:04:29.761 23:25:17 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:29.761 OK 00:04:29.761 23:25:17 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:29.761 00:04:29.761 real 0m0.205s 00:04:29.761 user 0m0.104s 00:04:29.761 sys 0m0.101s 00:04:29.761 ************************************ 00:04:29.761 END TEST rpc_client 00:04:29.761 23:25:17 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:29.761 23:25:17 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:29.761 ************************************ 00:04:29.762 23:25:17 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:29.762 23:25:17 -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:29.762 23:25:17 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:29.762 23:25:17 -- common/autotest_common.sh@10 -- # set +x 00:04:29.762 ************************************ 00:04:29.762 START TEST json_config 00:04:29.762 ************************************ 00:04:29.762 23:25:17 json_config -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:30.023 23:25:17 json_config -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:30.023 23:25:17 json_config -- common/autotest_common.sh@1681 -- # lcov --version 00:04:30.023 23:25:17 json_config -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:30.023 23:25:18 json_config -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:30.023 23:25:18 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:30.023 23:25:18 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:30.023 23:25:18 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:30.023 23:25:18 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:30.023 23:25:18 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:30.023 23:25:18 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:30.023 23:25:18 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:30.023 23:25:18 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:30.023 23:25:18 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:30.024 23:25:18 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:30.024 23:25:18 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:30.024 23:25:18 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:30.024 23:25:18 json_config -- scripts/common.sh@345 -- # : 1 00:04:30.024 23:25:18 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:30.024 23:25:18 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:30.024 23:25:18 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:30.024 23:25:18 json_config -- scripts/common.sh@353 -- # local d=1 00:04:30.024 23:25:18 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:30.024 23:25:18 json_config -- scripts/common.sh@355 -- # echo 1 00:04:30.024 23:25:18 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:30.024 23:25:18 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:30.024 23:25:18 json_config -- scripts/common.sh@353 -- # local d=2 00:04:30.024 23:25:18 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:30.024 23:25:18 json_config -- scripts/common.sh@355 -- # echo 2 00:04:30.024 23:25:18 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:30.024 23:25:18 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:30.024 23:25:18 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:30.024 23:25:18 json_config -- scripts/common.sh@368 -- # return 0 00:04:30.024 23:25:18 json_config -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:30.024 23:25:18 json_config -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:30.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.024 --rc genhtml_branch_coverage=1 00:04:30.024 --rc genhtml_function_coverage=1 00:04:30.024 --rc genhtml_legend=1 00:04:30.024 --rc geninfo_all_blocks=1 00:04:30.024 --rc geninfo_unexecuted_blocks=1 00:04:30.024 00:04:30.024 ' 00:04:30.024 23:25:18 json_config -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:30.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.024 --rc genhtml_branch_coverage=1 00:04:30.024 --rc genhtml_function_coverage=1 00:04:30.024 --rc genhtml_legend=1 00:04:30.024 --rc geninfo_all_blocks=1 00:04:30.024 --rc geninfo_unexecuted_blocks=1 00:04:30.024 00:04:30.024 ' 00:04:30.024 23:25:18 json_config -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:30.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.024 --rc genhtml_branch_coverage=1 00:04:30.024 --rc genhtml_function_coverage=1 00:04:30.024 --rc genhtml_legend=1 00:04:30.024 --rc geninfo_all_blocks=1 00:04:30.024 --rc geninfo_unexecuted_blocks=1 00:04:30.024 00:04:30.024 ' 00:04:30.024 23:25:18 json_config -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:30.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.024 --rc genhtml_branch_coverage=1 00:04:30.024 --rc genhtml_function_coverage=1 00:04:30.024 --rc genhtml_legend=1 00:04:30.024 --rc geninfo_all_blocks=1 00:04:30.024 --rc geninfo_unexecuted_blocks=1 00:04:30.024 00:04:30.024 ' 00:04:30.024 23:25:18 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:30.024 23:25:18 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:30.024 23:25:18 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:30.024 23:25:18 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:30.024 23:25:18 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:30.024 23:25:18 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:30.024 23:25:18 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:30.024 23:25:18 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:30.024 23:25:18 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:30.024 23:25:18 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:30.024 23:25:18 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:30.024 23:25:18 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:30.024 23:25:18 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5ac6952c-8883-403a-8f1d-45bf473106db 00:04:30.024 23:25:18 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=5ac6952c-8883-403a-8f1d-45bf473106db 00:04:30.024 23:25:18 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:30.024 23:25:18 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:30.024 23:25:18 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:30.024 23:25:18 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:30.024 23:25:18 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:30.024 23:25:18 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:30.024 23:25:18 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:30.024 23:25:18 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:30.024 23:25:18 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:30.024 23:25:18 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:30.024 23:25:18 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:30.024 23:25:18 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:30.024 23:25:18 json_config -- paths/export.sh@5 -- # export PATH 00:04:30.024 23:25:18 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:30.024 23:25:18 json_config -- nvmf/common.sh@51 -- # : 0 00:04:30.024 23:25:18 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:30.024 23:25:18 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:30.024 23:25:18 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:30.024 23:25:18 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:30.024 23:25:18 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:30.024 23:25:18 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:30.024 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:30.024 23:25:18 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:30.024 23:25:18 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:30.024 23:25:18 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:30.024 23:25:18 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:30.024 23:25:18 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:30.024 23:25:18 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:30.024 23:25:18 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:30.024 23:25:18 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:30.024 WARNING: No tests are enabled so not running JSON configuration tests 00:04:30.024 23:25:18 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:04:30.024 23:25:18 json_config -- json_config/json_config.sh@28 -- # exit 0 00:04:30.024 00:04:30.024 real 0m0.147s 00:04:30.024 user 0m0.098s 00:04:30.024 sys 0m0.051s 00:04:30.024 23:25:18 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:30.024 23:25:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:30.024 ************************************ 00:04:30.024 END TEST json_config 00:04:30.024 ************************************ 00:04:30.024 23:25:18 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:30.024 23:25:18 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:30.024 23:25:18 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:30.024 23:25:18 -- common/autotest_common.sh@10 -- # set +x 00:04:30.024 ************************************ 00:04:30.024 START TEST json_config_extra_key 00:04:30.024 ************************************ 00:04:30.024 23:25:18 json_config_extra_key -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:30.024 23:25:18 json_config_extra_key -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:30.024 23:25:18 json_config_extra_key -- common/autotest_common.sh@1681 -- # lcov --version 00:04:30.024 23:25:18 json_config_extra_key -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:30.286 23:25:18 json_config_extra_key -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:30.286 23:25:18 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:30.286 23:25:18 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:30.286 23:25:18 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:30.286 23:25:18 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:30.286 23:25:18 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:30.286 23:25:18 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:30.286 23:25:18 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:30.286 23:25:18 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:30.286 23:25:18 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:30.286 23:25:18 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:30.286 23:25:18 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:30.286 23:25:18 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:30.286 23:25:18 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:30.286 23:25:18 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:30.286 23:25:18 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:30.286 23:25:18 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:30.286 23:25:18 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:30.286 23:25:18 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:30.286 23:25:18 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:30.286 23:25:18 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:30.286 23:25:18 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:30.286 23:25:18 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:30.286 23:25:18 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:30.286 23:25:18 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:30.286 23:25:18 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:30.286 23:25:18 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:30.286 23:25:18 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:30.286 23:25:18 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:30.286 23:25:18 json_config_extra_key -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:30.286 23:25:18 json_config_extra_key -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:30.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.286 --rc genhtml_branch_coverage=1 00:04:30.286 --rc genhtml_function_coverage=1 00:04:30.286 --rc genhtml_legend=1 00:04:30.286 --rc geninfo_all_blocks=1 00:04:30.286 --rc geninfo_unexecuted_blocks=1 00:04:30.286 00:04:30.286 ' 00:04:30.286 23:25:18 json_config_extra_key -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:30.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.286 --rc genhtml_branch_coverage=1 00:04:30.286 --rc genhtml_function_coverage=1 00:04:30.286 --rc genhtml_legend=1 00:04:30.286 --rc geninfo_all_blocks=1 00:04:30.286 --rc geninfo_unexecuted_blocks=1 00:04:30.286 00:04:30.286 ' 00:04:30.286 23:25:18 json_config_extra_key -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:30.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.286 --rc genhtml_branch_coverage=1 00:04:30.286 --rc genhtml_function_coverage=1 00:04:30.286 --rc genhtml_legend=1 00:04:30.286 --rc geninfo_all_blocks=1 00:04:30.286 --rc geninfo_unexecuted_blocks=1 00:04:30.286 00:04:30.286 ' 00:04:30.286 23:25:18 json_config_extra_key -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:30.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.286 --rc genhtml_branch_coverage=1 00:04:30.286 --rc 
genhtml_function_coverage=1 00:04:30.286 --rc genhtml_legend=1 00:04:30.286 --rc geninfo_all_blocks=1 00:04:30.286 --rc geninfo_unexecuted_blocks=1 00:04:30.286 00:04:30.286 ' 00:04:30.286 23:25:18 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:30.286 23:25:18 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:30.286 23:25:18 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:30.286 23:25:18 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:30.286 23:25:18 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:30.286 23:25:18 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:30.286 23:25:18 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:30.286 23:25:18 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:30.286 23:25:18 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:30.286 23:25:18 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:30.286 23:25:18 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:30.286 23:25:18 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:30.286 23:25:18 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5ac6952c-8883-403a-8f1d-45bf473106db 00:04:30.286 23:25:18 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=5ac6952c-8883-403a-8f1d-45bf473106db 00:04:30.286 23:25:18 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:30.287 23:25:18 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:30.287 23:25:18 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:30.287 23:25:18 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:30.287 23:25:18 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:30.287 23:25:18 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:30.287 23:25:18 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:30.287 23:25:18 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:30.287 23:25:18 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:30.287 23:25:18 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:30.287 23:25:18 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:30.287 23:25:18 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:30.287 23:25:18 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:30.287 23:25:18 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:30.287 23:25:18 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:30.287 23:25:18 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:30.287 23:25:18 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:30.287 23:25:18 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:30.287 23:25:18 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:30.287 23:25:18 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:30.287 23:25:18 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:30.287 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:30.287 23:25:18 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:30.287 23:25:18 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:30.287 23:25:18 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:30.287 23:25:18 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:30.287 23:25:18 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:30.287 23:25:18 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:30.287 23:25:18 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:30.287 23:25:18 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:30.287 23:25:18 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:30.287 23:25:18 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:30.287 23:25:18 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:30.287 23:25:18 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:30.287 23:25:18 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:30.287 INFO: launching applications... 00:04:30.287 23:25:18 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
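The "[: : integer expression expected" message traced above (nvmf/common.sh line 33) comes from a numeric test, '[' '' -eq 1 ']', whose flag variable expands to the empty string; the suite tolerates it because the test merely evaluates false. A minimal defensive rewrite, assuming the flag is an optional environment variable (the name SPDK_TEST_NVMF_NICS is illustrative only, not confirmed by this log):

  # Default the flag to 0 before the numeric test so an unset or empty
  # value cannot trigger "[: : integer expression expected".
  if [ "${SPDK_TEST_NVMF_NICS:-0}" -eq 1 ]; then
    have_pci_nics=1
  fi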
00:04:30.287 23:25:18 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:30.287 23:25:18 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:30.287 23:25:18 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:30.287 23:25:18 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:30.287 23:25:18 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:30.287 23:25:18 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:30.287 23:25:18 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:30.287 23:25:18 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:30.287 Waiting for target to run... 00:04:30.287 23:25:18 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=58203 00:04:30.287 23:25:18 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:30.287 23:25:18 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 58203 /var/tmp/spdk_tgt.sock 00:04:30.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:30.287 23:25:18 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 58203 ']' 00:04:30.287 23:25:18 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:30.287 23:25:18 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:30.287 23:25:18 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:30.287 23:25:18 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:30.287 23:25:18 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:30.287 23:25:18 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:30.287 [2024-09-28 23:25:18.343031] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:04:30.287 [2024-09-28 23:25:18.343599] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58203 ] 00:04:30.859 [2024-09-28 23:25:18.737925] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:30.859 [2024-09-28 23:25:18.929410] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:31.431 23:25:19 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:31.431 00:04:31.431 23:25:19 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:04:31.431 23:25:19 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:31.431 INFO: shutting down applications... 00:04:31.431 23:25:19 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
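The startup sequence traced above (spdk_tgt launched with -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock, then waitforlisten 58203 /var/tmp/spdk_tgt.sock) follows a poll-until-responsive pattern. A minimal sketch of that pattern, not the actual autotest_common.sh implementation:

  # Poll until the target process answers an RPC on its UNIX domain
  # socket; fail if it dies first or never comes up.
  waitforlisten_sketch() {
    local pid=$1 sock=${2:-/var/tmp/spdk_tgt.sock} i
    for ((i = 0; i < 100; i++)); do
      kill -0 "$pid" 2> /dev/null || return 1   # target exited during startup
      if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" rpc_get_methods &> /dev/null; then
        return 0                                # socket is up and answering
      fi
      sleep 0.1
    done
    return 1
  }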
00:04:31.431 23:25:19 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:31.431 23:25:19 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:31.431 23:25:19 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:31.431 23:25:19 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 58203 ]] 00:04:31.431 23:25:19 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 58203 00:04:31.431 23:25:19 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:31.431 23:25:19 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:31.431 23:25:19 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58203 00:04:31.431 23:25:19 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:32.003 23:25:19 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:32.003 23:25:19 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:32.003 23:25:19 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58203 00:04:32.003 23:25:19 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:32.297 23:25:20 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:32.297 23:25:20 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:32.297 23:25:20 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58203 00:04:32.297 23:25:20 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:32.890 23:25:20 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:32.890 23:25:20 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:32.890 23:25:20 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58203 00:04:32.890 23:25:20 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:33.463 23:25:21 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:33.463 23:25:21 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:33.463 23:25:21 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58203 00:04:33.463 23:25:21 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:33.463 23:25:21 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:33.463 SPDK target shutdown done 00:04:33.463 23:25:21 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:33.463 23:25:21 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:33.463 Success 00:04:33.463 23:25:21 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:33.463 00:04:33.463 real 0m3.345s 00:04:33.463 user 0m2.891s 00:04:33.463 sys 0m0.496s 00:04:33.463 23:25:21 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:33.463 23:25:21 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:33.463 ************************************ 00:04:33.463 END TEST json_config_extra_key 00:04:33.463 ************************************ 00:04:33.463 23:25:21 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:33.463 23:25:21 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:33.463 23:25:21 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:33.463 23:25:21 -- common/autotest_common.sh@10 -- # set +x 00:04:33.463 
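The shutdown traced above is the mirror image of the startup wait: json_config/common.sh sends SIGINT to pid 58203, then re-checks it with kill -0 every half second, up to 30 tries, before declaring 'SPDK target shutdown done'. A condensed sketch of that loop:

  # Graceful shutdown: SIGINT first, then poll for exit.
  shutdown_app_sketch() {
    local pid=$1 i
    kill -SIGINT "$pid"
    for ((i = 0; i < 30; i++)); do
      kill -0 "$pid" 2> /dev/null || { echo 'SPDK target shutdown done'; return 0; }
      sleep 0.5
    done
    return 1   # target ignored SIGINT; the caller may escalate
  }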
************************************ 00:04:33.463 START TEST alias_rpc 00:04:33.463 ************************************ 00:04:33.463 23:25:21 alias_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:33.463 * Looking for test storage... 00:04:33.463 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:33.463 23:25:21 alias_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:33.463 23:25:21 alias_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:33.463 23:25:21 alias_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:04:33.463 23:25:21 alias_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:33.463 23:25:21 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:33.463 23:25:21 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:33.463 23:25:21 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:33.463 23:25:21 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:33.463 23:25:21 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:33.463 23:25:21 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:33.463 23:25:21 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:33.463 23:25:21 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:33.463 23:25:21 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:33.463 23:25:21 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:33.463 23:25:21 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:33.463 23:25:21 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:33.463 23:25:21 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:33.463 23:25:21 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:33.463 23:25:21 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:33.463 23:25:21 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:33.463 23:25:21 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:33.463 23:25:21 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:33.463 23:25:21 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:33.463 23:25:21 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:33.463 23:25:21 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:33.724 23:25:21 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:33.724 23:25:21 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:33.724 23:25:21 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:33.724 23:25:21 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:33.724 23:25:21 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:33.724 23:25:21 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:33.724 23:25:21 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:33.724 23:25:21 alias_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:33.724 23:25:21 alias_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:33.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.724 --rc genhtml_branch_coverage=1 00:04:33.724 --rc genhtml_function_coverage=1 00:04:33.724 --rc genhtml_legend=1 00:04:33.724 --rc geninfo_all_blocks=1 00:04:33.724 --rc geninfo_unexecuted_blocks=1 00:04:33.724 00:04:33.724 ' 00:04:33.724 23:25:21 alias_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:33.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.724 --rc genhtml_branch_coverage=1 00:04:33.724 --rc genhtml_function_coverage=1 00:04:33.724 --rc genhtml_legend=1 00:04:33.724 --rc geninfo_all_blocks=1 00:04:33.724 --rc geninfo_unexecuted_blocks=1 00:04:33.724 00:04:33.724 ' 00:04:33.724 23:25:21 alias_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:33.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.724 --rc genhtml_branch_coverage=1 00:04:33.724 --rc genhtml_function_coverage=1 00:04:33.724 --rc genhtml_legend=1 00:04:33.724 --rc geninfo_all_blocks=1 00:04:33.724 --rc geninfo_unexecuted_blocks=1 00:04:33.724 00:04:33.724 ' 00:04:33.724 23:25:21 alias_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:33.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.724 --rc genhtml_branch_coverage=1 00:04:33.724 --rc genhtml_function_coverage=1 00:04:33.724 --rc genhtml_legend=1 00:04:33.724 --rc geninfo_all_blocks=1 00:04:33.724 --rc geninfo_unexecuted_blocks=1 00:04:33.724 00:04:33.724 ' 00:04:33.724 23:25:21 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:33.724 23:25:21 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=58302 00:04:33.724 23:25:21 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 58302 00:04:33.724 23:25:21 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 58302 ']' 00:04:33.724 23:25:21 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:33.724 23:25:21 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:33.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:33.724 23:25:21 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
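The scripts/common.sh version comparison traced repeatedly in this section (here deciding that lcov 1.15 is older than 2, which selects the '--rc lcov_branch_coverage=1' option spelling) splits both versions on '.', '-' and ':' and compares field by field. A condensed sketch of that logic, assuming purely numeric version fields:

  # Return 0 (true) when version $1 is older than version $2.
  lt_sketch() {
    local -a ver1 ver2
    local v len
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for ((v = 0; v < len; v++)); do
      (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # missing fields count as 0
      (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1   # equal versions are not "less than"
  }

  lt_sketch 1.15 2 && echo 'old lcov: use the --rc lcov_*_coverage=1 spelling'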
00:04:33.724 23:25:21 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:33.724 23:25:21 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:33.724 23:25:21 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:33.724 [2024-09-28 23:25:21.710331] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:04:33.724 [2024-09-28 23:25:21.710444] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58302 ] 00:04:33.724 [2024-09-28 23:25:21.861289] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:33.985 [2024-09-28 23:25:22.089305] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:34.554 23:25:22 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:34.554 23:25:22 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:04:34.554 23:25:22 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:34.815 23:25:22 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 58302 00:04:34.815 23:25:22 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 58302 ']' 00:04:34.815 23:25:22 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 58302 00:04:34.815 23:25:22 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:04:34.815 23:25:22 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:34.815 23:25:22 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58302 00:04:34.815 23:25:22 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:34.815 killing process with pid 58302 00:04:34.815 23:25:22 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:34.815 23:25:22 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58302' 00:04:34.815 23:25:22 alias_rpc -- common/autotest_common.sh@969 -- # kill 58302 00:04:34.815 23:25:22 alias_rpc -- common/autotest_common.sh@974 -- # wait 58302 00:04:36.726 00:04:36.726 real 0m2.957s 00:04:36.726 user 0m3.053s 00:04:36.726 sys 0m0.414s 00:04:36.726 23:25:24 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:36.726 23:25:24 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:36.726 ************************************ 00:04:36.726 END TEST alias_rpc 00:04:36.726 ************************************ 00:04:36.726 23:25:24 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:36.726 23:25:24 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:36.726 23:25:24 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:36.726 23:25:24 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:36.726 23:25:24 -- common/autotest_common.sh@10 -- # set +x 00:04:36.726 ************************************ 00:04:36.726 START TEST spdkcli_tcp 00:04:36.726 ************************************ 00:04:36.726 23:25:24 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:36.726 * Looking for test storage... 
00:04:36.726 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:04:36.726 23:25:24 spdkcli_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:36.726 23:25:24 spdkcli_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:36.726 23:25:24 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:04:36.726 23:25:24 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:36.726 23:25:24 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:36.726 23:25:24 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:36.726 23:25:24 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:36.726 23:25:24 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:36.726 23:25:24 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:36.726 23:25:24 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:36.726 23:25:24 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:36.726 23:25:24 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:36.726 23:25:24 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:36.726 23:25:24 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:36.726 23:25:24 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:36.726 23:25:24 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:36.726 23:25:24 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:36.726 23:25:24 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:36.726 23:25:24 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:36.726 23:25:24 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:36.727 23:25:24 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:36.727 23:25:24 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:36.727 23:25:24 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:36.727 23:25:24 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:36.727 23:25:24 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:36.727 23:25:24 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:36.727 23:25:24 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:36.727 23:25:24 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:36.727 23:25:24 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:36.727 23:25:24 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:36.727 23:25:24 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:36.727 23:25:24 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:36.727 23:25:24 spdkcli_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:36.727 23:25:24 spdkcli_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:36.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.727 --rc genhtml_branch_coverage=1 00:04:36.727 --rc genhtml_function_coverage=1 00:04:36.727 --rc genhtml_legend=1 00:04:36.727 --rc geninfo_all_blocks=1 00:04:36.727 --rc geninfo_unexecuted_blocks=1 00:04:36.727 00:04:36.727 ' 00:04:36.727 23:25:24 spdkcli_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:36.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.727 --rc genhtml_branch_coverage=1 00:04:36.727 --rc genhtml_function_coverage=1 00:04:36.727 --rc genhtml_legend=1 00:04:36.727 --rc geninfo_all_blocks=1 00:04:36.727 --rc geninfo_unexecuted_blocks=1 00:04:36.727 
00:04:36.727 ' 00:04:36.727 23:25:24 spdkcli_tcp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:36.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.727 --rc genhtml_branch_coverage=1 00:04:36.727 --rc genhtml_function_coverage=1 00:04:36.727 --rc genhtml_legend=1 00:04:36.727 --rc geninfo_all_blocks=1 00:04:36.727 --rc geninfo_unexecuted_blocks=1 00:04:36.727 00:04:36.727 ' 00:04:36.727 23:25:24 spdkcli_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:36.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.727 --rc genhtml_branch_coverage=1 00:04:36.727 --rc genhtml_function_coverage=1 00:04:36.727 --rc genhtml_legend=1 00:04:36.727 --rc geninfo_all_blocks=1 00:04:36.727 --rc geninfo_unexecuted_blocks=1 00:04:36.727 00:04:36.727 ' 00:04:36.727 23:25:24 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:04:36.727 23:25:24 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:04:36.727 23:25:24 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:04:36.727 23:25:24 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:36.727 23:25:24 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:36.727 23:25:24 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:36.727 23:25:24 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:36.727 23:25:24 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:36.727 23:25:24 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:36.727 23:25:24 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=58398 00:04:36.727 23:25:24 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 58398 00:04:36.727 23:25:24 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 58398 ']' 00:04:36.727 23:25:24 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:36.727 23:25:24 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:36.727 23:25:24 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:36.727 23:25:24 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:36.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:36.727 23:25:24 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:36.727 23:25:24 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:36.727 [2024-09-28 23:25:24.712207] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:04:36.727 [2024-09-28 23:25:24.712317] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58398 ] 00:04:36.727 [2024-09-28 23:25:24.854700] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:36.987 [2024-09-28 23:25:25.033614] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:04:36.987 [2024-09-28 23:25:25.033746] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:37.564 23:25:25 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:37.564 23:25:25 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:04:37.564 23:25:25 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=58415 00:04:37.564 23:25:25 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:37.564 23:25:25 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:37.831 [ 00:04:37.831 "bdev_malloc_delete", 00:04:37.831 "bdev_malloc_create", 00:04:37.831 "bdev_null_resize", 00:04:37.831 "bdev_null_delete", 00:04:37.831 "bdev_null_create", 00:04:37.831 "bdev_nvme_cuse_unregister", 00:04:37.831 "bdev_nvme_cuse_register", 00:04:37.831 "bdev_opal_new_user", 00:04:37.831 "bdev_opal_set_lock_state", 00:04:37.831 "bdev_opal_delete", 00:04:37.831 "bdev_opal_get_info", 00:04:37.831 "bdev_opal_create", 00:04:37.831 "bdev_nvme_opal_revert", 00:04:37.831 "bdev_nvme_opal_init", 00:04:37.831 "bdev_nvme_send_cmd", 00:04:37.831 "bdev_nvme_set_keys", 00:04:37.831 "bdev_nvme_get_path_iostat", 00:04:37.831 "bdev_nvme_get_mdns_discovery_info", 00:04:37.831 "bdev_nvme_stop_mdns_discovery", 00:04:37.831 "bdev_nvme_start_mdns_discovery", 00:04:37.831 "bdev_nvme_set_multipath_policy", 00:04:37.831 "bdev_nvme_set_preferred_path", 00:04:37.831 "bdev_nvme_get_io_paths", 00:04:37.831 "bdev_nvme_remove_error_injection", 00:04:37.831 "bdev_nvme_add_error_injection", 00:04:37.831 "bdev_nvme_get_discovery_info", 00:04:37.831 "bdev_nvme_stop_discovery", 00:04:37.831 "bdev_nvme_start_discovery", 00:04:37.831 "bdev_nvme_get_controller_health_info", 00:04:37.831 "bdev_nvme_disable_controller", 00:04:37.831 "bdev_nvme_enable_controller", 00:04:37.831 "bdev_nvme_reset_controller", 00:04:37.831 "bdev_nvme_get_transport_statistics", 00:04:37.831 "bdev_nvme_apply_firmware", 00:04:37.831 "bdev_nvme_detach_controller", 00:04:37.831 "bdev_nvme_get_controllers", 00:04:37.831 "bdev_nvme_attach_controller", 00:04:37.831 "bdev_nvme_set_hotplug", 00:04:37.831 "bdev_nvme_set_options", 00:04:37.831 "bdev_passthru_delete", 00:04:37.831 "bdev_passthru_create", 00:04:37.831 "bdev_lvol_set_parent_bdev", 00:04:37.831 "bdev_lvol_set_parent", 00:04:37.831 "bdev_lvol_check_shallow_copy", 00:04:37.831 "bdev_lvol_start_shallow_copy", 00:04:37.831 "bdev_lvol_grow_lvstore", 00:04:37.831 "bdev_lvol_get_lvols", 00:04:37.831 "bdev_lvol_get_lvstores", 00:04:37.831 "bdev_lvol_delete", 00:04:37.831 "bdev_lvol_set_read_only", 00:04:37.831 "bdev_lvol_resize", 00:04:37.831 "bdev_lvol_decouple_parent", 00:04:37.831 "bdev_lvol_inflate", 00:04:37.831 "bdev_lvol_rename", 00:04:37.831 "bdev_lvol_clone_bdev", 00:04:37.831 "bdev_lvol_clone", 00:04:37.831 "bdev_lvol_snapshot", 00:04:37.831 "bdev_lvol_create", 00:04:37.831 "bdev_lvol_delete_lvstore", 00:04:37.831 "bdev_lvol_rename_lvstore", 00:04:37.831 
"bdev_lvol_create_lvstore", 00:04:37.831 "bdev_raid_set_options", 00:04:37.831 "bdev_raid_remove_base_bdev", 00:04:37.831 "bdev_raid_add_base_bdev", 00:04:37.831 "bdev_raid_delete", 00:04:37.831 "bdev_raid_create", 00:04:37.831 "bdev_raid_get_bdevs", 00:04:37.831 "bdev_error_inject_error", 00:04:37.831 "bdev_error_delete", 00:04:37.831 "bdev_error_create", 00:04:37.831 "bdev_split_delete", 00:04:37.831 "bdev_split_create", 00:04:37.831 "bdev_delay_delete", 00:04:37.831 "bdev_delay_create", 00:04:37.831 "bdev_delay_update_latency", 00:04:37.831 "bdev_zone_block_delete", 00:04:37.831 "bdev_zone_block_create", 00:04:37.831 "blobfs_create", 00:04:37.831 "blobfs_detect", 00:04:37.831 "blobfs_set_cache_size", 00:04:37.831 "bdev_xnvme_delete", 00:04:37.831 "bdev_xnvme_create", 00:04:37.831 "bdev_aio_delete", 00:04:37.831 "bdev_aio_rescan", 00:04:37.831 "bdev_aio_create", 00:04:37.831 "bdev_ftl_set_property", 00:04:37.831 "bdev_ftl_get_properties", 00:04:37.831 "bdev_ftl_get_stats", 00:04:37.831 "bdev_ftl_unmap", 00:04:37.831 "bdev_ftl_unload", 00:04:37.831 "bdev_ftl_delete", 00:04:37.831 "bdev_ftl_load", 00:04:37.831 "bdev_ftl_create", 00:04:37.831 "bdev_virtio_attach_controller", 00:04:37.831 "bdev_virtio_scsi_get_devices", 00:04:37.831 "bdev_virtio_detach_controller", 00:04:37.831 "bdev_virtio_blk_set_hotplug", 00:04:37.831 "bdev_iscsi_delete", 00:04:37.831 "bdev_iscsi_create", 00:04:37.831 "bdev_iscsi_set_options", 00:04:37.831 "accel_error_inject_error", 00:04:37.831 "ioat_scan_accel_module", 00:04:37.831 "dsa_scan_accel_module", 00:04:37.831 "iaa_scan_accel_module", 00:04:37.831 "keyring_file_remove_key", 00:04:37.831 "keyring_file_add_key", 00:04:37.831 "keyring_linux_set_options", 00:04:37.831 "fsdev_aio_delete", 00:04:37.831 "fsdev_aio_create", 00:04:37.831 "iscsi_get_histogram", 00:04:37.831 "iscsi_enable_histogram", 00:04:37.831 "iscsi_set_options", 00:04:37.831 "iscsi_get_auth_groups", 00:04:37.831 "iscsi_auth_group_remove_secret", 00:04:37.831 "iscsi_auth_group_add_secret", 00:04:37.831 "iscsi_delete_auth_group", 00:04:37.831 "iscsi_create_auth_group", 00:04:37.831 "iscsi_set_discovery_auth", 00:04:37.831 "iscsi_get_options", 00:04:37.831 "iscsi_target_node_request_logout", 00:04:37.831 "iscsi_target_node_set_redirect", 00:04:37.831 "iscsi_target_node_set_auth", 00:04:37.831 "iscsi_target_node_add_lun", 00:04:37.831 "iscsi_get_stats", 00:04:37.831 "iscsi_get_connections", 00:04:37.831 "iscsi_portal_group_set_auth", 00:04:37.831 "iscsi_start_portal_group", 00:04:37.831 "iscsi_delete_portal_group", 00:04:37.831 "iscsi_create_portal_group", 00:04:37.831 "iscsi_get_portal_groups", 00:04:37.831 "iscsi_delete_target_node", 00:04:37.831 "iscsi_target_node_remove_pg_ig_maps", 00:04:37.831 "iscsi_target_node_add_pg_ig_maps", 00:04:37.831 "iscsi_create_target_node", 00:04:37.831 "iscsi_get_target_nodes", 00:04:37.831 "iscsi_delete_initiator_group", 00:04:37.831 "iscsi_initiator_group_remove_initiators", 00:04:37.831 "iscsi_initiator_group_add_initiators", 00:04:37.831 "iscsi_create_initiator_group", 00:04:37.831 "iscsi_get_initiator_groups", 00:04:37.831 "nvmf_set_crdt", 00:04:37.831 "nvmf_set_config", 00:04:37.831 "nvmf_set_max_subsystems", 00:04:37.831 "nvmf_stop_mdns_prr", 00:04:37.831 "nvmf_publish_mdns_prr", 00:04:37.831 "nvmf_subsystem_get_listeners", 00:04:37.831 "nvmf_subsystem_get_qpairs", 00:04:37.831 "nvmf_subsystem_get_controllers", 00:04:37.831 "nvmf_get_stats", 00:04:37.831 "nvmf_get_transports", 00:04:37.831 "nvmf_create_transport", 00:04:37.831 "nvmf_get_targets", 00:04:37.831 
"nvmf_delete_target", 00:04:37.831 "nvmf_create_target", 00:04:37.831 "nvmf_subsystem_allow_any_host", 00:04:37.831 "nvmf_subsystem_set_keys", 00:04:37.831 "nvmf_subsystem_remove_host", 00:04:37.831 "nvmf_subsystem_add_host", 00:04:37.831 "nvmf_ns_remove_host", 00:04:37.831 "nvmf_ns_add_host", 00:04:37.831 "nvmf_subsystem_remove_ns", 00:04:37.831 "nvmf_subsystem_set_ns_ana_group", 00:04:37.831 "nvmf_subsystem_add_ns", 00:04:37.831 "nvmf_subsystem_listener_set_ana_state", 00:04:37.831 "nvmf_discovery_get_referrals", 00:04:37.831 "nvmf_discovery_remove_referral", 00:04:37.831 "nvmf_discovery_add_referral", 00:04:37.831 "nvmf_subsystem_remove_listener", 00:04:37.831 "nvmf_subsystem_add_listener", 00:04:37.831 "nvmf_delete_subsystem", 00:04:37.831 "nvmf_create_subsystem", 00:04:37.831 "nvmf_get_subsystems", 00:04:37.831 "env_dpdk_get_mem_stats", 00:04:37.831 "nbd_get_disks", 00:04:37.831 "nbd_stop_disk", 00:04:37.831 "nbd_start_disk", 00:04:37.831 "ublk_recover_disk", 00:04:37.831 "ublk_get_disks", 00:04:37.831 "ublk_stop_disk", 00:04:37.831 "ublk_start_disk", 00:04:37.831 "ublk_destroy_target", 00:04:37.831 "ublk_create_target", 00:04:37.831 "virtio_blk_create_transport", 00:04:37.831 "virtio_blk_get_transports", 00:04:37.831 "vhost_controller_set_coalescing", 00:04:37.832 "vhost_get_controllers", 00:04:37.832 "vhost_delete_controller", 00:04:37.832 "vhost_create_blk_controller", 00:04:37.832 "vhost_scsi_controller_remove_target", 00:04:37.832 "vhost_scsi_controller_add_target", 00:04:37.832 "vhost_start_scsi_controller", 00:04:37.832 "vhost_create_scsi_controller", 00:04:37.832 "thread_set_cpumask", 00:04:37.832 "scheduler_set_options", 00:04:37.832 "framework_get_governor", 00:04:37.832 "framework_get_scheduler", 00:04:37.832 "framework_set_scheduler", 00:04:37.832 "framework_get_reactors", 00:04:37.832 "thread_get_io_channels", 00:04:37.832 "thread_get_pollers", 00:04:37.832 "thread_get_stats", 00:04:37.832 "framework_monitor_context_switch", 00:04:37.832 "spdk_kill_instance", 00:04:37.832 "log_enable_timestamps", 00:04:37.832 "log_get_flags", 00:04:37.832 "log_clear_flag", 00:04:37.832 "log_set_flag", 00:04:37.832 "log_get_level", 00:04:37.832 "log_set_level", 00:04:37.832 "log_get_print_level", 00:04:37.832 "log_set_print_level", 00:04:37.832 "framework_enable_cpumask_locks", 00:04:37.832 "framework_disable_cpumask_locks", 00:04:37.832 "framework_wait_init", 00:04:37.832 "framework_start_init", 00:04:37.832 "scsi_get_devices", 00:04:37.832 "bdev_get_histogram", 00:04:37.832 "bdev_enable_histogram", 00:04:37.832 "bdev_set_qos_limit", 00:04:37.832 "bdev_set_qd_sampling_period", 00:04:37.832 "bdev_get_bdevs", 00:04:37.832 "bdev_reset_iostat", 00:04:37.832 "bdev_get_iostat", 00:04:37.832 "bdev_examine", 00:04:37.832 "bdev_wait_for_examine", 00:04:37.832 "bdev_set_options", 00:04:37.832 "accel_get_stats", 00:04:37.832 "accel_set_options", 00:04:37.832 "accel_set_driver", 00:04:37.832 "accel_crypto_key_destroy", 00:04:37.832 "accel_crypto_keys_get", 00:04:37.832 "accel_crypto_key_create", 00:04:37.832 "accel_assign_opc", 00:04:37.832 "accel_get_module_info", 00:04:37.832 "accel_get_opc_assignments", 00:04:37.832 "vmd_rescan", 00:04:37.832 "vmd_remove_device", 00:04:37.832 "vmd_enable", 00:04:37.832 "sock_get_default_impl", 00:04:37.832 "sock_set_default_impl", 00:04:37.832 "sock_impl_set_options", 00:04:37.832 "sock_impl_get_options", 00:04:37.832 "iobuf_get_stats", 00:04:37.832 "iobuf_set_options", 00:04:37.832 "keyring_get_keys", 00:04:37.832 "framework_get_pci_devices", 00:04:37.832 
"framework_get_config", 00:04:37.832 "framework_get_subsystems", 00:04:37.832 "fsdev_set_opts", 00:04:37.832 "fsdev_get_opts", 00:04:37.832 "trace_get_info", 00:04:37.832 "trace_get_tpoint_group_mask", 00:04:37.832 "trace_disable_tpoint_group", 00:04:37.832 "trace_enable_tpoint_group", 00:04:37.832 "trace_clear_tpoint_mask", 00:04:37.832 "trace_set_tpoint_mask", 00:04:37.832 "notify_get_notifications", 00:04:37.832 "notify_get_types", 00:04:37.832 "spdk_get_version", 00:04:37.832 "rpc_get_methods" 00:04:37.832 ] 00:04:37.832 23:25:25 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:37.832 23:25:25 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:37.832 23:25:25 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:37.832 23:25:25 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:37.832 23:25:25 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 58398 00:04:37.832 23:25:25 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 58398 ']' 00:04:37.832 23:25:25 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 58398 00:04:37.832 23:25:25 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:04:37.832 23:25:25 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:37.832 23:25:25 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58398 00:04:37.832 23:25:25 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:37.832 23:25:25 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:37.832 killing process with pid 58398 00:04:37.832 23:25:25 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58398' 00:04:37.832 23:25:25 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 58398 00:04:37.832 23:25:25 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 58398 00:04:39.748 00:04:39.748 real 0m2.890s 00:04:39.748 user 0m5.109s 00:04:39.748 sys 0m0.419s 00:04:39.748 23:25:27 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:39.748 23:25:27 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:39.748 ************************************ 00:04:39.748 END TEST spdkcli_tcp 00:04:39.748 ************************************ 00:04:39.748 23:25:27 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:39.748 23:25:27 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:39.748 23:25:27 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:39.748 23:25:27 -- common/autotest_common.sh@10 -- # set +x 00:04:39.748 ************************************ 00:04:39.748 START TEST dpdk_mem_utility 00:04:39.748 ************************************ 00:04:39.748 23:25:27 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:39.748 * Looking for test storage... 
00:04:39.748 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:04:39.748 23:25:27 dpdk_mem_utility -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:39.748 23:25:27 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:39.748 23:25:27 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lcov --version 00:04:39.748 23:25:27 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:39.748 23:25:27 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:39.748 23:25:27 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:39.748 23:25:27 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:39.748 23:25:27 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:39.748 23:25:27 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:39.748 23:25:27 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:39.748 23:25:27 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:04:39.748 23:25:27 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:39.748 23:25:27 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:39.748 23:25:27 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:39.748 23:25:27 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:39.748 23:25:27 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:39.748 23:25:27 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:39.748 23:25:27 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:39.748 23:25:27 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:39.748 23:25:27 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:39.748 23:25:27 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:39.748 23:25:27 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:39.748 23:25:27 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:39.748 23:25:27 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:39.748 23:25:27 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:39.748 23:25:27 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:39.748 23:25:27 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:39.748 23:25:27 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:39.748 23:25:27 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:39.748 23:25:27 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:39.748 23:25:27 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:39.748 23:25:27 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:39.748 23:25:27 dpdk_mem_utility -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:39.748 23:25:27 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:39.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.748 --rc genhtml_branch_coverage=1 00:04:39.748 --rc genhtml_function_coverage=1 00:04:39.748 --rc genhtml_legend=1 00:04:39.748 --rc geninfo_all_blocks=1 00:04:39.748 --rc geninfo_unexecuted_blocks=1 00:04:39.748 00:04:39.748 ' 00:04:39.748 23:25:27 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:39.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.748 --rc 
genhtml_branch_coverage=1 00:04:39.748 --rc genhtml_function_coverage=1 00:04:39.748 --rc genhtml_legend=1 00:04:39.748 --rc geninfo_all_blocks=1 00:04:39.748 --rc geninfo_unexecuted_blocks=1 00:04:39.748 00:04:39.748 ' 00:04:39.748 23:25:27 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:39.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.748 --rc genhtml_branch_coverage=1 00:04:39.748 --rc genhtml_function_coverage=1 00:04:39.748 --rc genhtml_legend=1 00:04:39.748 --rc geninfo_all_blocks=1 00:04:39.748 --rc geninfo_unexecuted_blocks=1 00:04:39.748 00:04:39.748 ' 00:04:39.748 23:25:27 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:39.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.748 --rc genhtml_branch_coverage=1 00:04:39.748 --rc genhtml_function_coverage=1 00:04:39.748 --rc genhtml_legend=1 00:04:39.748 --rc geninfo_all_blocks=1 00:04:39.748 --rc geninfo_unexecuted_blocks=1 00:04:39.748 00:04:39.748 ' 00:04:39.748 23:25:27 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:39.748 23:25:27 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58503 00:04:39.748 23:25:27 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58503 00:04:39.748 23:25:27 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 58503 ']' 00:04:39.748 23:25:27 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:39.748 23:25:27 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:39.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:39.748 23:25:27 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:39.748 23:25:27 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:39.748 23:25:27 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:39.748 23:25:27 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:39.748 [2024-09-28 23:25:27.648609] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
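The dpdk_mem_utility test that follows produces two artifacts visible in the trace below: rpc_cmd env_dpdk_get_mem_stats (a wrapper around scripts/rpc.py) makes the target write /tmp/spdk_mem_dump.txt, and scripts/dpdk_mem_info.py renders that dump, first as the heap/mempool/memzone summary and then, with -m 0, as the free/malloc element lists of heap 0. The flow, reassembled from the trace:

  # Ask the running target to dump its DPDK memory state, then post-process.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
  # -> {"filename": "/tmp/spdk_mem_dump.txt"}
  /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py        # heaps, mempools, memzones
  /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0   # element detail for heap 0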
00:04:39.748 [2024-09-28 23:25:27.648715] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58503 ] 00:04:39.748 [2024-09-28 23:25:27.795840] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:40.010 [2024-09-28 23:25:27.974628] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:40.583 23:25:28 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:40.583 23:25:28 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:04:40.583 23:25:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:40.583 23:25:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:40.583 23:25:28 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:40.583 23:25:28 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:40.583 { 00:04:40.583 "filename": "/tmp/spdk_mem_dump.txt" 00:04:40.583 } 00:04:40.583 23:25:28 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:40.583 23:25:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:40.583 DPDK memory size 866.000000 MiB in 1 heap(s) 00:04:40.583 1 heaps totaling size 866.000000 MiB 00:04:40.583 size: 866.000000 MiB heap id: 0 00:04:40.583 end heaps---------- 00:04:40.583 9 mempools totaling size 642.649841 MiB 00:04:40.583 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:40.583 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:40.583 size: 92.545471 MiB name: bdev_io_58503 00:04:40.583 size: 51.011292 MiB name: evtpool_58503 00:04:40.583 size: 50.003479 MiB name: msgpool_58503 00:04:40.583 size: 36.509338 MiB name: fsdev_io_58503 00:04:40.583 size: 21.763794 MiB name: PDU_Pool 00:04:40.583 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:40.583 size: 0.026123 MiB name: Session_Pool 00:04:40.583 end mempools------- 00:04:40.583 6 memzones totaling size 4.142822 MiB 00:04:40.583 size: 1.000366 MiB name: RG_ring_0_58503 00:04:40.583 size: 1.000366 MiB name: RG_ring_1_58503 00:04:40.583 size: 1.000366 MiB name: RG_ring_4_58503 00:04:40.583 size: 1.000366 MiB name: RG_ring_5_58503 00:04:40.583 size: 0.125366 MiB name: RG_ring_2_58503 00:04:40.583 size: 0.015991 MiB name: RG_ring_3_58503 00:04:40.583 end memzones------- 00:04:40.583 23:25:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:04:40.583 heap id: 0 total size: 866.000000 MiB number of busy elements: 311 number of free elements: 19 00:04:40.583 list of free elements. 
size: 19.914551 MiB 00:04:40.583 element at address: 0x200000400000 with size: 1.999451 MiB 00:04:40.583 element at address: 0x200000800000 with size: 1.996887 MiB 00:04:40.583 element at address: 0x200009600000 with size: 1.995972 MiB 00:04:40.583 element at address: 0x20000d800000 with size: 1.995972 MiB 00:04:40.583 element at address: 0x200007000000 with size: 1.991028 MiB 00:04:40.583 element at address: 0x20001bf00040 with size: 0.999939 MiB 00:04:40.583 element at address: 0x20001c300040 with size: 0.999939 MiB 00:04:40.583 element at address: 0x20001c400000 with size: 0.999084 MiB 00:04:40.583 element at address: 0x200035000000 with size: 0.994324 MiB 00:04:40.583 element at address: 0x20001bc00000 with size: 0.959656 MiB 00:04:40.583 element at address: 0x20001c700040 with size: 0.936401 MiB 00:04:40.584 element at address: 0x200000200000 with size: 0.832153 MiB 00:04:40.584 element at address: 0x20001de00000 with size: 0.561218 MiB 00:04:40.584 element at address: 0x200003e00000 with size: 0.491394 MiB 00:04:40.584 element at address: 0x20001c000000 with size: 0.488708 MiB 00:04:40.584 element at address: 0x20001c800000 with size: 0.485413 MiB 00:04:40.584 element at address: 0x200015e00000 with size: 0.443726 MiB 00:04:40.584 element at address: 0x20002b200000 with size: 0.390442 MiB 00:04:40.584 element at address: 0x200003a00000 with size: 0.352844 MiB 00:04:40.584 list of standard malloc elements. size: 199.286743 MiB 00:04:40.584 element at address: 0x20000d9fef80 with size: 132.000183 MiB 00:04:40.584 element at address: 0x2000097fef80 with size: 64.000183 MiB 00:04:40.584 element at address: 0x20001bdfff80 with size: 1.000183 MiB 00:04:40.584 element at address: 0x20001c1fff80 with size: 1.000183 MiB 00:04:40.584 element at address: 0x20001c5fff80 with size: 1.000183 MiB 00:04:40.584 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:04:40.584 element at address: 0x20001c7eff40 with size: 0.062683 MiB 00:04:40.584 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:04:40.584 element at address: 0x20000d7ff040 with size: 0.000427 MiB 00:04:40.584 element at address: 0x20001c7efdc0 with size: 0.000366 MiB 00:04:40.584 element at address: 0x200015dff040 with size: 0.000305 MiB 00:04:40.584 element at address: 0x2000002d5080 with size: 0.000244 MiB 00:04:40.584 element at address: 0x2000002d5180 with size: 0.000244 MiB 00:04:40.584 element at address: 0x2000002d5280 with size: 0.000244 MiB 00:04:40.584 element at address: 0x2000002d5380 with size: 0.000244 MiB 00:04:40.584 element at address: 0x2000002d5480 with size: 0.000244 MiB 00:04:40.584 element at address: 0x2000002d5580 with size: 0.000244 MiB 00:04:40.584 element at address: 0x2000002d5680 with size: 0.000244 MiB 00:04:40.584 element at address: 0x2000002d5780 with size: 0.000244 MiB 00:04:40.584 element at address: 0x2000002d5880 with size: 0.000244 MiB 00:04:40.584 element at address: 0x2000002d5980 with size: 0.000244 MiB 00:04:40.584 element at address: 0x2000002d5a80 with size: 0.000244 MiB 00:04:40.584 element at address: 0x2000002d5b80 with size: 0.000244 MiB 00:04:40.584 element at address: 0x2000002d5c80 with size: 0.000244 MiB 00:04:40.584 element at address: 0x2000002d5d80 with size: 0.000244 MiB 00:04:40.584 element at address: 0x2000002d5e80 with size: 0.000244 MiB 00:04:40.584 element at address: 0x2000002d5f80 with size: 0.000244 MiB 00:04:40.584 element at address: 0x2000002d6080 with size: 0.000244 MiB 00:04:40.584 element at address: 0x2000002d6300 with size: 0.000244 MiB 
00:04:40.584 element at address: 0x2000002d6400 with size: 0.000244 MiB 00:04:40.584 element at address: 0x2000002d6500 with size: 0.000244 MiB 00:04:40.584 element at address: 0x2000002d6600 with size: 0.000244 MiB 00:04:40.584 element at address: 0x2000002d6700 with size: 0.000244 MiB 00:04:40.584 element at address: 0x2000002d6800 with size: 0.000244 MiB 00:04:40.584 element at address: 0x2000002d6900 with size: 0.000244 MiB 00:04:40.584 element at address: 0x2000002d6a00 with size: 0.000244 MiB 00:04:40.584 element at address: 0x2000002d6b00 with size: 0.000244 MiB 00:04:40.584 element at address: 0x2000002d6c00 with size: 0.000244 MiB 00:04:40.584 element at address: 0x2000002d6d00 with size: 0.000244 MiB 00:04:40.584 element at address: 0x2000002d6e00 with size: 0.000244 MiB 00:04:40.584 element at address: 0x2000002d6f00 with size: 0.000244 MiB 00:04:40.584 element at address: 0x2000002d7000 with size: 0.000244 MiB 00:04:40.584 element at address: 0x2000002d7100 with size: 0.000244 MiB 00:04:40.584 element at address: 0x2000002d7200 with size: 0.000244 MiB 00:04:40.584 element at address: 0x2000002d7300 with size: 0.000244 MiB 00:04:40.584 element at address: 0x2000002d7400 with size: 0.000244 MiB 00:04:40.584 element at address: 0x2000002d7500 with size: 0.000244 MiB 00:04:40.584 element at address: 0x2000002d7600 with size: 0.000244 MiB 00:04:40.584 element at address: 0x2000002d7700 with size: 0.000244 MiB 00:04:40.584 element at address: 0x2000002d7800 with size: 0.000244 MiB 00:04:40.584 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:04:40.584 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:04:40.584 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:04:40.584 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:04:40.584 element at address: 0x200003a7e9c0 with size: 0.000244 MiB 00:04:40.584 element at address: 0x200003a7eac0 with size: 0.000244 MiB 00:04:40.584 element at address: 0x200003a7ebc0 with size: 0.000244 MiB 00:04:40.584 element at address: 0x200003a7ecc0 with size: 0.000244 MiB 00:04:40.584 element at address: 0x200003a7edc0 with size: 0.000244 MiB 00:04:40.584 element at address: 0x200003a7eec0 with size: 0.000244 MiB 00:04:40.584 element at address: 0x200003a7efc0 with size: 0.000244 MiB 00:04:40.584 element at address: 0x200003a7f0c0 with size: 0.000244 MiB 00:04:40.584 element at address: 0x200003a7f1c0 with size: 0.000244 MiB 00:04:40.584 element at address: 0x200003a7f2c0 with size: 0.000244 MiB 00:04:40.584 element at address: 0x200003a7f3c0 with size: 0.000244 MiB 00:04:40.584 element at address: 0x200003aff700 with size: 0.000244 MiB 00:04:40.584 element at address: 0x200003aff980 with size: 0.000244 MiB 00:04:40.584 element at address: 0x200003affa80 with size: 0.000244 MiB 00:04:40.584 element at address: 0x200003e7dcc0 with size: 0.000244 MiB 00:04:40.584 element at address: 0x200003e7ddc0 with size: 0.000244 MiB 00:04:40.584 element at address: 0x200003e7dec0 with size: 0.000244 MiB 00:04:40.584 element at address: 0x200003e7dfc0 with size: 0.000244 MiB 00:04:40.584 element at address: 0x200003e7e0c0 with size: 0.000244 MiB 00:04:40.584 element at address: 0x200003e7e1c0 with size: 0.000244 MiB 00:04:40.584 element at address: 0x200003e7e2c0 with size: 0.000244 MiB 00:04:40.584 element at address: 0x200003e7e3c0 with size: 0.000244 MiB 00:04:40.584 element at address: 0x200003e7e4c0 with size: 0.000244 MiB 00:04:40.584 element at address: 0x200003e7e5c0 with size: 0.000244 MiB 00:04:40.584 element at 
address: 0x200003e7e6c0 with size: 0.000244 MiB 00:04:40.584 element at address: 0x200003e7e7c0 with size: 0.000244 MiB 00:04:40.584 element at address: 0x200003e7e8c0 with size: 0.000244 MiB 00:04:40.584 element at address: 0x200003e7e9c0 with size: 0.000244 MiB 00:04:40.584 element at address: 0x200003e7eac0 with size: 0.000244 MiB 00:04:40.584 element at address: 0x200003e7ebc0 with size: 0.000244 MiB 00:04:40.584 element at address: 0x200003e7ecc0 with size: 0.000244 MiB 00:04:40.584 element at address: 0x200003eff000 with size: 0.000244 MiB 00:04:40.584 element at address: 0x20000d7ff200 with size: 0.000244 MiB 00:04:40.584 element at address: 0x20000d7ff300 with size: 0.000244 MiB 00:04:40.584 element at address: 0x20000d7ff400 with size: 0.000244 MiB 00:04:40.584 element at address: 0x20000d7ff500 with size: 0.000244 MiB 00:04:40.584 element at address: 0x20000d7ff600 with size: 0.000244 MiB 00:04:40.584 element at address: 0x20000d7ff700 with size: 0.000244 MiB 00:04:40.584 element at address: 0x20000d7ff800 with size: 0.000244 MiB 00:04:40.584 element at address: 0x20000d7ff900 with size: 0.000244 MiB 00:04:40.584 element at address: 0x20000d7ffa00 with size: 0.000244 MiB 00:04:40.584 element at address: 0x20000d7ffb00 with size: 0.000244 MiB 00:04:40.584 element at address: 0x20000d7ffc00 with size: 0.000244 MiB 00:04:40.584 element at address: 0x20000d7ffd00 with size: 0.000244 MiB 00:04:40.584 element at address: 0x20000d7ffe00 with size: 0.000244 MiB 00:04:40.584 element at address: 0x20000d7fff00 with size: 0.000244 MiB 00:04:40.584 element at address: 0x200015dff180 with size: 0.000244 MiB 00:04:40.584 element at address: 0x200015dff280 with size: 0.000244 MiB 00:04:40.584 element at address: 0x200015dff380 with size: 0.000244 MiB 00:04:40.584 element at address: 0x200015dff480 with size: 0.000244 MiB 00:04:40.584 element at address: 0x200015dff580 with size: 0.000244 MiB 00:04:40.584 element at address: 0x200015dff680 with size: 0.000244 MiB 00:04:40.584 element at address: 0x200015dff780 with size: 0.000244 MiB 00:04:40.584 element at address: 0x200015dff880 with size: 0.000244 MiB 00:04:40.584 element at address: 0x200015dff980 with size: 0.000244 MiB 00:04:40.584 element at address: 0x200015dffa80 with size: 0.000244 MiB 00:04:40.584 element at address: 0x200015dffb80 with size: 0.000244 MiB 00:04:40.584 element at address: 0x200015dffc80 with size: 0.000244 MiB 00:04:40.584 element at address: 0x200015dfff00 with size: 0.000244 MiB 00:04:40.584 element at address: 0x200015e71980 with size: 0.000244 MiB 00:04:40.584 element at address: 0x200015e71a80 with size: 0.000244 MiB 00:04:40.584 element at address: 0x200015e71b80 with size: 0.000244 MiB 00:04:40.584 element at address: 0x200015e71c80 with size: 0.000244 MiB 00:04:40.584 element at address: 0x200015e71d80 with size: 0.000244 MiB 00:04:40.584 element at address: 0x200015e71e80 with size: 0.000244 MiB 00:04:40.584 element at address: 0x200015e71f80 with size: 0.000244 MiB 00:04:40.584 element at address: 0x200015e72080 with size: 0.000244 MiB 00:04:40.584 element at address: 0x200015e72180 with size: 0.000244 MiB 00:04:40.584 element at address: 0x200015ef24c0 with size: 0.000244 MiB 00:04:40.584 element at address: 0x20001bcfdd00 with size: 0.000244 MiB 00:04:40.584 element at address: 0x20001c07d1c0 with size: 0.000244 MiB 00:04:40.584 element at address: 0x20001c07d2c0 with size: 0.000244 MiB 00:04:40.584 element at address: 0x20001c07d3c0 with size: 0.000244 MiB 00:04:40.584 element at address: 0x20001c07d4c0 
with size: 0.000244 MiB 00:04:40.584 element at address: 0x20001c07d5c0 with size: 0.000244 MiB 00:04:40.584 element at address: 0x20001c07d6c0 with size: 0.000244 MiB 00:04:40.584 element at address: 0x20001c07d7c0 with size: 0.000244 MiB 00:04:40.584 element at address: 0x20001c07d8c0 with size: 0.000244 MiB 00:04:40.584 element at address: 0x20001c07d9c0 with size: 0.000244 MiB 00:04:40.584 element at address: 0x20001c0fdd00 with size: 0.000244 MiB 00:04:40.584 element at address: 0x20001c4ffc40 with size: 0.000244 MiB 00:04:40.584 element at address: 0x20001c7efbc0 with size: 0.000244 MiB 00:04:40.584 element at address: 0x20001c7efcc0 with size: 0.000244 MiB 00:04:40.584 element at address: 0x20001c8bc680 with size: 0.000244 MiB 00:04:40.584 element at address: 0x20001de8fac0 with size: 0.000244 MiB 00:04:40.584 element at address: 0x20001de8fbc0 with size: 0.000244 MiB 00:04:40.584 element at address: 0x20001de8fcc0 with size: 0.000244 MiB 00:04:40.584 element at address: 0x20001de8fdc0 with size: 0.000244 MiB 00:04:40.584 element at address: 0x20001de8fec0 with size: 0.000244 MiB 00:04:40.584 element at address: 0x20001de8ffc0 with size: 0.000244 MiB 00:04:40.584 element at address: 0x20001de900c0 with size: 0.000244 MiB 00:04:40.584 element at address: 0x20001de901c0 with size: 0.000244 MiB 00:04:40.584 element at address: 0x20001de902c0 with size: 0.000244 MiB 00:04:40.584 element at address: 0x20001de903c0 with size: 0.000244 MiB 00:04:40.584 element at address: 0x20001de904c0 with size: 0.000244 MiB 00:04:40.584 element at address: 0x20001de905c0 with size: 0.000244 MiB 00:04:40.584 element at address: 0x20001de906c0 with size: 0.000244 MiB 00:04:40.584 element at address: 0x20001de907c0 with size: 0.000244 MiB 00:04:40.584 element at address: 0x20001de908c0 with size: 0.000244 MiB 00:04:40.584 element at address: 0x20001de909c0 with size: 0.000244 MiB 00:04:40.584 element at address: 0x20001de90ac0 with size: 0.000244 MiB 00:04:40.584 element at address: 0x20001de90bc0 with size: 0.000244 MiB 00:04:40.584 element at address: 0x20001de90cc0 with size: 0.000244 MiB 00:04:40.584 element at address: 0x20001de90dc0 with size: 0.000244 MiB 00:04:40.584 element at address: 0x20001de90ec0 with size: 0.000244 MiB 00:04:40.584 element at address: 0x20001de90fc0 with size: 0.000244 MiB 00:04:40.584 element at address: 0x20001de910c0 with size: 0.000244 MiB 00:04:40.584 element at address: 0x20001de911c0 with size: 0.000244 MiB 00:04:40.584 element at address: 0x20001de912c0 with size: 0.000244 MiB 00:04:40.584 element at address: 0x20001de913c0 with size: 0.000244 MiB 00:04:40.584 element at address: 0x20001de914c0 with size: 0.000244 MiB 00:04:40.584 element at address: 0x20001de915c0 with size: 0.000244 MiB 00:04:40.584 element at address: 0x20001de916c0 with size: 0.000244 MiB 00:04:40.584 element at address: 0x20001de917c0 with size: 0.000244 MiB 00:04:40.584 element at address: 0x20001de918c0 with size: 0.000244 MiB 00:04:40.584 element at address: 0x20001de919c0 with size: 0.000244 MiB 00:04:40.584 element at address: 0x20001de91ac0 with size: 0.000244 MiB 00:04:40.584 element at address: 0x20001de91bc0 with size: 0.000244 MiB 00:04:40.584 element at address: 0x20001de91cc0 with size: 0.000244 MiB 00:04:40.584 element at address: 0x20001de91dc0 with size: 0.000244 MiB 00:04:40.584 element at address: 0x20001de91ec0 with size: 0.000244 MiB 00:04:40.584 element at address: 0x20001de91fc0 with size: 0.000244 MiB 00:04:40.584 element at address: 0x20001de920c0 with size: 0.000244 MiB 
00:04:40.584 element at address: 0x20001de921c0 with size: 0.000244 MiB 00:04:40.584 element at address: 0x20001de922c0 with size: 0.000244 MiB 00:04:40.584 element at address: 0x20001de923c0 with size: 0.000244 MiB 00:04:40.584 element at address: 0x20001de924c0 with size: 0.000244 MiB 00:04:40.584 element at address: 0x20001de925c0 with size: 0.000244 MiB 00:04:40.584 element at address: 0x20001de926c0 with size: 0.000244 MiB 00:04:40.584 element at address: 0x20001de927c0 with size: 0.000244 MiB 00:04:40.584 element at address: 0x20001de928c0 with size: 0.000244 MiB 00:04:40.584 element at address: 0x20001de929c0 with size: 0.000244 MiB 00:04:40.584 element at address: 0x20001de92ac0 with size: 0.000244 MiB 00:04:40.584 element at address: 0x20001de92bc0 with size: 0.000244 MiB 00:04:40.584 element at address: 0x20001de92cc0 with size: 0.000244 MiB 00:04:40.584 element at address: 0x20001de92dc0 with size: 0.000244 MiB 00:04:40.584 element at address: 0x20001de92ec0 with size: 0.000244 MiB 00:04:40.584 element at address: 0x20001de92fc0 with size: 0.000244 MiB 00:04:40.584 element at address: 0x20001de930c0 with size: 0.000244 MiB 00:04:40.584 element at address: 0x20001de931c0 with size: 0.000244 MiB 00:04:40.584 element at address: 0x20001de932c0 with size: 0.000244 MiB 00:04:40.584 element at address: 0x20001de933c0 with size: 0.000244 MiB 00:04:40.584 element at address: 0x20001de934c0 with size: 0.000244 MiB 00:04:40.584 element at address: 0x20001de935c0 with size: 0.000244 MiB 00:04:40.584 element at address: 0x20001de936c0 with size: 0.000244 MiB 00:04:40.584 element at address: 0x20001de937c0 with size: 0.000244 MiB 00:04:40.584 element at address: 0x20001de938c0 with size: 0.000244 MiB 00:04:40.584 element at address: 0x20001de939c0 with size: 0.000244 MiB 00:04:40.584 element at address: 0x20001de93ac0 with size: 0.000244 MiB 00:04:40.584 element at address: 0x20001de93bc0 with size: 0.000244 MiB 00:04:40.584 element at address: 0x20001de93cc0 with size: 0.000244 MiB 00:04:40.584 element at address: 0x20001de93dc0 with size: 0.000244 MiB 00:04:40.584 element at address: 0x20001de93ec0 with size: 0.000244 MiB 00:04:40.584 element at address: 0x20001de93fc0 with size: 0.000244 MiB 00:04:40.584 element at address: 0x20001de940c0 with size: 0.000244 MiB 00:04:40.584 element at address: 0x20001de941c0 with size: 0.000244 MiB 00:04:40.584 element at address: 0x20001de942c0 with size: 0.000244 MiB 00:04:40.584 element at address: 0x20001de943c0 with size: 0.000244 MiB 00:04:40.584 element at address: 0x20001de944c0 with size: 0.000244 MiB 00:04:40.584 element at address: 0x20001de945c0 with size: 0.000244 MiB 00:04:40.584 element at address: 0x20001de946c0 with size: 0.000244 MiB 00:04:40.584 element at address: 0x20001de947c0 with size: 0.000244 MiB 00:04:40.584 element at address: 0x20001de948c0 with size: 0.000244 MiB 00:04:40.584 element at address: 0x20001de949c0 with size: 0.000244 MiB 00:04:40.584 element at address: 0x20001de94ac0 with size: 0.000244 MiB 00:04:40.584 element at address: 0x20001de94bc0 with size: 0.000244 MiB 00:04:40.584 element at address: 0x20001de94cc0 with size: 0.000244 MiB 00:04:40.584 element at address: 0x20001de94dc0 with size: 0.000244 MiB 00:04:40.584 element at address: 0x20001de94ec0 with size: 0.000244 MiB 00:04:40.584 element at address: 0x20001de94fc0 with size: 0.000244 MiB 00:04:40.584 element at address: 0x20001de950c0 with size: 0.000244 MiB 00:04:40.584 element at address: 0x20001de951c0 with size: 0.000244 MiB 00:04:40.584 element at 
address: 0x20001de952c0 with size: 0.000244 MiB 00:04:40.584 element at address: 0x20001de953c0 with size: 0.000244 MiB 00:04:40.584 element at address: 0x20002b263f40 with size: 0.000244 MiB 00:04:40.584 element at address: 0x20002b264040 with size: 0.000244 MiB 00:04:40.584 element at address: 0x20002b26ad00 with size: 0.000244 MiB 00:04:40.584 element at address: 0x20002b26af80 with size: 0.000244 MiB 00:04:40.584 element at address: 0x20002b26b080 with size: 0.000244 MiB 00:04:40.584 element at address: 0x20002b26b180 with size: 0.000244 MiB 00:04:40.584 element at address: 0x20002b26b280 with size: 0.000244 MiB 00:04:40.584 element at address: 0x20002b26b380 with size: 0.000244 MiB 00:04:40.584 element at address: 0x20002b26b480 with size: 0.000244 MiB 00:04:40.585 element at address: 0x20002b26b580 with size: 0.000244 MiB 00:04:40.585 element at address: 0x20002b26b680 with size: 0.000244 MiB 00:04:40.585 element at address: 0x20002b26b780 with size: 0.000244 MiB 00:04:40.585 element at address: 0x20002b26b880 with size: 0.000244 MiB 00:04:40.585 element at address: 0x20002b26b980 with size: 0.000244 MiB 00:04:40.585 element at address: 0x20002b26ba80 with size: 0.000244 MiB 00:04:40.585 element at address: 0x20002b26bb80 with size: 0.000244 MiB 00:04:40.585 element at address: 0x20002b26bc80 with size: 0.000244 MiB 00:04:40.585 element at address: 0x20002b26bd80 with size: 0.000244 MiB 00:04:40.585 element at address: 0x20002b26be80 with size: 0.000244 MiB 00:04:40.585 element at address: 0x20002b26bf80 with size: 0.000244 MiB 00:04:40.585 element at address: 0x20002b26c080 with size: 0.000244 MiB 00:04:40.585 element at address: 0x20002b26c180 with size: 0.000244 MiB 00:04:40.585 element at address: 0x20002b26c280 with size: 0.000244 MiB 00:04:40.585 element at address: 0x20002b26c380 with size: 0.000244 MiB 00:04:40.585 element at address: 0x20002b26c480 with size: 0.000244 MiB 00:04:40.585 element at address: 0x20002b26c580 with size: 0.000244 MiB 00:04:40.585 element at address: 0x20002b26c680 with size: 0.000244 MiB 00:04:40.585 element at address: 0x20002b26c780 with size: 0.000244 MiB 00:04:40.585 element at address: 0x20002b26c880 with size: 0.000244 MiB 00:04:40.585 element at address: 0x20002b26c980 with size: 0.000244 MiB 00:04:40.585 element at address: 0x20002b26ca80 with size: 0.000244 MiB 00:04:40.585 element at address: 0x20002b26cb80 with size: 0.000244 MiB 00:04:40.585 element at address: 0x20002b26cc80 with size: 0.000244 MiB 00:04:40.585 element at address: 0x20002b26cd80 with size: 0.000244 MiB 00:04:40.585 element at address: 0x20002b26ce80 with size: 0.000244 MiB 00:04:40.585 element at address: 0x20002b26cf80 with size: 0.000244 MiB 00:04:40.585 element at address: 0x20002b26d080 with size: 0.000244 MiB 00:04:40.585 element at address: 0x20002b26d180 with size: 0.000244 MiB 00:04:40.585 element at address: 0x20002b26d280 with size: 0.000244 MiB 00:04:40.585 element at address: 0x20002b26d380 with size: 0.000244 MiB 00:04:40.585 element at address: 0x20002b26d480 with size: 0.000244 MiB 00:04:40.585 element at address: 0x20002b26d580 with size: 0.000244 MiB 00:04:40.585 element at address: 0x20002b26d680 with size: 0.000244 MiB 00:04:40.585 element at address: 0x20002b26d780 with size: 0.000244 MiB 00:04:40.585 element at address: 0x20002b26d880 with size: 0.000244 MiB 00:04:40.585 element at address: 0x20002b26d980 with size: 0.000244 MiB 00:04:40.585 element at address: 0x20002b26da80 with size: 0.000244 MiB 00:04:40.585 element at address: 0x20002b26db80 
with size: 0.000244 MiB 00:04:40.585 element at address: 0x20002b26dc80 with size: 0.000244 MiB 00:04:40.585 element at address: 0x20002b26dd80 with size: 0.000244 MiB 00:04:40.585 element at address: 0x20002b26de80 with size: 0.000244 MiB 00:04:40.585 element at address: 0x20002b26df80 with size: 0.000244 MiB 00:04:40.585 element at address: 0x20002b26e080 with size: 0.000244 MiB 00:04:40.585 element at address: 0x20002b26e180 with size: 0.000244 MiB 00:04:40.585 element at address: 0x20002b26e280 with size: 0.000244 MiB 00:04:40.585 element at address: 0x20002b26e380 with size: 0.000244 MiB 00:04:40.585 element at address: 0x20002b26e480 with size: 0.000244 MiB 00:04:40.585 element at address: 0x20002b26e580 with size: 0.000244 MiB 00:04:40.585 element at address: 0x20002b26e680 with size: 0.000244 MiB 00:04:40.585 element at address: 0x20002b26e780 with size: 0.000244 MiB 00:04:40.585 element at address: 0x20002b26e880 with size: 0.000244 MiB 00:04:40.585 element at address: 0x20002b26e980 with size: 0.000244 MiB 00:04:40.585 element at address: 0x20002b26ea80 with size: 0.000244 MiB 00:04:40.585 element at address: 0x20002b26eb80 with size: 0.000244 MiB 00:04:40.585 element at address: 0x20002b26ec80 with size: 0.000244 MiB 00:04:40.585 element at address: 0x20002b26ed80 with size: 0.000244 MiB 00:04:40.585 element at address: 0x20002b26ee80 with size: 0.000244 MiB 00:04:40.585 element at address: 0x20002b26ef80 with size: 0.000244 MiB 00:04:40.585 element at address: 0x20002b26f080 with size: 0.000244 MiB 00:04:40.585 element at address: 0x20002b26f180 with size: 0.000244 MiB 00:04:40.585 element at address: 0x20002b26f280 with size: 0.000244 MiB 00:04:40.585 element at address: 0x20002b26f380 with size: 0.000244 MiB 00:04:40.585 element at address: 0x20002b26f480 with size: 0.000244 MiB 00:04:40.585 element at address: 0x20002b26f580 with size: 0.000244 MiB 00:04:40.585 element at address: 0x20002b26f680 with size: 0.000244 MiB 00:04:40.585 element at address: 0x20002b26f780 with size: 0.000244 MiB 00:04:40.585 element at address: 0x20002b26f880 with size: 0.000244 MiB 00:04:40.585 element at address: 0x20002b26f980 with size: 0.000244 MiB 00:04:40.585 element at address: 0x20002b26fa80 with size: 0.000244 MiB 00:04:40.585 element at address: 0x20002b26fb80 with size: 0.000244 MiB 00:04:40.585 element at address: 0x20002b26fc80 with size: 0.000244 MiB 00:04:40.585 element at address: 0x20002b26fd80 with size: 0.000244 MiB 00:04:40.585 element at address: 0x20002b26fe80 with size: 0.000244 MiB 00:04:40.585 list of memzone associated elements. 
size: 646.798706 MiB 00:04:40.585 element at address: 0x20001de954c0 with size: 211.416809 MiB 00:04:40.585 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:40.585 element at address: 0x20002b26ff80 with size: 157.562622 MiB 00:04:40.585 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:40.585 element at address: 0x200015ff4740 with size: 92.045105 MiB 00:04:40.585 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_58503_0 00:04:40.585 element at address: 0x2000009ff340 with size: 48.003113 MiB 00:04:40.585 associated memzone info: size: 48.002930 MiB name: MP_evtpool_58503_0 00:04:40.585 element at address: 0x200003fff340 with size: 48.003113 MiB 00:04:40.585 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58503_0 00:04:40.585 element at address: 0x2000071fdb40 with size: 36.008972 MiB 00:04:40.585 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58503_0 00:04:40.585 element at address: 0x20001c9be900 with size: 20.255615 MiB 00:04:40.585 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:40.585 element at address: 0x2000351feb00 with size: 18.005127 MiB 00:04:40.585 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:40.585 element at address: 0x2000005ffdc0 with size: 2.000549 MiB 00:04:40.585 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_58503 00:04:40.585 element at address: 0x200003bffdc0 with size: 2.000549 MiB 00:04:40.585 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58503 00:04:40.585 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:04:40.585 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58503 00:04:40.585 element at address: 0x20001c0fde00 with size: 1.008179 MiB 00:04:40.585 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:40.585 element at address: 0x20001c8bc780 with size: 1.008179 MiB 00:04:40.585 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:40.585 element at address: 0x20001bcfde00 with size: 1.008179 MiB 00:04:40.585 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:40.585 element at address: 0x200015ef25c0 with size: 1.008179 MiB 00:04:40.585 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:40.585 element at address: 0x200003eff100 with size: 1.000549 MiB 00:04:40.585 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58503 00:04:40.585 element at address: 0x200003affb80 with size: 1.000549 MiB 00:04:40.585 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58503 00:04:40.585 element at address: 0x20001c4ffd40 with size: 1.000549 MiB 00:04:40.585 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58503 00:04:40.585 element at address: 0x2000350fe8c0 with size: 1.000549 MiB 00:04:40.585 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58503 00:04:40.585 element at address: 0x200003a7f4c0 with size: 0.500549 MiB 00:04:40.585 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58503 00:04:40.585 element at address: 0x200003e7edc0 with size: 0.500549 MiB 00:04:40.585 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58503 00:04:40.585 element at address: 0x20001c07dac0 with size: 0.500549 MiB 00:04:40.585 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:40.585 element at address: 0x200015e72280 with size: 0.500549 MiB 00:04:40.585 associated memzone info: size: 0.500366 
MiB name: RG_MP_SCSI_TASK_Pool 00:04:40.585 element at address: 0x20001c87c440 with size: 0.250549 MiB 00:04:40.585 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:40.585 element at address: 0x200003a5e780 with size: 0.125549 MiB 00:04:40.585 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58503 00:04:40.585 element at address: 0x20001bcf5ac0 with size: 0.031799 MiB 00:04:40.585 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:40.585 element at address: 0x20002b264140 with size: 0.023804 MiB 00:04:40.585 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:40.585 element at address: 0x200003a5a540 with size: 0.016174 MiB 00:04:40.585 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58503 00:04:40.585 element at address: 0x20002b26a2c0 with size: 0.002502 MiB 00:04:40.585 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:40.585 element at address: 0x2000002d6180 with size: 0.000366 MiB 00:04:40.585 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58503 00:04:40.585 element at address: 0x200003aff800 with size: 0.000366 MiB 00:04:40.585 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58503 00:04:40.585 element at address: 0x200015dffd80 with size: 0.000366 MiB 00:04:40.585 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58503 00:04:40.585 element at address: 0x20002b26ae00 with size: 0.000366 MiB 00:04:40.585 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:40.585 23:25:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:40.585 23:25:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58503 00:04:40.585 23:25:28 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 58503 ']' 00:04:40.585 23:25:28 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 58503 00:04:40.585 23:25:28 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:04:40.585 23:25:28 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:40.585 23:25:28 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58503 00:04:40.585 23:25:28 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:40.585 23:25:28 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:40.585 killing process with pid 58503 00:04:40.585 23:25:28 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58503' 00:04:40.585 23:25:28 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 58503 00:04:40.585 23:25:28 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 58503 00:04:42.493 00:04:42.493 real 0m2.878s 00:04:42.493 user 0m2.895s 00:04:42.493 sys 0m0.392s 00:04:42.493 23:25:30 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:42.493 23:25:30 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:42.493 ************************************ 00:04:42.493 END TEST dpdk_mem_utility 00:04:42.493 ************************************ 00:04:42.493 23:25:30 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:42.493 23:25:30 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:42.493 23:25:30 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:42.493 23:25:30 -- common/autotest_common.sh@10 -- # set +x 
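The event suite begins below; test/event/event.sh drives each sub-test through run_test, starting with event_perf on four reactors for one second. That first invocation can be reproduced standalone (a sketch, assuming the binaries were built under test/event in this repo layout):

  # core mask 0xF = four reactors; -t 1 = one second of event dispatch per lcore
  /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1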
00:04:42.493 ************************************ 00:04:42.493 START TEST event 00:04:42.493 ************************************ 00:04:42.493 23:25:30 event -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:42.493 * Looking for test storage... 00:04:42.493 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:04:42.493 23:25:30 event -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:42.493 23:25:30 event -- common/autotest_common.sh@1681 -- # lcov --version 00:04:42.493 23:25:30 event -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:42.493 23:25:30 event -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:42.493 23:25:30 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:42.493 23:25:30 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:42.493 23:25:30 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:42.493 23:25:30 event -- scripts/common.sh@336 -- # IFS=.-: 00:04:42.493 23:25:30 event -- scripts/common.sh@336 -- # read -ra ver1 00:04:42.493 23:25:30 event -- scripts/common.sh@337 -- # IFS=.-: 00:04:42.493 23:25:30 event -- scripts/common.sh@337 -- # read -ra ver2 00:04:42.493 23:25:30 event -- scripts/common.sh@338 -- # local 'op=<' 00:04:42.493 23:25:30 event -- scripts/common.sh@340 -- # ver1_l=2 00:04:42.493 23:25:30 event -- scripts/common.sh@341 -- # ver2_l=1 00:04:42.493 23:25:30 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:42.493 23:25:30 event -- scripts/common.sh@344 -- # case "$op" in 00:04:42.493 23:25:30 event -- scripts/common.sh@345 -- # : 1 00:04:42.493 23:25:30 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:42.493 23:25:30 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:42.493 23:25:30 event -- scripts/common.sh@365 -- # decimal 1 00:04:42.493 23:25:30 event -- scripts/common.sh@353 -- # local d=1 00:04:42.493 23:25:30 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:42.493 23:25:30 event -- scripts/common.sh@355 -- # echo 1 00:04:42.493 23:25:30 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:42.493 23:25:30 event -- scripts/common.sh@366 -- # decimal 2 00:04:42.493 23:25:30 event -- scripts/common.sh@353 -- # local d=2 00:04:42.493 23:25:30 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:42.493 23:25:30 event -- scripts/common.sh@355 -- # echo 2 00:04:42.493 23:25:30 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:42.493 23:25:30 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:42.493 23:25:30 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:42.493 23:25:30 event -- scripts/common.sh@368 -- # return 0 00:04:42.493 23:25:30 event -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:42.493 23:25:30 event -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:42.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.493 --rc genhtml_branch_coverage=1 00:04:42.493 --rc genhtml_function_coverage=1 00:04:42.493 --rc genhtml_legend=1 00:04:42.493 --rc geninfo_all_blocks=1 00:04:42.493 --rc geninfo_unexecuted_blocks=1 00:04:42.493 00:04:42.493 ' 00:04:42.493 23:25:30 event -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:42.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.493 --rc genhtml_branch_coverage=1 00:04:42.493 --rc genhtml_function_coverage=1 00:04:42.493 --rc genhtml_legend=1 00:04:42.493 --rc 
geninfo_all_blocks=1 00:04:42.493 --rc geninfo_unexecuted_blocks=1 00:04:42.493 00:04:42.493 ' 00:04:42.493 23:25:30 event -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:42.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.493 --rc genhtml_branch_coverage=1 00:04:42.493 --rc genhtml_function_coverage=1 00:04:42.493 --rc genhtml_legend=1 00:04:42.493 --rc geninfo_all_blocks=1 00:04:42.493 --rc geninfo_unexecuted_blocks=1 00:04:42.493 00:04:42.493 ' 00:04:42.493 23:25:30 event -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:42.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.493 --rc genhtml_branch_coverage=1 00:04:42.493 --rc genhtml_function_coverage=1 00:04:42.493 --rc genhtml_legend=1 00:04:42.493 --rc geninfo_all_blocks=1 00:04:42.493 --rc geninfo_unexecuted_blocks=1 00:04:42.493 00:04:42.493 ' 00:04:42.493 23:25:30 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:04:42.493 23:25:30 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:42.493 23:25:30 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:42.493 23:25:30 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:04:42.493 23:25:30 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:42.493 23:25:30 event -- common/autotest_common.sh@10 -- # set +x 00:04:42.493 ************************************ 00:04:42.493 START TEST event_perf 00:04:42.493 ************************************ 00:04:42.493 23:25:30 event.event_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:42.493 Running I/O for 1 seconds...[2024-09-28 23:25:30.547902] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:04:42.493 [2024-09-28 23:25:30.548005] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58600 ] 00:04:42.755 [2024-09-28 23:25:30.696448] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:42.755 [2024-09-28 23:25:30.907170] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:04:42.755 Running I/O for 1 seconds...[2024-09-28 23:25:30.908116] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:04:42.755 [2024-09-28 23:25:30.908199] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.755 [2024-09-28 23:25:30.908209] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:04:44.138 00:04:44.138 lcore 0: 160388 00:04:44.138 lcore 1: 160388 00:04:44.138 lcore 2: 160388 00:04:44.138 lcore 3: 160389 00:04:44.138 done. 
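For scale, the four "lcore N:" counters above are per-reactor event counts for the 1 s window, about 160.4k events per core or roughly 642k events/s in aggregate. A throwaway pipeline to total such output (hypothetical, just post-processing the test's stdout; the per-second reading assumes -t 1):

  /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 \
    | awk '/^lcore/ {sum += $3} END {print sum, "events/s total"}'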
00:04:44.138 00:04:44.138 real 0m1.660s 00:04:44.138 user 0m4.441s 00:04:44.138 sys 0m0.095s 00:04:44.138 23:25:32 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:44.138 23:25:32 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:44.138 ************************************ 00:04:44.138 END TEST event_perf 00:04:44.138 ************************************ 00:04:44.139 23:25:32 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:44.139 23:25:32 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:04:44.139 23:25:32 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:44.139 23:25:32 event -- common/autotest_common.sh@10 -- # set +x 00:04:44.139 ************************************ 00:04:44.139 START TEST event_reactor 00:04:44.139 ************************************ 00:04:44.139 23:25:32 event.event_reactor -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:44.139 [2024-09-28 23:25:32.258417] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:04:44.139 [2024-09-28 23:25:32.258535] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58640 ] 00:04:44.400 [2024-09-28 23:25:32.408427] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:44.661 [2024-09-28 23:25:32.586070] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.047 test_start 00:04:46.047 oneshot 00:04:46.047 tick 100 00:04:46.047 tick 100 00:04:46.047 tick 250 00:04:46.047 tick 100 00:04:46.047 tick 100 00:04:46.047 tick 250 00:04:46.047 tick 100 00:04:46.047 tick 500 00:04:46.047 tick 100 00:04:46.047 tick 100 00:04:46.047 tick 250 00:04:46.047 tick 100 00:04:46.047 tick 100 00:04:46.047 test_end 00:04:46.047 00:04:46.047 real 0m1.619s 00:04:46.047 user 0m1.430s 00:04:46.047 sys 0m0.080s 00:04:46.047 23:25:33 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:46.047 23:25:33 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:46.047 ************************************ 00:04:46.047 END TEST event_reactor 00:04:46.047 ************************************ 00:04:46.047 23:25:33 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:46.047 23:25:33 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:04:46.047 23:25:33 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:46.047 23:25:33 event -- common/autotest_common.sh@10 -- # set +x 00:04:46.047 ************************************ 00:04:46.047 START TEST event_reactor_perf 00:04:46.047 ************************************ 00:04:46.047 23:25:33 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:46.047 [2024-09-28 23:25:33.922059] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
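The reactor tick test just finished above (oneshot plus the 100/250/500 tick markers), and event_reactor_perf starts here: it spins a single reactor for the requested duration and reports the sustained event rate, 322199 events per second in this run. Its invocation mirrors the run_test line (a sketch using the same flags the log shows; the EAL args below confirm the single core):

  # one reactor; -t 1 = measure for one second
  /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1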
00:04:46.047 [2024-09-28 23:25:33.922587] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58682 ] 00:04:46.047 [2024-09-28 23:25:34.072320] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:46.308 [2024-09-28 23:25:34.252066] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.300 test_start 00:04:47.300 test_end 00:04:47.300 Performance: 322199 events per second 00:04:47.300 00:04:47.300 real 0m1.568s 00:04:47.300 user 0m1.382s 00:04:47.300 sys 0m0.077s 00:04:47.300 23:25:35 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:47.300 23:25:35 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:47.300 ************************************ 00:04:47.300 END TEST event_reactor_perf 00:04:47.300 ************************************ 00:04:47.561 23:25:35 event -- event/event.sh@49 -- # uname -s 00:04:47.561 23:25:35 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:47.561 23:25:35 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:47.561 23:25:35 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:47.561 23:25:35 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:47.561 23:25:35 event -- common/autotest_common.sh@10 -- # set +x 00:04:47.561 ************************************ 00:04:47.561 START TEST event_scheduler 00:04:47.561 ************************************ 00:04:47.561 23:25:35 event.event_scheduler -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:47.561 * Looking for test storage... 
00:04:47.561 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:04:47.561 23:25:35 event.event_scheduler -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:47.561 23:25:35 event.event_scheduler -- common/autotest_common.sh@1681 -- # lcov --version 00:04:47.561 23:25:35 event.event_scheduler -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:47.561 23:25:35 event.event_scheduler -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:47.561 23:25:35 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:47.561 23:25:35 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:47.561 23:25:35 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:47.561 23:25:35 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:04:47.561 23:25:35 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:04:47.561 23:25:35 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:04:47.561 23:25:35 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:04:47.561 23:25:35 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:04:47.561 23:25:35 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:04:47.561 23:25:35 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:04:47.561 23:25:35 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:47.561 23:25:35 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:04:47.561 23:25:35 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:04:47.561 23:25:35 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:47.561 23:25:35 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:47.561 23:25:35 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:04:47.561 23:25:35 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:04:47.561 23:25:35 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:47.561 23:25:35 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:04:47.561 23:25:35 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:04:47.561 23:25:35 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:04:47.561 23:25:35 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:04:47.561 23:25:35 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:47.561 23:25:35 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:04:47.561 23:25:35 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:04:47.561 23:25:35 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:47.561 23:25:35 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:47.561 23:25:35 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:04:47.561 23:25:35 event.event_scheduler -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:47.561 23:25:35 event.event_scheduler -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:47.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.561 --rc genhtml_branch_coverage=1 00:04:47.561 --rc genhtml_function_coverage=1 00:04:47.561 --rc genhtml_legend=1 00:04:47.561 --rc geninfo_all_blocks=1 00:04:47.561 --rc geninfo_unexecuted_blocks=1 00:04:47.561 00:04:47.561 ' 00:04:47.561 23:25:35 event.event_scheduler -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:47.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.561 --rc genhtml_branch_coverage=1 00:04:47.561 --rc genhtml_function_coverage=1 00:04:47.561 --rc genhtml_legend=1 00:04:47.561 --rc geninfo_all_blocks=1 00:04:47.561 --rc geninfo_unexecuted_blocks=1 00:04:47.561 00:04:47.561 ' 00:04:47.561 23:25:35 event.event_scheduler -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:47.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.561 --rc genhtml_branch_coverage=1 00:04:47.561 --rc genhtml_function_coverage=1 00:04:47.561 --rc genhtml_legend=1 00:04:47.561 --rc geninfo_all_blocks=1 00:04:47.561 --rc geninfo_unexecuted_blocks=1 00:04:47.561 00:04:47.561 ' 00:04:47.561 23:25:35 event.event_scheduler -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:47.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.561 --rc genhtml_branch_coverage=1 00:04:47.561 --rc genhtml_function_coverage=1 00:04:47.561 --rc genhtml_legend=1 00:04:47.561 --rc geninfo_all_blocks=1 00:04:47.561 --rc geninfo_unexecuted_blocks=1 00:04:47.561 00:04:47.561 ' 00:04:47.561 23:25:35 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:47.561 23:25:35 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58752 00:04:47.561 23:25:35 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:47.561 23:25:35 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58752 00:04:47.561 23:25:35 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:47.561 23:25:35 
event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 58752 ']' 00:04:47.561 23:25:35 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:47.561 23:25:35 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:47.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:47.561 23:25:35 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:47.561 23:25:35 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:47.561 23:25:35 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:47.561 [2024-09-28 23:25:35.710792] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:04:47.561 [2024-09-28 23:25:35.711049] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58752 ] 00:04:47.822 [2024-09-28 23:25:35.857790] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:48.082 [2024-09-28 23:25:36.037635] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.082 [2024-09-28 23:25:36.037856] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:04:48.082 [2024-09-28 23:25:36.038153] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:04:48.082 [2024-09-28 23:25:36.038174] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:04:48.342 23:25:36 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:48.342 23:25:36 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:04:48.342 23:25:36 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:48.601 23:25:36 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:48.601 23:25:36 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:48.601 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:48.601 POWER: Cannot set governor of lcore 0 to userspace 00:04:48.601 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:48.601 POWER: Cannot set governor of lcore 0 to performance 00:04:48.601 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:48.601 POWER: Cannot set governor of lcore 0 to userspace 00:04:48.601 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:48.601 POWER: Cannot set governor of lcore 0 to userspace 00:04:48.601 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:04:48.601 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:04:48.601 POWER: Unable to set Power Management Environment for lcore 0 00:04:48.601 [2024-09-28 23:25:36.511755] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:04:48.601 [2024-09-28 23:25:36.511822] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:04:48.602 [2024-09-28 23:25:36.511844] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:04:48.602 [2024-09-28 23:25:36.511911] 
scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20
00:04:48.602 [2024-09-28 23:25:36.511938] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80
00:04:48.602 [2024-09-28 23:25:36.512001] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95
00:04:48.602 23:25:36 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:48.602 23:25:36 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init
00:04:48.602 23:25:36 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:48.602 23:25:36 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:04:48.602 [2024-09-28 23:25:36.731106] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started.
00:04:48.602 23:25:36 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:48.602 23:25:36 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread
00:04:48.602 23:25:36 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:48.602 23:25:36 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:48.602 23:25:36 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:04:48.602 ************************************
00:04:48.602 START TEST scheduler_create_thread
00:04:48.602 ************************************
00:04:48.602 23:25:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread
00:04:48.602 23:25:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
00:04:48.602 23:25:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:48.602 23:25:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:48.602 2
00:04:48.602 23:25:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:48.602 23:25:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100
00:04:48.602 23:25:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:48.602 23:25:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:48.602 3
00:04:48.862 23:25:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:48.862 23:25:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100
00:04:48.862 23:25:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:48.862 23:25:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:48.862 4
00:04:48.862 23:25:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:48.862 23:25:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100
00:04:48.862 23:25:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:48.862 23:25:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:48.862 5
00:04:48.862 23:25:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:48.862 23:25:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
00:04:48.862 23:25:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:48.862 23:25:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:48.862 6
00:04:48.862 23:25:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:48.862 23:25:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0
00:04:48.862 23:25:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:48.862 23:25:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:48.862 7
00:04:48.862 23:25:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:48.862 23:25:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0
00:04:48.862 23:25:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:48.862 23:25:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:48.862 8
00:04:48.862 23:25:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:48.862 23:25:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0
00:04:48.862 23:25:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:48.862 23:25:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:48.862 9
00:04:48.862 23:25:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:48.862 23:25:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
00:04:48.862 23:25:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:48.862 23:25:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:48.862 10
00:04:48.862 23:25:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:48.862 23:25:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0
00:04:48.862 23:25:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:48.862 23:25:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:48.862 23:25:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:48.862 23:25:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11
00:04:48.862 23:25:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
00:04:48.862 23:25:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:48.862 23:25:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:49.800 23:25:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:49.800 23:25:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100
00:04:49.800 23:25:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:49.800 23:25:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:51.173 23:25:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:51.173 23:25:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12
00:04:51.173 23:25:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12
00:04:51.173 23:25:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:51.173 23:25:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:52.106 ************************************
00:04:52.106 END TEST scheduler_create_thread
00:04:52.106 ************************************
00:04:52.106 23:25:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:52.106
00:04:52.106 real 0m3.376s
00:04:52.106 user 0m0.018s
00:04:52.106 sys 0m0.002s
00:04:52.106 23:25:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:52.106 23:25:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:52.106 23:25:40 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT
00:04:52.106 23:25:40 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58752
00:04:52.106 23:25:40 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 58752 ']'
00:04:52.106 23:25:40 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 58752
00:04:52.106 23:25:40 event.event_scheduler -- common/autotest_common.sh@955 -- # uname
00:04:52.106 23:25:40 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:04:52.106 23:25:40 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58752
00:04:52.106 killing process with pid 58752
00:04:52.106 23:25:40 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2
00:04:52.106 23:25:40 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']'
00:04:52.106 23:25:40 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58752'
00:04:52.106 23:25:40 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 58752
00:04:52.106 23:25:40 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 58752
00:04:52.364 [2024-09-28 23:25:40.501943] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped.
00:04:53.300
00:04:53.300 real 0m5.698s
00:04:53.300 user 0m11.167s
00:04:53.300 sys 0m0.339s
00:04:53.300 23:25:41 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:53.300 23:25:41 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:04:53.301 ************************************
00:04:53.301 END TEST event_scheduler
00:04:53.301 ************************************
00:04:53.301 23:25:41 event -- event/event.sh@51 -- # modprobe -n nbd
00:04:53.301 23:25:41 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test
00:04:53.301 23:25:41 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:53.301 23:25:41 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:53.301 23:25:41 event -- common/autotest_common.sh@10 -- # set +x
00:04:53.301 ************************************
00:04:53.301 START TEST app_repeat
00:04:53.301 ************************************
00:04:53.301 23:25:41 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test
00:04:53.301 23:25:41 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:53.301 23:25:41 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:53.301 23:25:41 event.app_repeat -- event/event.sh@13 -- # local nbd_list
00:04:53.301 23:25:41 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1')
00:04:53.301 23:25:41 event.app_repeat -- event/event.sh@14 -- # local bdev_list
00:04:53.301 23:25:41 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4
00:04:53.301 23:25:41 event.app_repeat -- event/event.sh@17 -- # modprobe nbd
00:04:53.301 Process app_repeat pid: 58864
00:04:53.301 spdk_app_start Round 0
00:04:53.301 23:25:41 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58864
00:04:53.301 23:25:41 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
00:04:53.301 23:25:41 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58864'
00:04:53.301 23:25:41 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:04:53.301 23:25:41 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0'
00:04:53.301 23:25:41 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4
00:04:53.301 23:25:41 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58864 /var/tmp/spdk-nbd.sock
00:04:53.301 23:25:41 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58864 ']'
00:04:53.301 23:25:41 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:04:53.301 23:25:41 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100
00:04:53.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:04:53.301 23:25:41 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:04:53.301 23:25:41 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable
00:04:53.301 23:25:41 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:04:53.301 [2024-09-28 23:25:41.302412] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization...
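The scheduler_create_thread exercise traced above reduces to a short RPC sequence. The script below is a sketch reconstructed from the xtrace, not the test itself: it calls scripts/rpc.py directly instead of the suite's rpc_cmd wrapper, and it assumes the scheduler test application is already running and that the scheduler_plugin module from test/event/scheduler is importable for the --plugin flag. The app_repeat run starting here continues below.

#!/usr/bin/env bash
# Sketch only: condensed from the scheduler.sh xtrace above.
set -euo pipefail
rpc=scripts/rpc.py

# Finish framework initialization so the dynamic scheduler starts balancing.
"$rpc" framework_start_init

# One fully busy thread pinned to each core (masks 0x1..0x8), then a matching
# idle thread per core; -a appears to be the thread's busy percentage.
for mask in 0x1 0x2 0x4 0x8; do
    "$rpc" --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m "$mask" -a 100
done
for mask in 0x1 0x2 0x4 0x8; do
    "$rpc" --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m "$mask" -a 0
done

# Unpinned threads; scheduler_thread_create prints the new thread id.
"$rpc" --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
tid=$("$rpc" --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)

# Raise the idle thread to 50% activity, then create and delete one more thread.
"$rpc" --plugin scheduler_plugin scheduler_thread_set_active "$tid" 50
tid=$("$rpc" --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100)
"$rpc" --plugin scheduler_plugin scheduler_thread_delete "$tid"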
00:04:53.301 [2024-09-28 23:25:41.302535] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58864 ] 00:04:53.301 [2024-09-28 23:25:41.448327] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:53.559 [2024-09-28 23:25:41.616945] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:04:53.559 [2024-09-28 23:25:41.617035] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.134 23:25:42 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:54.134 23:25:42 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:04:54.134 23:25:42 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:54.392 Malloc0 00:04:54.392 23:25:42 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:54.652 Malloc1 00:04:54.652 23:25:42 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:54.652 23:25:42 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:54.652 23:25:42 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:54.652 23:25:42 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:54.652 23:25:42 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:54.652 23:25:42 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:54.652 23:25:42 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:54.652 23:25:42 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:54.652 23:25:42 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:54.652 23:25:42 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:54.652 23:25:42 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:54.652 23:25:42 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:54.652 23:25:42 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:54.652 23:25:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:54.652 23:25:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:54.652 23:25:42 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:54.911 /dev/nbd0 00:04:54.911 23:25:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:54.911 23:25:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:54.911 23:25:42 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:04:54.911 23:25:42 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:04:54.911 23:25:42 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:04:54.911 23:25:42 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:04:54.911 23:25:42 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:04:54.911 23:25:42 event.app_repeat -- 
common/autotest_common.sh@873 -- # break 00:04:54.911 23:25:42 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:04:54.911 23:25:42 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:04:54.911 23:25:42 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:54.911 1+0 records in 00:04:54.911 1+0 records out 00:04:54.911 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000207728 s, 19.7 MB/s 00:04:54.911 23:25:42 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:54.911 23:25:42 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:04:54.911 23:25:42 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:54.911 23:25:42 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:04:54.911 23:25:42 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:04:54.911 23:25:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:54.911 23:25:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:54.911 23:25:42 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:55.170 /dev/nbd1 00:04:55.170 23:25:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:55.170 23:25:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:55.170 23:25:43 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:04:55.170 23:25:43 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:04:55.170 23:25:43 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:04:55.170 23:25:43 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:04:55.170 23:25:43 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:04:55.170 23:25:43 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:04:55.170 23:25:43 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:04:55.170 23:25:43 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:04:55.170 23:25:43 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:55.170 1+0 records in 00:04:55.170 1+0 records out 00:04:55.170 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00023496 s, 17.4 MB/s 00:04:55.170 23:25:43 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:55.170 23:25:43 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:04:55.170 23:25:43 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:55.170 23:25:43 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:04:55.170 23:25:43 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:04:55.170 23:25:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:55.170 23:25:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:55.170 23:25:43 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:55.170 23:25:43 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:55.170 
23:25:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:55.428 23:25:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:55.428 { 00:04:55.428 "nbd_device": "/dev/nbd0", 00:04:55.428 "bdev_name": "Malloc0" 00:04:55.428 }, 00:04:55.428 { 00:04:55.428 "nbd_device": "/dev/nbd1", 00:04:55.428 "bdev_name": "Malloc1" 00:04:55.428 } 00:04:55.428 ]' 00:04:55.428 23:25:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:55.428 { 00:04:55.428 "nbd_device": "/dev/nbd0", 00:04:55.428 "bdev_name": "Malloc0" 00:04:55.428 }, 00:04:55.428 { 00:04:55.428 "nbd_device": "/dev/nbd1", 00:04:55.428 "bdev_name": "Malloc1" 00:04:55.428 } 00:04:55.428 ]' 00:04:55.428 23:25:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:55.428 23:25:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:55.428 /dev/nbd1' 00:04:55.428 23:25:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:55.428 23:25:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:55.428 /dev/nbd1' 00:04:55.428 23:25:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:55.428 23:25:43 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:55.428 23:25:43 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:55.428 23:25:43 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:55.428 23:25:43 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:55.428 23:25:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:55.428 23:25:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:55.428 23:25:43 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:55.428 23:25:43 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:55.428 23:25:43 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:55.428 23:25:43 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:55.428 256+0 records in 00:04:55.428 256+0 records out 00:04:55.428 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00609201 s, 172 MB/s 00:04:55.428 23:25:43 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:55.428 23:25:43 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:55.428 256+0 records in 00:04:55.428 256+0 records out 00:04:55.428 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0180781 s, 58.0 MB/s 00:04:55.428 23:25:43 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:55.429 23:25:43 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:55.429 256+0 records in 00:04:55.429 256+0 records out 00:04:55.429 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0196687 s, 53.3 MB/s 00:04:55.429 23:25:43 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:55.429 23:25:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:55.429 23:25:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:55.429 23:25:43 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:55.429 23:25:43 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:55.429 23:25:43 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:55.429 23:25:43 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:55.429 23:25:43 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:55.429 23:25:43 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:55.429 23:25:43 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:55.429 23:25:43 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:55.429 23:25:43 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:55.429 23:25:43 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:55.429 23:25:43 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:55.429 23:25:43 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:55.429 23:25:43 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:55.429 23:25:43 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:55.429 23:25:43 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:55.429 23:25:43 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:55.687 23:25:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:55.687 23:25:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:55.687 23:25:43 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:55.687 23:25:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:55.687 23:25:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:55.687 23:25:43 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:55.687 23:25:43 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:55.687 23:25:43 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:55.687 23:25:43 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:55.687 23:25:43 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:55.945 23:25:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:55.945 23:25:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:55.945 23:25:43 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:55.945 23:25:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:55.945 23:25:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:55.945 23:25:43 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:55.946 23:25:43 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:55.946 23:25:43 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:55.946 23:25:43 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:55.946 23:25:43 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:55.946 23:25:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:56.204 23:25:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:56.204 23:25:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:56.204 23:25:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:56.204 23:25:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:56.204 23:25:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:56.204 23:25:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:56.204 23:25:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:56.204 23:25:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:56.204 23:25:44 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:56.204 23:25:44 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:56.204 23:25:44 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:56.204 23:25:44 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:56.204 23:25:44 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:56.462 23:25:44 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:57.028 [2024-09-28 23:25:45.139452] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:57.287 [2024-09-28 23:25:45.295311] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:04:57.287 [2024-09-28 23:25:45.295416] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.287 [2024-09-28 23:25:45.403889] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:57.287 [2024-09-28 23:25:45.403951] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:59.817 spdk_app_start Round 1 00:04:59.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:59.817 23:25:47 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:59.817 23:25:47 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:59.817 23:25:47 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58864 /var/tmp/spdk-nbd.sock 00:04:59.817 23:25:47 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58864 ']' 00:04:59.817 23:25:47 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:59.817 23:25:47 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:59.817 23:25:47 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
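Stripped of the xtrace noise, each app_repeat round above performs the same malloc-to-nbd round trip. The following is a minimal sketch of Round 0's data path, assuming an app_repeat instance is already listening on /var/tmp/spdk-nbd.sock; the temp file path is illustrative.

#!/usr/bin/env bash
# Sketch only: the bdev/nbd verify flow condensed from the Round 0 xtrace.
set -euo pipefail
rpc="scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
tmp=/tmp/nbdrandtest

# Two 64 MiB malloc bdevs with 4096-byte blocks, exposed as nbd devices.
$rpc bdev_malloc_create 64 4096            # creates Malloc0
$rpc bdev_malloc_create 64 4096            # creates Malloc1
$rpc nbd_start_disk Malloc0 /dev/nbd0
$rpc nbd_start_disk Malloc1 /dev/nbd1

# Write the same 1 MiB of random data through both devices, then verify.
dd if=/dev/urandom of="$tmp" bs=4096 count=256
for nbd in /dev/nbd0 /dev/nbd1; do
    dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct
done
for nbd in /dev/nbd0 /dev/nbd1; do
    cmp -b -n 1M "$tmp" "$nbd"             # exits non-zero on any mismatch
done
rm "$tmp"

# Detach both devices again before the next round.
$rpc nbd_stop_disk /dev/nbd0
$rpc nbd_stop_disk /dev/nbd1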
00:04:59.817 23:25:47 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:59.817 23:25:47 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:59.817 23:25:47 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:59.817 23:25:47 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:04:59.817 23:25:47 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:59.817 Malloc0 00:04:59.817 23:25:47 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:00.076 Malloc1 00:05:00.076 23:25:48 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:00.076 23:25:48 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:00.076 23:25:48 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:00.076 23:25:48 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:00.076 23:25:48 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:00.076 23:25:48 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:00.076 23:25:48 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:00.076 23:25:48 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:00.076 23:25:48 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:00.076 23:25:48 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:00.076 23:25:48 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:00.076 23:25:48 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:00.076 23:25:48 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:00.076 23:25:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:00.076 23:25:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:00.076 23:25:48 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:00.335 /dev/nbd0 00:05:00.335 23:25:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:00.335 23:25:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:00.335 23:25:48 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:00.335 23:25:48 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:00.335 23:25:48 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:00.335 23:25:48 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:00.335 23:25:48 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:00.335 23:25:48 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:00.335 23:25:48 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:00.335 23:25:48 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:00.335 23:25:48 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:00.335 1+0 records in 00:05:00.335 1+0 records out 
00:05:00.335 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000548867 s, 7.5 MB/s 00:05:00.335 23:25:48 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:00.335 23:25:48 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:00.335 23:25:48 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:00.335 23:25:48 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:00.335 23:25:48 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:00.335 23:25:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:00.335 23:25:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:00.335 23:25:48 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:00.593 /dev/nbd1 00:05:00.593 23:25:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:00.593 23:25:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:00.593 23:25:48 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:00.593 23:25:48 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:00.593 23:25:48 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:00.593 23:25:48 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:00.593 23:25:48 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:00.593 23:25:48 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:00.594 23:25:48 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:00.594 23:25:48 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:00.594 23:25:48 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:00.594 1+0 records in 00:05:00.594 1+0 records out 00:05:00.594 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000212938 s, 19.2 MB/s 00:05:00.594 23:25:48 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:00.594 23:25:48 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:00.594 23:25:48 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:00.594 23:25:48 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:00.594 23:25:48 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:00.594 23:25:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:00.594 23:25:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:00.594 23:25:48 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:00.594 23:25:48 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:00.594 23:25:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:00.853 23:25:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:00.853 { 00:05:00.853 "nbd_device": "/dev/nbd0", 00:05:00.853 "bdev_name": "Malloc0" 00:05:00.853 }, 00:05:00.853 { 00:05:00.853 "nbd_device": "/dev/nbd1", 00:05:00.853 "bdev_name": "Malloc1" 00:05:00.853 } 
00:05:00.853 ]' 00:05:00.853 23:25:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:00.853 23:25:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:00.853 { 00:05:00.853 "nbd_device": "/dev/nbd0", 00:05:00.853 "bdev_name": "Malloc0" 00:05:00.853 }, 00:05:00.853 { 00:05:00.853 "nbd_device": "/dev/nbd1", 00:05:00.853 "bdev_name": "Malloc1" 00:05:00.853 } 00:05:00.853 ]' 00:05:00.853 23:25:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:00.853 /dev/nbd1' 00:05:00.853 23:25:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:00.853 /dev/nbd1' 00:05:00.853 23:25:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:00.853 23:25:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:00.853 23:25:48 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:00.853 23:25:48 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:00.853 23:25:48 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:00.853 23:25:48 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:00.853 23:25:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:00.853 23:25:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:00.853 23:25:48 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:00.853 23:25:48 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:00.853 23:25:48 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:00.853 23:25:48 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:00.853 256+0 records in 00:05:00.853 256+0 records out 00:05:00.853 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00419473 s, 250 MB/s 00:05:00.853 23:25:48 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:00.853 23:25:48 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:00.853 256+0 records in 00:05:00.853 256+0 records out 00:05:00.853 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0175355 s, 59.8 MB/s 00:05:00.853 23:25:48 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:00.853 23:25:48 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:00.853 256+0 records in 00:05:00.853 256+0 records out 00:05:00.853 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0208523 s, 50.3 MB/s 00:05:00.853 23:25:48 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:00.853 23:25:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:00.853 23:25:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:00.853 23:25:48 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:00.853 23:25:48 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:00.853 23:25:48 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:00.853 23:25:48 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:00.853 23:25:48 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:00.853 23:25:48 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:00.853 23:25:48 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:00.853 23:25:48 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:00.853 23:25:48 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:00.853 23:25:48 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:00.853 23:25:48 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:00.853 23:25:48 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:00.853 23:25:48 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:00.853 23:25:48 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:00.853 23:25:48 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:00.853 23:25:48 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:01.111 23:25:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:01.111 23:25:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:01.111 23:25:49 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:01.111 23:25:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:01.111 23:25:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:01.111 23:25:49 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:01.111 23:25:49 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:01.111 23:25:49 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:01.111 23:25:49 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:01.111 23:25:49 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:01.369 23:25:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:01.369 23:25:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:01.369 23:25:49 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:01.369 23:25:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:01.369 23:25:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:01.369 23:25:49 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:01.369 23:25:49 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:01.369 23:25:49 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:01.369 23:25:49 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:01.369 23:25:49 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:01.369 23:25:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:01.369 23:25:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:01.369 23:25:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:01.369 23:25:49 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:05:01.628 23:25:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:01.628 23:25:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:01.628 23:25:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:01.628 23:25:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:01.628 23:25:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:01.628 23:25:49 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:01.628 23:25:49 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:01.628 23:25:49 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:01.628 23:25:49 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:01.628 23:25:49 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:01.886 23:25:49 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:02.456 [2024-09-28 23:25:50.529482] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:02.717 [2024-09-28 23:25:50.695060] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:02.717 [2024-09-28 23:25:50.695208] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.717 [2024-09-28 23:25:50.814999] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:02.717 [2024-09-28 23:25:50.815240] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:05.263 spdk_app_start Round 2 00:05:05.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:05.263 23:25:52 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:05.264 23:25:52 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:05.264 23:25:52 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58864 /var/tmp/spdk-nbd.sock 00:05:05.264 23:25:52 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58864 ']' 00:05:05.264 23:25:52 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:05.264 23:25:52 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:05.264 23:25:52 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
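The nbd_get_count checks that recur above hinge on one jq pipeline: nbd_get_disks returns a JSON array of objects with nbd_device and bdev_name fields, and counting the /dev/nbd names in that output gives the number of live devices. A sketch of the post-teardown assertion; the "|| true" is only there because grep -c exits 1 when it finds no matches.

json=$(scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks)
count=$(echo "$json" | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
if [ "$count" -ne 0 ]; then
    echo "expected 0 nbd devices after nbd_stop_disk, found $count" >&2
    exit 1
fi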
00:05:05.264 23:25:52 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:05.264 23:25:52 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:05.264 23:25:53 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:05.264 23:25:53 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:05.264 23:25:53 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:05.264 Malloc0 00:05:05.264 23:25:53 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:05.525 Malloc1 00:05:05.525 23:25:53 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:05.525 23:25:53 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:05.525 23:25:53 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:05.525 23:25:53 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:05.525 23:25:53 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:05.525 23:25:53 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:05.525 23:25:53 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:05.525 23:25:53 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:05.525 23:25:53 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:05.525 23:25:53 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:05.525 23:25:53 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:05.525 23:25:53 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:05.525 23:25:53 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:05.525 23:25:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:05.525 23:25:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:05.525 23:25:53 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:05.787 /dev/nbd0 00:05:05.787 23:25:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:05.787 23:25:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:05.787 23:25:53 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:05.787 23:25:53 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:05.787 23:25:53 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:05.787 23:25:53 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:05.787 23:25:53 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:05.787 23:25:53 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:05.787 23:25:53 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:05.787 23:25:53 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:05.787 23:25:53 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:05.787 1+0 records in 00:05:05.787 1+0 records out 
00:05:05.787 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000185952 s, 22.0 MB/s 00:05:05.787 23:25:53 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:05.787 23:25:53 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:05.787 23:25:53 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:05.787 23:25:53 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:05.787 23:25:53 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:05.787 23:25:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:05.787 23:25:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:05.787 23:25:53 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:05.787 /dev/nbd1 00:05:06.048 23:25:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:06.048 23:25:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:06.048 23:25:53 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:06.048 23:25:53 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:06.048 23:25:53 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:06.048 23:25:53 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:06.048 23:25:53 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:06.048 23:25:53 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:06.048 23:25:53 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:06.048 23:25:53 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:06.049 23:25:53 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:06.049 1+0 records in 00:05:06.049 1+0 records out 00:05:06.049 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000228724 s, 17.9 MB/s 00:05:06.049 23:25:53 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:06.049 23:25:53 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:06.049 23:25:53 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:06.049 23:25:53 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:06.049 23:25:53 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:06.049 23:25:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:06.049 23:25:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:06.049 23:25:53 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:06.049 23:25:53 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:06.049 23:25:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:06.049 23:25:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:06.049 { 00:05:06.049 "nbd_device": "/dev/nbd0", 00:05:06.049 "bdev_name": "Malloc0" 00:05:06.049 }, 00:05:06.049 { 00:05:06.049 "nbd_device": "/dev/nbd1", 00:05:06.049 "bdev_name": "Malloc1" 00:05:06.049 } 
00:05:06.049 ]' 00:05:06.049 23:25:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:06.049 { 00:05:06.049 "nbd_device": "/dev/nbd0", 00:05:06.049 "bdev_name": "Malloc0" 00:05:06.049 }, 00:05:06.049 { 00:05:06.049 "nbd_device": "/dev/nbd1", 00:05:06.049 "bdev_name": "Malloc1" 00:05:06.049 } 00:05:06.049 ]' 00:05:06.049 23:25:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:06.049 23:25:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:06.049 /dev/nbd1' 00:05:06.049 23:25:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:06.049 /dev/nbd1' 00:05:06.310 23:25:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:06.310 23:25:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:06.310 23:25:54 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:06.310 23:25:54 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:06.310 23:25:54 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:06.310 23:25:54 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:06.310 23:25:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:06.310 23:25:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:06.310 23:25:54 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:06.310 23:25:54 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:06.310 23:25:54 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:06.310 23:25:54 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:06.310 256+0 records in 00:05:06.310 256+0 records out 00:05:06.310 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00633695 s, 165 MB/s 00:05:06.310 23:25:54 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:06.310 23:25:54 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:06.310 256+0 records in 00:05:06.310 256+0 records out 00:05:06.310 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.012688 s, 82.6 MB/s 00:05:06.310 23:25:54 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:06.310 23:25:54 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:06.310 256+0 records in 00:05:06.310 256+0 records out 00:05:06.310 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0158555 s, 66.1 MB/s 00:05:06.310 23:25:54 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:06.310 23:25:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:06.310 23:25:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:06.310 23:25:54 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:06.310 23:25:54 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:06.310 23:25:54 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:06.310 23:25:54 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:06.310 23:25:54 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:05:06.310 23:25:54 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:06.310 23:25:54 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:06.310 23:25:54 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:06.310 23:25:54 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:06.310 23:25:54 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:06.310 23:25:54 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:06.310 23:25:54 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:06.310 23:25:54 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:06.310 23:25:54 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:06.310 23:25:54 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:06.310 23:25:54 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:06.572 23:25:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:06.572 23:25:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:06.572 23:25:54 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:06.572 23:25:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:06.572 23:25:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:06.572 23:25:54 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:06.572 23:25:54 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:06.572 23:25:54 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:06.572 23:25:54 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:06.572 23:25:54 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:06.572 23:25:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:06.572 23:25:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:06.572 23:25:54 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:06.572 23:25:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:06.572 23:25:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:06.572 23:25:54 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:06.572 23:25:54 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:06.572 23:25:54 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:06.572 23:25:54 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:06.572 23:25:54 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:06.572 23:25:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:06.833 23:25:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:06.833 23:25:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:06.833 23:25:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:05:06.833 23:25:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:06.833 23:25:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:06.833 23:25:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:06.833 23:25:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:06.833 23:25:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:06.833 23:25:54 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:06.833 23:25:54 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:06.833 23:25:54 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:06.833 23:25:54 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:06.833 23:25:54 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:07.094 23:25:55 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:07.666 [2024-09-28 23:25:55.831148] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:07.927 [2024-09-28 23:25:55.964392] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:07.927 [2024-09-28 23:25:55.964394] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.927 [2024-09-28 23:25:56.059946] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:07.927 [2024-09-28 23:25:56.059995] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:10.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:10.479 23:25:58 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58864 /var/tmp/spdk-nbd.sock 00:05:10.479 23:25:58 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58864 ']' 00:05:10.479 23:25:58 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:10.479 23:25:58 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:10.479 23:25:58 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
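The waitfornbd helper that the xtrace keeps expanding polls /proc/partitions until the kernel has registered the device, then proves it readable with one direct 4 KiB read. A simplified reconstruction follows; the retry delay and the error handling of the real helper in common/autotest_common.sh are assumptions here.

waitfornbd() {
    local nbd_name=$1 i size
    for ((i = 1; i <= 20; i++)); do
        # The device shows up in /proc/partitions once the nbd handshake is done.
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1    # assumed delay
    done
    for ((i = 1; i <= 20; i++)); do
        # One direct-I/O read; a zero-byte result means the device is not ready yet.
        if dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct; then
            size=$(stat -c %s /tmp/nbdtest)
            rm -f /tmp/nbdtest
            [ "$size" != 0 ] && return 0
        fi
        sleep 0.1    # assumed delay
    done
    return 1
}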
00:05:10.479 23:25:58 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:10.479 23:25:58 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:10.479 23:25:58 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:10.479 23:25:58 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:10.479 23:25:58 event.app_repeat -- event/event.sh@39 -- # killprocess 58864 00:05:10.479 23:25:58 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 58864 ']' 00:05:10.479 23:25:58 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 58864 00:05:10.479 23:25:58 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:05:10.479 23:25:58 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:10.479 23:25:58 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58864 00:05:10.479 23:25:58 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:10.479 23:25:58 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:10.479 killing process with pid 58864 00:05:10.479 23:25:58 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58864' 00:05:10.479 23:25:58 event.app_repeat -- common/autotest_common.sh@969 -- # kill 58864 00:05:10.479 23:25:58 event.app_repeat -- common/autotest_common.sh@974 -- # wait 58864 00:05:11.052 spdk_app_start is called in Round 0. 00:05:11.052 Shutdown signal received, stop current app iteration 00:05:11.052 Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 reinitialization... 00:05:11.052 spdk_app_start is called in Round 1. 00:05:11.052 Shutdown signal received, stop current app iteration 00:05:11.052 Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 reinitialization... 00:05:11.052 spdk_app_start is called in Round 2. 00:05:11.052 Shutdown signal received, stop current app iteration 00:05:11.052 Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 reinitialization... 00:05:11.052 spdk_app_start is called in Round 3. 00:05:11.052 Shutdown signal received, stop current app iteration 00:05:11.052 ************************************ 00:05:11.052 END TEST app_repeat 00:05:11.052 ************************************ 00:05:11.052 23:25:59 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:11.052 23:25:59 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:11.052 00:05:11.052 real 0m17.770s 00:05:11.052 user 0m38.202s 00:05:11.052 sys 0m2.202s 00:05:11.052 23:25:59 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:11.052 23:25:59 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:11.052 23:25:59 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:11.052 23:25:59 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:11.052 23:25:59 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:11.052 23:25:59 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:11.052 23:25:59 event -- common/autotest_common.sh@10 -- # set +x 00:05:11.052 ************************************ 00:05:11.052 START TEST cpu_locks 00:05:11.052 ************************************ 00:05:11.052 23:25:59 event.cpu_locks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:11.052 * Looking for test storage... 
00:05:11.052 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:11.052 23:25:59 event.cpu_locks -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:11.052 23:25:59 event.cpu_locks -- common/autotest_common.sh@1681 -- # lcov --version 00:05:11.052 23:25:59 event.cpu_locks -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:11.052 23:25:59 event.cpu_locks -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:11.052 23:25:59 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:11.052 23:25:59 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:11.052 23:25:59 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:11.052 23:25:59 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:11.052 23:25:59 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:11.052 23:25:59 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:11.052 23:25:59 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:11.052 23:25:59 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:11.052 23:25:59 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:11.052 23:25:59 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:11.052 23:25:59 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:11.052 23:25:59 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:11.052 23:25:59 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:11.052 23:25:59 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:11.052 23:25:59 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:11.313 23:25:59 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:11.313 23:25:59 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:11.313 23:25:59 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:11.313 23:25:59 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:11.313 23:25:59 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:11.313 23:25:59 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:11.313 23:25:59 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:11.313 23:25:59 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:11.313 23:25:59 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:11.313 23:25:59 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:11.313 23:25:59 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:11.313 23:25:59 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:11.313 23:25:59 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:11.313 23:25:59 event.cpu_locks -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:11.313 23:25:59 event.cpu_locks -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:11.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.313 --rc genhtml_branch_coverage=1 00:05:11.313 --rc genhtml_function_coverage=1 00:05:11.313 --rc genhtml_legend=1 00:05:11.313 --rc geninfo_all_blocks=1 00:05:11.313 --rc geninfo_unexecuted_blocks=1 00:05:11.313 00:05:11.313 ' 00:05:11.313 23:25:59 event.cpu_locks -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:11.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.313 --rc genhtml_branch_coverage=1 00:05:11.313 --rc genhtml_function_coverage=1 
00:05:11.313 --rc genhtml_legend=1 00:05:11.313 --rc geninfo_all_blocks=1 00:05:11.313 --rc geninfo_unexecuted_blocks=1 00:05:11.313 00:05:11.313 ' 00:05:11.313 23:25:59 event.cpu_locks -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:11.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.313 --rc genhtml_branch_coverage=1 00:05:11.313 --rc genhtml_function_coverage=1 00:05:11.313 --rc genhtml_legend=1 00:05:11.313 --rc geninfo_all_blocks=1 00:05:11.313 --rc geninfo_unexecuted_blocks=1 00:05:11.313 00:05:11.313 ' 00:05:11.313 23:25:59 event.cpu_locks -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:11.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.313 --rc genhtml_branch_coverage=1 00:05:11.313 --rc genhtml_function_coverage=1 00:05:11.313 --rc genhtml_legend=1 00:05:11.313 --rc geninfo_all_blocks=1 00:05:11.313 --rc geninfo_unexecuted_blocks=1 00:05:11.313 00:05:11.313 ' 00:05:11.313 23:25:59 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:11.313 23:25:59 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:11.313 23:25:59 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:11.313 23:25:59 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:11.313 23:25:59 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:11.313 23:25:59 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:11.313 23:25:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:11.313 ************************************ 00:05:11.313 START TEST default_locks 00:05:11.313 ************************************ 00:05:11.313 23:25:59 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:05:11.313 23:25:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=59294 00:05:11.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:11.313 23:25:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 59294 00:05:11.313 23:25:59 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 59294 ']' 00:05:11.313 23:25:59 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:11.313 23:25:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:11.314 23:25:59 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:11.314 23:25:59 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:11.314 23:25:59 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:11.314 23:25:59 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:11.314 [2024-09-28 23:25:59.312624] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
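Just before default_locks launches its target above, the harness probed the installed lcov and compared it against version 2 with the lt/cmp_versions helpers from scripts/common.sh, which split each version on '.', '-', and ':' and compare field by field. A condensed sketch of that comparison; the real helper also normalizes non-numeric fields through a decimal() step, omitted here:

    # Return success when version $1 is strictly older than version $2.
    lt() {
        local IFS=.-:                  # split fields as the trace does
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local v ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
        for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do
            ((${ver1[v]:-0} < ${ver2[v]:-0})) && return 0
            ((${ver1[v]:-0} > ${ver2[v]:-0})) && return 1
        done
        return 1                       # equal is not less-than
    }

    lt "$(lcov --version | awk '{print $NF}')" 2 && echo "lcov older than 2"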
00:05:11.314 [2024-09-28 23:25:59.312741] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59294 ] 00:05:11.314 [2024-09-28 23:25:59.462760] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.574 [2024-09-28 23:25:59.614077] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.143 23:26:00 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:12.143 23:26:00 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:05:12.143 23:26:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 59294 00:05:12.143 23:26:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 59294 00:05:12.143 23:26:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:12.402 23:26:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 59294 00:05:12.402 23:26:00 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 59294 ']' 00:05:12.402 23:26:00 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 59294 00:05:12.402 23:26:00 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:05:12.403 23:26:00 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:12.403 23:26:00 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59294 00:05:12.403 killing process with pid 59294 00:05:12.403 23:26:00 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:12.403 23:26:00 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:12.403 23:26:00 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59294' 00:05:12.403 23:26:00 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 59294 00:05:12.403 23:26:00 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 59294 00:05:13.782 23:26:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 59294 00:05:13.782 23:26:01 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:05:13.782 23:26:01 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59294 00:05:13.782 23:26:01 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:13.782 23:26:01 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:13.782 23:26:01 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:13.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
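The default_locks test above has started a single-core spdk_tgt (pid 59294), confirmed through lslocks that it holds a spdk_cpu_lock file, and torn it down with killprocess: probe liveness with kill -0, read the process name with ps, refuse to signal the sudo wrapper, then kill and reap. A sketch of the Linux branch of that teardown, condensed from the trace; the function name here is illustrative, and wait works because the target is a child of the test shell:

    kill_and_reap() {
        local pid=$1 name
        kill -0 "$pid" || return 1                   # still alive?
        name=$(ps --no-headers -o comm= "$pid")      # e.g. reactor_0
        [[ $name != sudo ]] || return 1              # never signal sudo itself
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true                          # reap the child target
    }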
00:05:13.782 ERROR: process (pid: 59294) is no longer running 00:05:13.782 23:26:01 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:13.782 23:26:01 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 59294 00:05:13.782 23:26:01 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 59294 ']' 00:05:13.782 23:26:01 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:13.782 23:26:01 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:13.782 23:26:01 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:13.782 23:26:01 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:13.782 23:26:01 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:13.782 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (59294) - No such process 00:05:13.782 23:26:01 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:13.782 23:26:01 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:05:13.782 23:26:01 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:05:13.782 23:26:01 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:13.782 23:26:01 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:13.782 23:26:01 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:13.782 23:26:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:13.782 23:26:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:13.782 23:26:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:13.782 23:26:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:13.782 00:05:13.782 real 0m2.458s 00:05:13.782 user 0m2.466s 00:05:13.782 sys 0m0.443s 00:05:13.782 ************************************ 00:05:13.782 END TEST default_locks 00:05:13.782 ************************************ 00:05:13.782 23:26:01 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:13.782 23:26:01 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:13.782 23:26:01 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:13.782 23:26:01 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:13.782 23:26:01 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:13.782 23:26:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:13.782 ************************************ 00:05:13.782 START TEST default_locks_via_rpc 00:05:13.782 ************************************ 00:05:13.782 23:26:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:05:13.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
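default_locks closes above by re-running waitforlisten against the already-killed pid 59294 under the NOT wrapper: the "No such process" failure is the expected outcome, so es ends up 1 and the traced final check (( !es == 0 )) turns that failure into a pass. A minimal sketch of the inversion; the real helper in autotest_common.sh also special-cases signal exits (es > 128) and an optional expected status, both omitted here:

    # Expect failure: succeed only when the wrapped command fails.
    NOT() {
        local es=0
        "$@" || es=$?
        (( es != 0 ))        # equivalent to the traced (( !es == 0 ))
    }

    NOT waitforlisten 59294  # passes because pid 59294 is gone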
00:05:13.782 23:26:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=59353 00:05:13.782 23:26:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 59353 00:05:13.782 23:26:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59353 ']' 00:05:13.782 23:26:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:13.782 23:26:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:13.782 23:26:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:13.782 23:26:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:13.782 23:26:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:13.782 23:26:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.782 [2024-09-28 23:26:01.827717] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:05:13.782 [2024-09-28 23:26:01.827838] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59353 ] 00:05:14.042 [2024-09-28 23:26:01.978019] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.042 [2024-09-28 23:26:02.125674] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.612 23:26:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:14.612 23:26:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:14.612 23:26:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:14.612 23:26:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:14.612 23:26:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.612 23:26:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:14.612 23:26:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:14.612 23:26:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:14.612 23:26:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:14.612 23:26:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:14.612 23:26:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:14.612 23:26:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:14.612 23:26:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.612 23:26:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:14.612 23:26:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 59353 00:05:14.612 23:26:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 
00:05:14.612 23:26:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 59353 00:05:14.873 23:26:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 59353 00:05:14.873 23:26:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 59353 ']' 00:05:14.873 23:26:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 59353 00:05:14.873 23:26:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:05:14.873 23:26:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:14.873 23:26:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59353 00:05:14.873 killing process with pid 59353 00:05:14.873 23:26:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:14.873 23:26:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:14.873 23:26:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59353' 00:05:14.874 23:26:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 59353 00:05:14.874 23:26:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 59353 00:05:16.260 00:05:16.260 real 0m2.310s 00:05:16.260 user 0m2.299s 00:05:16.260 sys 0m0.403s 00:05:16.260 23:26:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:16.260 ************************************ 00:05:16.260 END TEST default_locks_via_rpc 00:05:16.260 ************************************ 00:05:16.260 23:26:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:16.260 23:26:04 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:16.260 23:26:04 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:16.260 23:26:04 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:16.260 23:26:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:16.260 ************************************ 00:05:16.260 START TEST non_locking_app_on_locked_coremask 00:05:16.260 ************************************ 00:05:16.260 23:26:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:05:16.260 23:26:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=59405 00:05:16.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
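default_locks_via_rpc, which ends above, exercises the same lock files through the JSON-RPC layer instead of process lifetime: framework_disable_cpumask_locks releases the per-core locks at runtime, framework_enable_cpumask_locks re-acquires them, and locks_exist asks lslocks whether pid 59353 holds them again. Condensed from the trace; rpc_cmd in the harness forwards to scripts/rpc.py with the socket configured for the test:

    scripts/rpc.py -s /var/tmp/spdk.sock framework_disable_cpumask_locks
    scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks

    # As traced at event/cpu_locks.sh@22: the lock files show up as POSIX
    # locks held by the target process.
    locks_exist() {
        lslocks -p "$1" | grep -q spdk_cpu_lock
    }
    locks_exist 59353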
00:05:16.260 23:26:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 59405 /var/tmp/spdk.sock 00:05:16.260 23:26:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59405 ']' 00:05:16.260 23:26:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:16.260 23:26:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:16.260 23:26:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:16.260 23:26:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:16.260 23:26:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:16.260 23:26:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:16.260 [2024-09-28 23:26:04.186732] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:05:16.260 [2024-09-28 23:26:04.186851] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59405 ] 00:05:16.260 [2024-09-28 23:26:04.334876] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:16.522 [2024-09-28 23:26:04.480979] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:17.093 23:26:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:17.093 23:26:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:17.093 23:26:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:17.093 23:26:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=59421 00:05:17.093 23:26:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 59421 /var/tmp/spdk2.sock 00:05:17.093 23:26:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59421 ']' 00:05:17.093 23:26:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:17.093 23:26:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:17.093 23:26:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:05:17.093 23:26:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:17.093 23:26:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:17.093 [2024-09-28 23:26:05.077275] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:05:17.093 [2024-09-28 23:26:05.077608] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59421 ] 00:05:17.093 [2024-09-28 23:26:05.230756] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:17.093 [2024-09-28 23:26:05.230811] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.666 [2024-09-28 23:26:05.526088] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.608 23:26:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:18.609 23:26:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:18.609 23:26:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 59405 00:05:18.609 23:26:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59405 00:05:18.609 23:26:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:18.870 23:26:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 59405 00:05:18.870 23:26:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 59405 ']' 00:05:18.870 23:26:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 59405 00:05:18.870 23:26:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:18.870 23:26:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:18.870 23:26:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59405 00:05:18.870 23:26:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:18.870 killing process with pid 59405 00:05:18.870 23:26:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:18.870 23:26:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59405' 00:05:18.870 23:26:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 59405 00:05:18.870 23:26:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 59405 00:05:21.430 23:26:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 59421 00:05:21.430 23:26:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 59421 ']' 00:05:21.430 23:26:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 59421 00:05:21.430 23:26:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 
-- # uname 00:05:21.430 23:26:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:21.430 23:26:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59421 00:05:21.430 killing process with pid 59421 00:05:21.430 23:26:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:21.430 23:26:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:21.430 23:26:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59421' 00:05:21.430 23:26:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 59421 00:05:21.430 23:26:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 59421 00:05:22.814 ************************************ 00:05:22.814 END TEST non_locking_app_on_locked_coremask 00:05:22.814 ************************************ 00:05:22.814 00:05:22.814 real 0m6.487s 00:05:22.814 user 0m6.760s 00:05:22.814 sys 0m0.846s 00:05:22.814 23:26:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:22.815 23:26:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:22.815 23:26:10 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:22.815 23:26:10 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:22.815 23:26:10 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:22.815 23:26:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:22.815 ************************************ 00:05:22.815 START TEST locking_app_on_unlocked_coremask 00:05:22.815 ************************************ 00:05:22.815 23:26:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:05:22.815 23:26:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59523 00:05:22.815 23:26:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59523 /var/tmp/spdk.sock 00:05:22.815 23:26:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59523 ']' 00:05:22.815 23:26:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:22.815 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:22.815 23:26:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:22.815 23:26:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
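Each of these tests blocks in waitforlisten until the freshly launched target answers on its UNIX socket; the trace shows its locals (rpc_addr=/var/tmp/spdk.sock, max_retries=100) and the banner it prints. A sketch of the polling loop, with the liveness check, retry budget, and message taken from the trace; the rpc_get_methods probe and the sleep interval are assumptions about the real helper:

    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
        local max_retries=100 i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" || return 1    # target died before listening
            scripts/rpc.py -t 1 -s "$rpc_addr" rpc_get_methods &> /dev/null && return 0
            sleep 0.5                     # assumed retry interval
        done
        return 1
    }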
00:05:22.815 23:26:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:22.815 23:26:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:22.815 23:26:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:22.815 [2024-09-28 23:26:10.715281] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:05:22.815 [2024-09-28 23:26:10.715403] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59523 ] 00:05:22.815 [2024-09-28 23:26:10.863204] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:22.815 [2024-09-28 23:26:10.863403] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.073 [2024-09-28 23:26:11.012197] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:23.640 23:26:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:23.640 23:26:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:23.640 23:26:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59534 00:05:23.640 23:26:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:23.640 23:26:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59534 /var/tmp/spdk2.sock 00:05:23.640 23:26:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59534 ']' 00:05:23.640 23:26:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:23.640 23:26:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:23.640 23:26:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:23.640 23:26:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:23.640 23:26:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:23.640 [2024-09-28 23:26:11.626593] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:05:23.640 [2024-09-28 23:26:11.626707] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59534 ] 00:05:23.640 [2024-09-28 23:26:11.774722] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.210 [2024-09-28 23:26:12.076471] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.153 23:26:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:25.153 23:26:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:25.153 23:26:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59534 00:05:25.153 23:26:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:25.153 23:26:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59534 00:05:25.414 23:26:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59523 00:05:25.414 23:26:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 59523 ']' 00:05:25.414 23:26:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 59523 00:05:25.414 23:26:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:25.414 23:26:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:25.414 23:26:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59523 00:05:25.414 killing process with pid 59523 00:05:25.414 23:26:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:25.414 23:26:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:25.414 23:26:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59523' 00:05:25.414 23:26:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 59523 00:05:25.414 23:26:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 59523 00:05:27.953 23:26:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59534 00:05:27.953 23:26:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 59534 ']' 00:05:27.953 23:26:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 59534 00:05:27.953 23:26:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:27.953 23:26:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:27.953 23:26:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59534 00:05:27.953 killing process with pid 59534 00:05:27.953 23:26:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:27.953 23:26:15 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:27.954 23:26:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59534' 00:05:27.954 23:26:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 59534 00:05:27.954 23:26:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 59534 00:05:29.329 00:05:29.329 real 0m6.580s 00:05:29.329 user 0m6.831s 00:05:29.329 sys 0m0.874s 00:05:29.329 23:26:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:29.329 ************************************ 00:05:29.329 END TEST locking_app_on_unlocked_coremask 00:05:29.329 23:26:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:29.329 ************************************ 00:05:29.329 23:26:17 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:29.329 23:26:17 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:29.329 23:26:17 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:29.329 23:26:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:29.329 ************************************ 00:05:29.329 START TEST locking_app_on_locked_coremask 00:05:29.329 ************************************ 00:05:29.329 23:26:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:05:29.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:29.329 23:26:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59630 00:05:29.329 23:26:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59630 /var/tmp/spdk.sock 00:05:29.329 23:26:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59630 ']' 00:05:29.329 23:26:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:29.329 23:26:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:29.329 23:26:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:29.329 23:26:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:29.329 23:26:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:29.329 23:26:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:29.329 [2024-09-28 23:26:17.337383] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:05:29.329 [2024-09-28 23:26:17.337505] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59630 ] 00:05:29.329 [2024-09-28 23:26:17.480543] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.588 [2024-09-28 23:26:17.626591] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.155 23:26:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:30.155 23:26:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:30.155 23:26:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:30.155 23:26:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59646 00:05:30.155 23:26:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59646 /var/tmp/spdk2.sock 00:05:30.155 23:26:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:05:30.155 23:26:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59646 /var/tmp/spdk2.sock 00:05:30.155 23:26:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:30.155 23:26:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:30.155 23:26:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:30.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:30.155 23:26:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:30.155 23:26:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 59646 /var/tmp/spdk2.sock 00:05:30.155 23:26:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59646 ']' 00:05:30.155 23:26:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:30.155 23:26:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:30.155 23:26:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:30.155 23:26:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:30.155 23:26:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:30.155 [2024-09-28 23:26:18.177258] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:05:30.155 [2024-09-28 23:26:18.177562] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59646 ] 00:05:30.413 [2024-09-28 23:26:18.325523] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59630 has claimed it. 00:05:30.413 [2024-09-28 23:26:18.325571] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:30.985 ERROR: process (pid: 59646) is no longer running 00:05:30.985 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (59646) - No such process 00:05:30.985 23:26:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:30.985 23:26:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:05:30.985 23:26:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:05:30.985 23:26:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:30.985 23:26:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:30.985 23:26:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:30.985 23:26:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59630 00:05:30.985 23:26:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59630 00:05:30.985 23:26:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:30.985 23:26:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59630 00:05:30.985 23:26:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 59630 ']' 00:05:30.985 23:26:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 59630 00:05:30.985 23:26:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:30.985 23:26:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:30.985 23:26:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59630 00:05:30.985 killing process with pid 59630 00:05:30.985 23:26:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:30.985 23:26:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:30.985 23:26:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59630' 00:05:30.985 23:26:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 59630 00:05:30.985 23:26:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 59630 00:05:32.364 ************************************ 00:05:32.364 END TEST locking_app_on_locked_coremask 00:05:32.364 ************************************ 00:05:32.364 00:05:32.364 real 0m3.136s 00:05:32.364 user 0m3.367s 00:05:32.364 sys 0m0.550s 00:05:32.364 23:26:20 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:32.364 23:26:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:32.364 23:26:20 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:32.364 23:26:20 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:32.364 23:26:20 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:32.364 23:26:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:32.364 ************************************ 00:05:32.364 START TEST locking_overlapped_coremask 00:05:32.364 ************************************ 00:05:32.364 23:26:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:05:32.364 23:26:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59699 00:05:32.364 23:26:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59699 /var/tmp/spdk.sock 00:05:32.364 23:26:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 59699 ']' 00:05:32.364 23:26:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:32.364 23:26:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:32.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:32.364 23:26:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:32.364 23:26:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:32.364 23:26:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:05:32.364 23:26:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:32.364 [2024-09-28 23:26:20.513322] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:05:32.364 [2024-09-28 23:26:20.513442] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59699 ] 00:05:32.622 [2024-09-28 23:26:20.663455] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:32.880 [2024-09-28 23:26:20.837293] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:32.880 [2024-09-28 23:26:20.837539] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:05:32.880 [2024-09-28 23:26:20.837543] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.447 23:26:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:33.447 23:26:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:33.447 23:26:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59717 00:05:33.447 23:26:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59717 /var/tmp/spdk2.sock 00:05:33.447 23:26:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:05:33.447 23:26:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59717 /var/tmp/spdk2.sock 00:05:33.447 23:26:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:33.447 23:26:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:33.447 23:26:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:33.447 23:26:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:33.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:33.447 23:26:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:33.447 23:26:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 59717 /var/tmp/spdk2.sock 00:05:33.447 23:26:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 59717 ']' 00:05:33.447 23:26:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:33.447 23:26:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:33.447 23:26:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:33.447 23:26:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:33.447 23:26:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:33.447 [2024-09-28 23:26:21.447632] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:05:33.447 [2024-09-28 23:26:21.447750] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59717 ] 00:05:33.447 [2024-09-28 23:26:21.602588] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59699 has claimed it. 00:05:33.447 [2024-09-28 23:26:21.602645] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:34.011 ERROR: process (pid: 59717) is no longer running 00:05:34.011 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (59717) - No such process 00:05:34.011 23:26:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:34.011 23:26:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:05:34.011 23:26:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:05:34.011 23:26:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:34.011 23:26:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:34.012 23:26:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:34.012 23:26:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:34.012 23:26:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:34.012 23:26:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:34.012 23:26:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:34.012 23:26:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59699 00:05:34.012 23:26:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 59699 ']' 00:05:34.012 23:26:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 59699 00:05:34.012 23:26:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:05:34.012 23:26:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:34.012 23:26:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59699 00:05:34.012 killing process with pid 59699 00:05:34.012 23:26:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:34.012 23:26:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:34.012 23:26:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59699' 00:05:34.012 23:26:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 59699 00:05:34.012 23:26:22 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 59699 00:05:35.384 ************************************ 00:05:35.384 END TEST locking_overlapped_coremask 00:05:35.384 ************************************ 00:05:35.384 00:05:35.384 real 0m3.005s 00:05:35.384 user 0m7.819s 00:05:35.384 sys 0m0.460s 00:05:35.384 23:26:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:35.384 23:26:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:35.384 23:26:23 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:35.384 23:26:23 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:35.384 23:26:23 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:35.384 23:26:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:35.384 ************************************ 00:05:35.384 START TEST locking_overlapped_coremask_via_rpc 00:05:35.384 ************************************ 00:05:35.384 23:26:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:05:35.384 23:26:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59770 00:05:35.384 23:26:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59770 /var/tmp/spdk.sock 00:05:35.384 23:26:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59770 ']' 00:05:35.384 23:26:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:35.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:35.384 23:26:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:35.384 23:26:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:35.384 23:26:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:35.384 23:26:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:35.384 23:26:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:35.645 [2024-09-28 23:26:23.559202] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:05:35.645 [2024-09-28 23:26:23.559295] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59770 ] 00:05:35.645 [2024-09-28 23:26:23.705606] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
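The check_remaining_locks helper traced in the test above asserts that the lock files left in /var/tmp match exactly the cores claimed by the surviving target. A minimal standalone sketch of the same comparison, assuming the three lock files for mask 0x7 (cores 0-2) are present:

    locks=(/var/tmp/spdk_cpu_lock_*)                   # what actually exists
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) # what mask 0x7 should leave behind
    [[ "${locks[*]}" == "${locks_expected[*]}" ]] \
        && echo "locks match" \
        || echo "unexpected locks: ${locks[*]}"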
00:05:35.645 [2024-09-28 23:26:23.705658] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:35.903 [2024-09-28 23:26:23.918065] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:35.903 [2024-09-28 23:26:23.918289] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:05:35.903 [2024-09-28 23:26:23.918348] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:36.470 23:26:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:36.470 23:26:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:36.470 23:26:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59788 00:05:36.470 23:26:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59788 /var/tmp/spdk2.sock 00:05:36.470 23:26:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59788 ']' 00:05:36.470 23:26:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:36.470 23:26:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:36.470 23:26:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:36.470 23:26:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:36.470 23:26:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:36.470 23:26:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.727 [2024-09-28 23:26:24.638672] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:05:36.727 [2024-09-28 23:26:24.638963] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59788 ] 00:05:36.727 [2024-09-28 23:26:24.786877] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:36.727 [2024-09-28 23:26:24.786912] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:36.984 [2024-09-28 23:26:25.099380] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:05:36.984 [2024-09-28 23:26:25.102633] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:05:36.984 [2024-09-28 23:26:25.102659] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:05:37.915 23:26:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:37.915 23:26:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:37.915 23:26:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:37.915 23:26:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.915 23:26:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:37.915 23:26:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.915 23:26:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:37.915 23:26:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:37.915 23:26:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:37.915 23:26:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:37.915 23:26:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:37.915 23:26:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:37.915 23:26:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:37.915 23:26:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:37.915 23:26:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.915 23:26:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:37.915 [2024-09-28 23:26:26.066671] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59770 has claimed it. 00:05:37.915 request: 00:05:37.915 { 00:05:37.915 "method": "framework_enable_cpumask_locks", 00:05:37.915 "req_id": 1 00:05:37.915 } 00:05:37.915 Got JSON-RPC error response 00:05:37.915 response: 00:05:37.915 { 00:05:37.915 "code": -32603, 00:05:37.915 "message": "Failed to claim CPU core: 2" 00:05:37.915 } 00:05:37.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
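The -32603 error above is the expected outcome: process 59770 (mask 0x7, cores 0-2) has just claimed its locks via the same RPC, so this target (mask 0x1c, cores 2-4) collides on the shared core. The overlap can be confirmed with plain shell arithmetic, using the mask values from the spdk_tgt invocations in this log:

    # 0x7 = 0b00111 (cores 0-2), 0x1c = 0b11100 (cores 2-4)
    printf 'overlap mask: 0x%x\n' $(( 0x7 & 0x1c ))    # -> 0x4, i.e. core 2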
00:05:37.915 23:26:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:37.915 23:26:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:37.915 23:26:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:37.915 23:26:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:37.915 23:26:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:37.915 23:26:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59770 /var/tmp/spdk.sock 00:05:37.915 23:26:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59770 ']' 00:05:37.915 23:26:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:37.915 23:26:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:37.915 23:26:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:37.915 23:26:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:37.915 23:26:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:38.173 23:26:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:38.173 23:26:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:38.173 23:26:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59788 /var/tmp/spdk2.sock 00:05:38.173 23:26:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59788 ']' 00:05:38.173 23:26:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:38.173 23:26:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:38.173 23:26:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:38.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
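waitforlisten, traced throughout this section, just polls until the target's RPC socket exists. An illustrative re-implementation of the idea (a sketch only, not the exact helper from autotest_common.sh, which also probes the pid itself):

    wait_for_socket() {
        local rpc_addr=$1 max_retries=100
        echo "Waiting for process to start up and listen on UNIX domain socket ${rpc_addr}..."
        for (( i = 0; i < max_retries; i++ )); do
            [[ -S "$rpc_addr" ]] && return 0    # socket is up
            sleep 0.1
        done
        return 1                                # timed out
    }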
00:05:38.173 23:26:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:38.173 23:26:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:38.431 23:26:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:38.431 ************************************ 00:05:38.431 END TEST locking_overlapped_coremask_via_rpc 00:05:38.431 ************************************ 00:05:38.431 23:26:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:38.431 23:26:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:38.431 23:26:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:38.431 23:26:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:38.431 23:26:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:38.431 00:05:38.431 real 0m2.977s 00:05:38.431 user 0m0.980s 00:05:38.431 sys 0m0.109s 00:05:38.431 23:26:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:38.431 23:26:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:38.431 23:26:26 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:38.431 23:26:26 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59770 ]] 00:05:38.431 23:26:26 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59770 00:05:38.431 23:26:26 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59770 ']' 00:05:38.431 23:26:26 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59770 00:05:38.431 23:26:26 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:05:38.431 23:26:26 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:38.431 23:26:26 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59770 00:05:38.431 killing process with pid 59770 00:05:38.431 23:26:26 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:38.431 23:26:26 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:38.431 23:26:26 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59770' 00:05:38.431 23:26:26 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 59770 00:05:38.431 23:26:26 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 59770 00:05:39.806 23:26:27 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59788 ]] 00:05:39.806 23:26:27 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59788 00:05:39.806 23:26:27 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59788 ']' 00:05:39.806 23:26:27 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59788 00:05:39.806 23:26:27 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:05:39.806 23:26:27 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:39.806 
23:26:27 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59788 00:05:39.806 killing process with pid 59788 00:05:39.806 23:26:27 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:05:39.806 23:26:27 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:05:39.806 23:26:27 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59788' 00:05:39.806 23:26:27 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 59788 00:05:39.806 23:26:27 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 59788 00:05:41.203 23:26:29 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:41.203 Process with pid 59770 is not found 00:05:41.203 23:26:29 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:41.203 23:26:29 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59770 ]] 00:05:41.203 23:26:29 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59770 00:05:41.203 23:26:29 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59770 ']' 00:05:41.203 23:26:29 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59770 00:05:41.203 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (59770) - No such process 00:05:41.203 23:26:29 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 59770 is not found' 00:05:41.203 23:26:29 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59788 ]] 00:05:41.203 23:26:29 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59788 00:05:41.203 Process with pid 59788 is not found 00:05:41.203 23:26:29 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59788 ']' 00:05:41.203 23:26:29 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59788 00:05:41.203 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (59788) - No such process 00:05:41.203 23:26:29 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 59788 is not found' 00:05:41.203 23:26:29 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:41.203 ************************************ 00:05:41.203 END TEST cpu_locks 00:05:41.203 ************************************ 00:05:41.203 00:05:41.203 real 0m30.034s 00:05:41.203 user 0m50.769s 00:05:41.203 sys 0m4.535s 00:05:41.203 23:26:29 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:41.203 23:26:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:41.203 ************************************ 00:05:41.203 END TEST event 00:05:41.203 ************************************ 00:05:41.203 00:05:41.203 real 0m58.794s 00:05:41.203 user 1m47.570s 00:05:41.203 sys 0m7.549s 00:05:41.203 23:26:29 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:41.203 23:26:29 event -- common/autotest_common.sh@10 -- # set +x 00:05:41.203 23:26:29 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:41.203 23:26:29 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:41.204 23:26:29 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:41.204 23:26:29 -- common/autotest_common.sh@10 -- # set +x 00:05:41.204 ************************************ 00:05:41.204 START TEST thread 00:05:41.204 ************************************ 00:05:41.204 23:26:29 thread -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:41.204 * Looking for test storage... 
00:05:41.204 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:05:41.204 23:26:29 thread -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:41.204 23:26:29 thread -- common/autotest_common.sh@1681 -- # lcov --version 00:05:41.204 23:26:29 thread -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:41.204 23:26:29 thread -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:41.204 23:26:29 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:41.204 23:26:29 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:41.204 23:26:29 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:41.204 23:26:29 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:41.204 23:26:29 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:41.204 23:26:29 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:41.204 23:26:29 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:41.204 23:26:29 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:41.204 23:26:29 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:41.204 23:26:29 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:41.204 23:26:29 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:41.204 23:26:29 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:41.204 23:26:29 thread -- scripts/common.sh@345 -- # : 1 00:05:41.204 23:26:29 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:41.204 23:26:29 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:41.204 23:26:29 thread -- scripts/common.sh@365 -- # decimal 1 00:05:41.204 23:26:29 thread -- scripts/common.sh@353 -- # local d=1 00:05:41.204 23:26:29 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:41.204 23:26:29 thread -- scripts/common.sh@355 -- # echo 1 00:05:41.204 23:26:29 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:41.204 23:26:29 thread -- scripts/common.sh@366 -- # decimal 2 00:05:41.204 23:26:29 thread -- scripts/common.sh@353 -- # local d=2 00:05:41.204 23:26:29 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:41.204 23:26:29 thread -- scripts/common.sh@355 -- # echo 2 00:05:41.204 23:26:29 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:41.204 23:26:29 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:41.204 23:26:29 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:41.204 23:26:29 thread -- scripts/common.sh@368 -- # return 0 00:05:41.204 23:26:29 thread -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:41.204 23:26:29 thread -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:41.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.204 --rc genhtml_branch_coverage=1 00:05:41.204 --rc genhtml_function_coverage=1 00:05:41.204 --rc genhtml_legend=1 00:05:41.204 --rc geninfo_all_blocks=1 00:05:41.204 --rc geninfo_unexecuted_blocks=1 00:05:41.204 00:05:41.204 ' 00:05:41.204 23:26:29 thread -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:41.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.204 --rc genhtml_branch_coverage=1 00:05:41.204 --rc genhtml_function_coverage=1 00:05:41.204 --rc genhtml_legend=1 00:05:41.204 --rc geninfo_all_blocks=1 00:05:41.204 --rc geninfo_unexecuted_blocks=1 00:05:41.204 00:05:41.204 ' 00:05:41.204 23:26:29 thread -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:41.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:05:41.204 --rc genhtml_branch_coverage=1 00:05:41.204 --rc genhtml_function_coverage=1 00:05:41.204 --rc genhtml_legend=1 00:05:41.204 --rc geninfo_all_blocks=1 00:05:41.204 --rc geninfo_unexecuted_blocks=1 00:05:41.204 00:05:41.204 ' 00:05:41.204 23:26:29 thread -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:41.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.204 --rc genhtml_branch_coverage=1 00:05:41.204 --rc genhtml_function_coverage=1 00:05:41.204 --rc genhtml_legend=1 00:05:41.204 --rc geninfo_all_blocks=1 00:05:41.204 --rc geninfo_unexecuted_blocks=1 00:05:41.204 00:05:41.204 ' 00:05:41.204 23:26:29 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:41.204 23:26:29 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:05:41.204 23:26:29 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:41.204 23:26:29 thread -- common/autotest_common.sh@10 -- # set +x 00:05:41.204 ************************************ 00:05:41.204 START TEST thread_poller_perf 00:05:41.204 ************************************ 00:05:41.204 23:26:29 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:41.465 [2024-09-28 23:26:29.372815] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:05:41.465 [2024-09-28 23:26:29.372898] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59948 ] 00:05:41.465 [2024-09-28 23:26:29.517271] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.739 [2024-09-28 23:26:29.719974] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.739 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:05:43.117 ====================================== 00:05:43.117 busy:2609330032 (cyc) 00:05:43.117 total_run_count: 337000 00:05:43.117 tsc_hz: 2600000000 (cyc) 00:05:43.117 ====================================== 00:05:43.117 poller_cost: 7742 (cyc), 2977 (nsec) 00:05:43.117 00:05:43.117 real 0m1.603s 00:05:43.117 user 0m1.417s 00:05:43.117 sys 0m0.077s 00:05:43.117 ************************************ 00:05:43.117 END TEST thread_poller_perf 00:05:43.117 ************************************ 00:05:43.117 23:26:30 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:43.117 23:26:30 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:43.117 23:26:30 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:43.117 23:26:30 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:05:43.117 23:26:30 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:43.117 23:26:30 thread -- common/autotest_common.sh@10 -- # set +x 00:05:43.117 ************************************ 00:05:43.117 START TEST thread_poller_perf 00:05:43.117 ************************************ 00:05:43.117 23:26:30 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:43.117 [2024-09-28 23:26:31.022269] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:05:43.117 [2024-09-28 23:26:31.022487] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59985 ] 00:05:43.117 [2024-09-28 23:26:31.159702] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.376 [2024-09-28 23:26:31.328832] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.376 Running 1000 pollers for 1 seconds with 0 microseconds period. 
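The poller_cost figure is simply busy cycles divided by the run count, converted to nanoseconds through the reported TSC frequency. Reproducing the first run's summary above with shell arithmetic (values copied from the log):

    echo $(( 2609330032 / 337000 ))               # -> 7742 cycles per poll
    echo $(( 7742 * 1000000000 / 2600000000 ))    # -> 2977 nsec per poll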
00:05:44.754 ====================================== 00:05:44.754 busy:2602671790 (cyc) 00:05:44.754 total_run_count: 5028000 00:05:44.754 tsc_hz: 2600000000 (cyc) 00:05:44.754 ====================================== 00:05:44.754 poller_cost: 517 (cyc), 198 (nsec) 00:05:44.754 00:05:44.754 real 0m1.564s 00:05:44.754 user 0m1.382s 00:05:44.754 sys 0m0.075s 00:05:44.754 23:26:32 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:44.754 23:26:32 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:44.754 ************************************ 00:05:44.754 END TEST thread_poller_perf 00:05:44.754 ************************************ 00:05:44.754 23:26:32 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:44.754 00:05:44.754 real 0m3.398s 00:05:44.754 user 0m2.910s 00:05:44.754 sys 0m0.268s 00:05:44.754 ************************************ 00:05:44.754 END TEST thread 00:05:44.754 ************************************ 00:05:44.754 23:26:32 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:44.754 23:26:32 thread -- common/autotest_common.sh@10 -- # set +x 00:05:44.754 23:26:32 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:05:44.754 23:26:32 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:44.754 23:26:32 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:44.754 23:26:32 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:44.754 23:26:32 -- common/autotest_common.sh@10 -- # set +x 00:05:44.754 ************************************ 00:05:44.754 START TEST app_cmdline 00:05:44.754 ************************************ 00:05:44.754 23:26:32 app_cmdline -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:44.754 * Looking for test storage... 00:05:44.754 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:05:44.754 23:26:32 app_cmdline -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:44.754 23:26:32 app_cmdline -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:44.754 23:26:32 app_cmdline -- common/autotest_common.sh@1681 -- # lcov --version 00:05:44.754 23:26:32 app_cmdline -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:44.754 23:26:32 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:44.754 23:26:32 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:44.754 23:26:32 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:44.754 23:26:32 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:05:44.754 23:26:32 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:05:44.754 23:26:32 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:05:44.754 23:26:32 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:05:44.754 23:26:32 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:05:44.754 23:26:32 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:05:44.754 23:26:32 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:05:44.754 23:26:32 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:44.754 23:26:32 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:05:44.754 23:26:32 app_cmdline -- scripts/common.sh@345 -- # : 1 00:05:44.754 23:26:32 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:44.754 23:26:32 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:44.754 23:26:32 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:05:44.754 23:26:32 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:05:44.754 23:26:32 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:44.754 23:26:32 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:05:44.755 23:26:32 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:05:44.755 23:26:32 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:05:44.755 23:26:32 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:05:44.755 23:26:32 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:44.755 23:26:32 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:05:44.755 23:26:32 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:05:44.755 23:26:32 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:44.755 23:26:32 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:44.755 23:26:32 app_cmdline -- scripts/common.sh@368 -- # return 0 00:05:44.755 23:26:32 app_cmdline -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:44.755 23:26:32 app_cmdline -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:44.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.755 --rc genhtml_branch_coverage=1 00:05:44.755 --rc genhtml_function_coverage=1 00:05:44.755 --rc genhtml_legend=1 00:05:44.755 --rc geninfo_all_blocks=1 00:05:44.755 --rc geninfo_unexecuted_blocks=1 00:05:44.755 00:05:44.755 ' 00:05:44.755 23:26:32 app_cmdline -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:44.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.755 --rc genhtml_branch_coverage=1 00:05:44.755 --rc genhtml_function_coverage=1 00:05:44.755 --rc genhtml_legend=1 00:05:44.755 --rc geninfo_all_blocks=1 00:05:44.755 --rc geninfo_unexecuted_blocks=1 00:05:44.755 00:05:44.755 ' 00:05:44.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:44.755 23:26:32 app_cmdline -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:44.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.755 --rc genhtml_branch_coverage=1 00:05:44.755 --rc genhtml_function_coverage=1 00:05:44.755 --rc genhtml_legend=1 00:05:44.755 --rc geninfo_all_blocks=1 00:05:44.755 --rc geninfo_unexecuted_blocks=1 00:05:44.755 00:05:44.755 ' 00:05:44.755 23:26:32 app_cmdline -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:44.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.755 --rc genhtml_branch_coverage=1 00:05:44.755 --rc genhtml_function_coverage=1 00:05:44.755 --rc genhtml_legend=1 00:05:44.755 --rc geninfo_all_blocks=1 00:05:44.755 --rc geninfo_unexecuted_blocks=1 00:05:44.755 00:05:44.755 ' 00:05:44.755 23:26:32 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:44.755 23:26:32 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=60068 00:05:44.755 23:26:32 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 60068 00:05:44.755 23:26:32 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 60068 ']' 00:05:44.755 23:26:32 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.755 23:26:32 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:44.755 23:26:32 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:44.755 23:26:32 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:44.755 23:26:32 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:44.755 23:26:32 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:44.755 [2024-09-28 23:26:32.855800] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:05:44.755 [2024-09-28 23:26:32.856034] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60068 ] 00:05:45.017 [2024-09-28 23:26:33.000830] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.278 [2024-09-28 23:26:33.194900] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.852 23:26:33 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:45.852 23:26:33 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:05:45.852 23:26:33 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:05:46.114 { 00:05:46.114 "version": "SPDK v25.01-pre git sha1 09cc66129", 00:05:46.114 "fields": { 00:05:46.114 "major": 25, 00:05:46.114 "minor": 1, 00:05:46.114 "patch": 0, 00:05:46.114 "suffix": "-pre", 00:05:46.114 "commit": "09cc66129" 00:05:46.114 } 00:05:46.114 } 00:05:46.114 23:26:34 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:46.114 23:26:34 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:46.114 23:26:34 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:46.114 23:26:34 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:46.114 23:26:34 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:46.114 23:26:34 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:46.114 23:26:34 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:46.114 23:26:34 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:46.114 23:26:34 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:46.114 23:26:34 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:46.114 23:26:34 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:46.114 23:26:34 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:46.114 23:26:34 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:46.114 23:26:34 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:05:46.114 23:26:34 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:46.114 23:26:34 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:46.114 23:26:34 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:46.114 23:26:34 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:46.114 23:26:34 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:46.114 23:26:34 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:46.114 23:26:34 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:46.114 23:26:34 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:46.115 23:26:34 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:05:46.115 23:26:34 app_cmdline -- common/autotest_common.sh@653 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:46.376 request: 00:05:46.376 { 00:05:46.376 "method": "env_dpdk_get_mem_stats", 00:05:46.376 "req_id": 1 00:05:46.376 } 00:05:46.376 Got JSON-RPC error response 00:05:46.376 response: 00:05:46.376 { 00:05:46.376 "code": -32601, 00:05:46.376 "message": "Method not found" 00:05:46.376 } 00:05:46.376 23:26:34 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:05:46.376 23:26:34 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:46.376 23:26:34 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:46.376 23:26:34 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:46.376 23:26:34 app_cmdline -- app/cmdline.sh@1 -- # killprocess 60068 00:05:46.376 23:26:34 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 60068 ']' 00:05:46.376 23:26:34 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 60068 00:05:46.376 23:26:34 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:05:46.376 23:26:34 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:46.376 23:26:34 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60068 00:05:46.376 killing process with pid 60068 00:05:46.376 23:26:34 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:46.376 23:26:34 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:46.376 23:26:34 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60068' 00:05:46.376 23:26:34 app_cmdline -- common/autotest_common.sh@969 -- # kill 60068 00:05:46.376 23:26:34 app_cmdline -- common/autotest_common.sh@974 -- # wait 60068 00:05:48.286 00:05:48.286 real 0m3.487s 00:05:48.286 user 0m3.692s 00:05:48.286 sys 0m0.601s 00:05:48.286 ************************************ 00:05:48.286 END TEST app_cmdline 00:05:48.286 ************************************ 00:05:48.286 23:26:36 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:48.286 23:26:36 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:48.286 23:26:36 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:05:48.286 23:26:36 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:48.286 23:26:36 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:48.286 23:26:36 -- common/autotest_common.sh@10 -- # set +x 00:05:48.286 ************************************ 00:05:48.286 START TEST version 00:05:48.286 ************************************ 00:05:48.286 23:26:36 version -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:05:48.286 * Looking for test storage... 
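The -32601 response above is precisely what this test checks: the target was launched with --rpcs-allowed spdk_get_version,rpc_get_methods, so any method outside that allow-list is rejected at the JSON-RPC layer before dispatch. The same behavior can be reproduced against a running target (assuming the default /var/tmp/spdk.sock):

    scripts/rpc.py spdk_get_version            # allowed -> prints the version JSON
    scripts/rpc.py env_dpdk_get_mem_stats \
        || echo "rejected with -32601, as expected"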
00:05:48.286 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:05:48.286 23:26:36 version -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:48.286 23:26:36 version -- common/autotest_common.sh@1681 -- # lcov --version 00:05:48.286 23:26:36 version -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:48.286 23:26:36 version -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:48.286 23:26:36 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:48.286 23:26:36 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:48.286 23:26:36 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:48.286 23:26:36 version -- scripts/common.sh@336 -- # IFS=.-: 00:05:48.286 23:26:36 version -- scripts/common.sh@336 -- # read -ra ver1 00:05:48.286 23:26:36 version -- scripts/common.sh@337 -- # IFS=.-: 00:05:48.286 23:26:36 version -- scripts/common.sh@337 -- # read -ra ver2 00:05:48.286 23:26:36 version -- scripts/common.sh@338 -- # local 'op=<' 00:05:48.286 23:26:36 version -- scripts/common.sh@340 -- # ver1_l=2 00:05:48.286 23:26:36 version -- scripts/common.sh@341 -- # ver2_l=1 00:05:48.286 23:26:36 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:48.286 23:26:36 version -- scripts/common.sh@344 -- # case "$op" in 00:05:48.286 23:26:36 version -- scripts/common.sh@345 -- # : 1 00:05:48.286 23:26:36 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:48.286 23:26:36 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:48.286 23:26:36 version -- scripts/common.sh@365 -- # decimal 1 00:05:48.286 23:26:36 version -- scripts/common.sh@353 -- # local d=1 00:05:48.286 23:26:36 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:48.286 23:26:36 version -- scripts/common.sh@355 -- # echo 1 00:05:48.286 23:26:36 version -- scripts/common.sh@365 -- # ver1[v]=1 00:05:48.286 23:26:36 version -- scripts/common.sh@366 -- # decimal 2 00:05:48.286 23:26:36 version -- scripts/common.sh@353 -- # local d=2 00:05:48.286 23:26:36 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:48.286 23:26:36 version -- scripts/common.sh@355 -- # echo 2 00:05:48.286 23:26:36 version -- scripts/common.sh@366 -- # ver2[v]=2 00:05:48.286 23:26:36 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:48.286 23:26:36 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:48.286 23:26:36 version -- scripts/common.sh@368 -- # return 0 00:05:48.286 23:26:36 version -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:48.286 23:26:36 version -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:48.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.286 --rc genhtml_branch_coverage=1 00:05:48.286 --rc genhtml_function_coverage=1 00:05:48.286 --rc genhtml_legend=1 00:05:48.286 --rc geninfo_all_blocks=1 00:05:48.286 --rc geninfo_unexecuted_blocks=1 00:05:48.286 00:05:48.286 ' 00:05:48.286 23:26:36 version -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:48.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.286 --rc genhtml_branch_coverage=1 00:05:48.286 --rc genhtml_function_coverage=1 00:05:48.286 --rc genhtml_legend=1 00:05:48.286 --rc geninfo_all_blocks=1 00:05:48.286 --rc geninfo_unexecuted_blocks=1 00:05:48.286 00:05:48.286 ' 00:05:48.286 23:26:36 version -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:48.286 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:05:48.286 --rc genhtml_branch_coverage=1 00:05:48.286 --rc genhtml_function_coverage=1 00:05:48.286 --rc genhtml_legend=1 00:05:48.286 --rc geninfo_all_blocks=1 00:05:48.286 --rc geninfo_unexecuted_blocks=1 00:05:48.286 00:05:48.286 ' 00:05:48.286 23:26:36 version -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:48.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.286 --rc genhtml_branch_coverage=1 00:05:48.286 --rc genhtml_function_coverage=1 00:05:48.286 --rc genhtml_legend=1 00:05:48.286 --rc geninfo_all_blocks=1 00:05:48.286 --rc geninfo_unexecuted_blocks=1 00:05:48.286 00:05:48.286 ' 00:05:48.286 23:26:36 version -- app/version.sh@17 -- # get_header_version major 00:05:48.286 23:26:36 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:48.287 23:26:36 version -- app/version.sh@14 -- # tr -d '"' 00:05:48.287 23:26:36 version -- app/version.sh@14 -- # cut -f2 00:05:48.287 23:26:36 version -- app/version.sh@17 -- # major=25 00:05:48.287 23:26:36 version -- app/version.sh@18 -- # get_header_version minor 00:05:48.287 23:26:36 version -- app/version.sh@14 -- # cut -f2 00:05:48.287 23:26:36 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:48.287 23:26:36 version -- app/version.sh@14 -- # tr -d '"' 00:05:48.287 23:26:36 version -- app/version.sh@18 -- # minor=1 00:05:48.287 23:26:36 version -- app/version.sh@19 -- # get_header_version patch 00:05:48.287 23:26:36 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:48.287 23:26:36 version -- app/version.sh@14 -- # cut -f2 00:05:48.287 23:26:36 version -- app/version.sh@14 -- # tr -d '"' 00:05:48.287 23:26:36 version -- app/version.sh@19 -- # patch=0 00:05:48.287 23:26:36 version -- app/version.sh@20 -- # get_header_version suffix 00:05:48.287 23:26:36 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:48.287 23:26:36 version -- app/version.sh@14 -- # cut -f2 00:05:48.287 23:26:36 version -- app/version.sh@14 -- # tr -d '"' 00:05:48.287 23:26:36 version -- app/version.sh@20 -- # suffix=-pre 00:05:48.287 23:26:36 version -- app/version.sh@22 -- # version=25.1 00:05:48.287 23:26:36 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:48.287 23:26:36 version -- app/version.sh@28 -- # version=25.1rc0 00:05:48.287 23:26:36 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:05:48.287 23:26:36 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:48.287 23:26:36 version -- app/version.sh@30 -- # py_version=25.1rc0 00:05:48.287 23:26:36 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:05:48.287 ************************************ 00:05:48.287 END TEST version 00:05:48.287 ************************************ 00:05:48.287 00:05:48.287 real 0m0.211s 00:05:48.287 user 0m0.141s 00:05:48.287 sys 0m0.095s 00:05:48.287 23:26:36 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:48.287 23:26:36 version -- common/autotest_common.sh@10 -- # set +x 00:05:48.287 23:26:36 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:05:48.287 23:26:36 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:05:48.287 23:26:36 -- spdk/autotest.sh@194 -- # uname -s 00:05:48.287 23:26:36 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:05:48.287 23:26:36 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:48.287 23:26:36 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:48.287 23:26:36 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:05:48.287 23:26:36 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:05:48.287 23:26:36 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:05:48.287 23:26:36 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:48.287 23:26:36 -- common/autotest_common.sh@10 -- # set +x 00:05:48.287 ************************************ 00:05:48.287 START TEST blockdev_nvme 00:05:48.287 ************************************ 00:05:48.287 23:26:36 blockdev_nvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:05:48.549 * Looking for test storage... 00:05:48.549 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:05:48.549 23:26:36 blockdev_nvme -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:48.549 23:26:36 blockdev_nvme -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:48.549 23:26:36 blockdev_nvme -- common/autotest_common.sh@1681 -- # lcov --version 00:05:48.549 23:26:36 blockdev_nvme -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:48.549 23:26:36 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:48.549 23:26:36 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:48.549 23:26:36 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:48.549 23:26:36 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:05:48.549 23:26:36 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:05:48.549 23:26:36 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:05:48.549 23:26:36 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:05:48.549 23:26:36 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:05:48.549 23:26:36 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:05:48.549 23:26:36 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:05:48.549 23:26:36 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:48.549 23:26:36 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:05:48.549 23:26:36 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:05:48.549 23:26:36 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:48.549 23:26:36 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:48.549 23:26:36 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:05:48.549 23:26:36 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:05:48.549 23:26:36 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:48.549 23:26:36 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:05:48.549 23:26:36 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:05:48.549 23:26:36 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:05:48.549 23:26:36 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:05:48.549 23:26:36 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:48.549 23:26:36 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:05:48.549 23:26:36 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:05:48.549 23:26:36 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:48.549 23:26:36 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:48.549 23:26:36 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:05:48.549 23:26:36 blockdev_nvme -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:48.549 23:26:36 blockdev_nvme -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:48.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.549 --rc genhtml_branch_coverage=1 00:05:48.549 --rc genhtml_function_coverage=1 00:05:48.549 --rc genhtml_legend=1 00:05:48.549 --rc geninfo_all_blocks=1 00:05:48.549 --rc geninfo_unexecuted_blocks=1 00:05:48.549 00:05:48.549 ' 00:05:48.549 23:26:36 blockdev_nvme -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:48.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.549 --rc genhtml_branch_coverage=1 00:05:48.549 --rc genhtml_function_coverage=1 00:05:48.549 --rc genhtml_legend=1 00:05:48.549 --rc geninfo_all_blocks=1 00:05:48.549 --rc geninfo_unexecuted_blocks=1 00:05:48.549 00:05:48.549 ' 00:05:48.549 23:26:36 blockdev_nvme -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:48.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.549 --rc genhtml_branch_coverage=1 00:05:48.549 --rc genhtml_function_coverage=1 00:05:48.549 --rc genhtml_legend=1 00:05:48.549 --rc geninfo_all_blocks=1 00:05:48.549 --rc geninfo_unexecuted_blocks=1 00:05:48.549 00:05:48.549 ' 00:05:48.549 23:26:36 blockdev_nvme -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:48.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.549 --rc genhtml_branch_coverage=1 00:05:48.549 --rc genhtml_function_coverage=1 00:05:48.549 --rc genhtml_legend=1 00:05:48.549 --rc geninfo_all_blocks=1 00:05:48.549 --rc geninfo_unexecuted_blocks=1 00:05:48.549 00:05:48.549 ' 00:05:48.549 23:26:36 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:48.549 23:26:36 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:05:48.549 23:26:36 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:05:48.549 23:26:36 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:05:48.549 23:26:36 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:05:48.549 23:26:36 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:05:48.549 23:26:36 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:05:48.549 23:26:36 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:05:48.549 23:26:36 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:05:48.549 23:26:36 blockdev_nvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:05:48.549 23:26:36 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:05:48.549 23:26:36 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:05:48.549 23:26:36 blockdev_nvme -- bdev/blockdev.sh@673 -- # uname -s 00:05:48.549 23:26:36 blockdev_nvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:05:48.549 23:26:36 blockdev_nvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:05:48.549 23:26:36 blockdev_nvme -- bdev/blockdev.sh@681 -- # test_type=nvme 00:05:48.549 23:26:36 blockdev_nvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:05:48.549 23:26:36 blockdev_nvme -- bdev/blockdev.sh@683 -- # dek= 00:05:48.549 23:26:36 blockdev_nvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:05:48.549 23:26:36 blockdev_nvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:05:48.549 23:26:36 blockdev_nvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:05:48.549 23:26:36 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == bdev ]] 00:05:48.549 23:26:36 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == crypto_* ]] 00:05:48.549 23:26:36 blockdev_nvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:05:48.549 23:26:36 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=60251 00:05:48.549 23:26:36 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:05:48.549 23:26:36 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 60251 00:05:48.549 23:26:36 blockdev_nvme -- common/autotest_common.sh@831 -- # '[' -z 60251 ']' 00:05:48.549 23:26:36 blockdev_nvme -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:48.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:48.549 23:26:36 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:05:48.549 23:26:36 blockdev_nvme -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:48.549 23:26:36 blockdev_nvme -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:48.549 23:26:36 blockdev_nvme -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:48.549 23:26:36 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:48.549 [2024-09-28 23:26:36.660966] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:05:48.549 [2024-09-28 23:26:36.661238] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60251 ] 00:05:48.808 [2024-09-28 23:26:36.808246] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.067 [2024-09-28 23:26:36.980143] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.634 23:26:37 blockdev_nvme -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:49.634 23:26:37 blockdev_nvme -- common/autotest_common.sh@864 -- # return 0 00:05:49.634 23:26:37 blockdev_nvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:05:49.634 23:26:37 blockdev_nvme -- bdev/blockdev.sh@698 -- # setup_nvme_conf 00:05:49.634 23:26:37 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:05:49.634 23:26:37 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:05:49.634 23:26:37 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:49.634 23:26:37 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:05:49.634 23:26:37 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:49.634 23:26:37 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:49.896 23:26:37 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:49.896 23:26:37 blockdev_nvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:05:49.896 23:26:37 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:49.896 23:26:37 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:49.896 23:26:37 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:49.896 23:26:37 blockdev_nvme -- bdev/blockdev.sh@739 -- # cat 00:05:49.896 23:26:37 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:05:49.896 23:26:37 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:49.896 23:26:37 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:49.896 23:26:37 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:49.896 23:26:37 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:05:49.896 23:26:37 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:49.896 23:26:37 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:49.897 23:26:37 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:49.897 23:26:37 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:05:49.897 23:26:37 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:49.897 23:26:37 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:49.897 23:26:37 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:49.897 23:26:37 blockdev_nvme -- 
bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:05:49.897 23:26:37 blockdev_nvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:05:49.897 23:26:37 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:49.897 23:26:37 blockdev_nvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:05:49.897 23:26:37 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:49.897 23:26:37 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:49.897 23:26:37 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:05:49.897 23:26:37 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:05:49.897 23:26:37 blockdev_nvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "ffa50498-48c5-4f8d-afa8-c0759c243fd6"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "ffa50498-48c5-4f8d-afa8-c0759c243fd6",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "fa329ffd-37c7-44e5-8973-3c8869da56d0"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "fa329ffd-37c7-44e5-8973-3c8869da56d0",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "bd333093-a976-4fe7-bf8c-dd0483c3d90f"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "bd333093-a976-4fe7-bf8c-dd0483c3d90f",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "a5a9d134-b0a2-4eb9-936c-a7b7dfcb5dc3"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "a5a9d134-b0a2-4eb9-936c-a7b7dfcb5dc3",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "ae154606-ce28-4299-bbfc-3cda41077be8"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "ae154606-ce28-4299-bbfc-3cda41077be8",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "edbf85bb-0bc4-46ff-b250-42e45ac92d0a"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "edbf85bb-0bc4-46ff-b250-42e45ac92d0a",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:05:49.897 23:26:37 blockdev_nvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:05:49.897 23:26:37 blockdev_nvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:05:49.897 23:26:37 blockdev_nvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:05:49.897 23:26:37 blockdev_nvme -- bdev/blockdev.sh@753 -- # killprocess 60251 00:05:49.897 23:26:37 blockdev_nvme -- common/autotest_common.sh@950 -- # '[' -z 60251 ']' 00:05:49.897 23:26:37 blockdev_nvme -- common/autotest_common.sh@954 -- # kill -0 60251 00:05:49.897 23:26:37 blockdev_nvme -- common/autotest_common.sh@955 -- # uname 00:05:49.897 23:26:37 
blockdev_nvme -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:49.897 23:26:37 blockdev_nvme -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60251 00:05:49.897 23:26:37 blockdev_nvme -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:49.897 killing process with pid 60251 00:05:49.897 23:26:37 blockdev_nvme -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:49.897 23:26:37 blockdev_nvme -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60251' 00:05:49.897 23:26:37 blockdev_nvme -- common/autotest_common.sh@969 -- # kill 60251 00:05:49.897 23:26:37 blockdev_nvme -- common/autotest_common.sh@974 -- # wait 60251 00:05:51.813 23:26:39 blockdev_nvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:05:51.813 23:26:39 blockdev_nvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:05:51.813 23:26:39 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:05:51.813 23:26:39 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:51.813 23:26:39 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:51.813 ************************************ 00:05:51.813 START TEST bdev_hello_world 00:05:51.813 ************************************ 00:05:51.814 23:26:39 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:05:51.814 [2024-09-28 23:26:39.885535] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:05:51.814 [2024-09-28 23:26:39.885654] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60335 ] 00:05:52.073 [2024-09-28 23:26:40.031865] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.073 [2024-09-28 23:26:40.212974] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.640 [2024-09-28 23:26:40.722090] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:05:52.640 [2024-09-28 23:26:40.722132] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:05:52.640 [2024-09-28 23:26:40.722149] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:05:52.640 [2024-09-28 23:26:40.724287] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:05:52.640 [2024-09-28 23:26:40.724966] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:05:52.640 [2024-09-28 23:26:40.724990] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:05:52.640 [2024-09-28 23:26:40.725215] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:05:52.640 00:05:52.640 [2024-09-28 23:26:40.725233] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:05:53.571 ************************************ 00:05:53.571 END TEST bdev_hello_world 00:05:53.571 ************************************ 00:05:53.571 00:05:53.571 real 0m1.584s 00:05:53.571 user 0m1.276s 00:05:53.571 sys 0m0.201s 00:05:53.571 23:26:41 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:53.571 23:26:41 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:05:53.571 23:26:41 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:05:53.571 23:26:41 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:05:53.571 23:26:41 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:53.572 23:26:41 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:53.572 ************************************ 00:05:53.572 START TEST bdev_bounds 00:05:53.572 ************************************ 00:05:53.572 23:26:41 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:05:53.572 23:26:41 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=60372 00:05:53.572 Process bdevio pid: 60372 00:05:53.572 23:26:41 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:05:53.572 23:26:41 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 60372' 00:05:53.572 23:26:41 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 60372 00:05:53.572 23:26:41 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 60372 ']' 00:05:53.572 23:26:41 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:05:53.572 23:26:41 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:53.572 23:26:41 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:53.572 23:26:41 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:53.572 23:26:41 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:53.572 23:26:41 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:05:53.572 [2024-09-28 23:26:41.541760] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
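The bdev_bounds test starting here runs bdevio in wait-for-tests mode and then fires every suite over RPC; a minimal reconstruction of that control flow follows (paths and flags as traced; startup polling and error handling trimmed):

    bdevio=/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio
    conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json

    # -w: start up, then wait for an RPC before running any tests;
    # -s 0: reserve no extra memory for the app (PRE_RESERVED_MEM=0).
    "$bdevio" -w -s 0 --json "$conf" &
    bdevio_pid=$!

    # Once /var/tmp/spdk.sock answers (same wait pattern as above),
    # run all registered suites and tear the app down.
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
    killprocess "$bdevio_pid"   # autotest_common.sh helper seen in the trace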
00:05:53.572 [2024-09-28 23:26:41.541885] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60372 ] 00:05:53.572 [2024-09-28 23:26:41.691937] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:53.830 [2024-09-28 23:26:41.876505] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:53.830 [2024-09-28 23:26:41.876609] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:05:53.830 [2024-09-28 23:26:41.876744] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.396 23:26:42 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:54.396 23:26:42 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:05:54.396 23:26:42 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:05:54.396 I/O targets: 00:05:54.396 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:05:54.396 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:05:54.396 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:05:54.396 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:05:54.396 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:05:54.396 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:05:54.396 00:05:54.396 00:05:54.396 CUnit - A unit testing framework for C - Version 2.1-3 00:05:54.396 http://cunit.sourceforge.net/ 00:05:54.396 00:05:54.396 00:05:54.396 Suite: bdevio tests on: Nvme3n1 00:05:54.396 Test: blockdev write read block ...passed 00:05:54.396 Test: blockdev write zeroes read block ...passed 00:05:54.396 Test: blockdev write zeroes read no split ...passed 00:05:54.396 Test: blockdev write zeroes read split ...passed 00:05:54.396 Test: blockdev write zeroes read split partial ...passed 00:05:54.396 Test: blockdev reset ...[2024-09-28 23:26:42.555915] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:05:54.396 passed 00:05:54.396 Test: blockdev write read 8 blocks ...[2024-09-28 23:26:42.558957] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:05:54.396 passed 00:05:54.396 Test: blockdev write read size > 128k ...passed 00:05:54.396 Test: blockdev write read invalid size ...passed 00:05:54.396 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:54.396 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:54.396 Test: blockdev write read max offset ...passed 00:05:54.396 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:54.656 Test: blockdev writev readv 8 blocks ...passed 00:05:54.656 Test: blockdev writev readv 30 x 1block ...passed 00:05:54.656 Test: blockdev writev readv block ...passed 00:05:54.656 Test: blockdev writev readv size > 128k ...passed 00:05:54.656 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:54.656 Test: blockdev comparev and writev ...[2024-09-28 23:26:42.566505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2ac00a000 len:0x1000 00:05:54.656 [2024-09-28 23:26:42.566565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:05:54.656 passed 00:05:54.656 Test: blockdev nvme passthru rw ...passed 00:05:54.656 Test: blockdev nvme passthru vendor specific ...passed 00:05:54.656 Test: blockdev nvme admin passthru ...[2024-09-28 23:26:42.567169] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:05:54.656 [2024-09-28 23:26:42.567201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:05:54.656 passed 00:05:54.656 Test: blockdev copy ...passed 00:05:54.656 Suite: bdevio tests on: Nvme2n3 00:05:54.656 Test: blockdev write read block ...passed 00:05:54.656 Test: blockdev write zeroes read block ...passed 00:05:54.656 Test: blockdev write zeroes read no split ...passed 00:05:54.656 Test: blockdev write zeroes read split ...passed 00:05:54.656 Test: blockdev write zeroes read split partial ...passed 00:05:54.656 Test: blockdev reset ...[2024-09-28 23:26:42.641890] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:05:54.656 [2024-09-28 23:26:42.646598] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:05:54.656 passed 00:05:54.656 Test: blockdev write read 8 blocks ...passed 00:05:54.656 Test: blockdev write read size > 128k ...passed 00:05:54.656 Test: blockdev write read invalid size ...passed 00:05:54.656 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:54.656 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:54.656 Test: blockdev write read max offset ...passed 00:05:54.656 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:54.656 Test: blockdev writev readv 8 blocks ...passed 00:05:54.656 Test: blockdev writev readv 30 x 1block ...passed 00:05:54.656 Test: blockdev writev readv block ...passed 00:05:54.656 Test: blockdev writev readv size > 128k ...passed 00:05:54.656 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:54.656 Test: blockdev comparev and writev ...[2024-09-28 23:26:42.654639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x28c404000 len:0x1000 00:05:54.656 [2024-09-28 23:26:42.654682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:05:54.656 passed 00:05:54.656 Test: blockdev nvme passthru rw ...passed 00:05:54.656 Test: blockdev nvme passthru vendor specific ...[2024-09-28 23:26:42.655916] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:05:54.656 [2024-09-28 23:26:42.655941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:05:54.656 passed 00:05:54.656 Test: blockdev nvme admin passthru ...passed 00:05:54.656 Test: blockdev copy ...passed 00:05:54.656 Suite: bdevio tests on: Nvme2n2 00:05:54.656 Test: blockdev write read block ...passed 00:05:54.656 Test: blockdev write zeroes read block ...passed 00:05:54.656 Test: blockdev write zeroes read no split ...passed 00:05:54.656 Test: blockdev write zeroes read split ...passed 00:05:54.657 Test: blockdev write zeroes read split partial ...passed 00:05:54.657 Test: blockdev reset ...[2024-09-28 23:26:42.756970] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:05:54.657 [2024-09-28 23:26:42.761431] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:05:54.657 passed 00:05:54.657 Test: blockdev write read 8 blocks ...passed 00:05:54.657 Test: blockdev write read size > 128k ...passed 00:05:54.657 Test: blockdev write read invalid size ...passed 00:05:54.657 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:54.657 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:54.657 Test: blockdev write read max offset ...passed 00:05:54.657 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:54.657 Test: blockdev writev readv 8 blocks ...passed 00:05:54.657 Test: blockdev writev readv 30 x 1block ...passed 00:05:54.657 Test: blockdev writev readv block ...passed 00:05:54.657 Test: blockdev writev readv size > 128k ...passed 00:05:54.657 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:54.657 Test: blockdev comparev and writev ...[2024-09-28 23:26:42.779863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c0c3a000 len:0x1000 00:05:54.657 [2024-09-28 23:26:42.780028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:05:54.657 passed 00:05:54.657 Test: blockdev nvme passthru rw ...passed 00:05:54.657 Test: blockdev nvme passthru vendor specific ...[2024-09-28 23:26:42.782481] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1passed 00:05:54.657 Test: blockdev nvme admin passthru ... cid:190 PRP1 0x0 PRP2 0x0 00:05:54.657 [2024-09-28 23:26:42.782564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:05:54.657 passed 00:05:54.657 Test: blockdev copy ...passed 00:05:54.657 Suite: bdevio tests on: Nvme2n1 00:05:54.657 Test: blockdev write read block ...passed 00:05:54.657 Test: blockdev write zeroes read block ...passed 00:05:54.657 Test: blockdev write zeroes read no split ...passed 00:05:54.919 Test: blockdev write zeroes read split ...passed 00:05:54.919 Test: blockdev write zeroes read split partial ...passed 00:05:54.919 Test: blockdev reset ...[2024-09-28 23:26:42.844316] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:05:54.919 [2024-09-28 23:26:42.849535] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:05:54.919 passed 00:05:54.919 Test: blockdev write read 8 blocks ...passed 00:05:54.919 Test: blockdev write read size > 128k ...passed 00:05:54.919 Test: blockdev write read invalid size ...passed 00:05:54.919 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:54.919 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:54.919 Test: blockdev write read max offset ...passed 00:05:54.919 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:54.919 Test: blockdev writev readv 8 blocks ...passed 00:05:54.919 Test: blockdev writev readv 30 x 1block ...passed 00:05:54.919 Test: blockdev writev readv block ...passed 00:05:54.919 Test: blockdev writev readv size > 128k ...passed 00:05:54.919 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:54.919 Test: blockdev comparev and writev ...[2024-09-28 23:26:42.868176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c0c34000 len:0x1000 00:05:54.919 [2024-09-28 23:26:42.868405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:05:54.919 passed 00:05:54.919 Test: blockdev nvme passthru rw ...passed 00:05:54.919 Test: blockdev nvme passthru vendor specific ...[2024-09-28 23:26:42.870961] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:05:54.919 [2024-09-28 23:26:42.871050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:05:54.919 passed 00:05:54.919 Test: blockdev nvme admin passthru ...passed 00:05:54.919 Test: blockdev copy ...passed 00:05:54.919 Suite: bdevio tests on: Nvme1n1 00:05:54.919 Test: blockdev write read block ...passed 00:05:54.919 Test: blockdev write zeroes read block ...passed 00:05:54.919 Test: blockdev write zeroes read no split ...passed 00:05:54.919 Test: blockdev write zeroes read split ...passed 00:05:54.919 Test: blockdev write zeroes read split partial ...passed 00:05:54.919 Test: blockdev reset ...[2024-09-28 23:26:42.930777] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:05:54.919 [2024-09-28 23:26:42.935907] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:05:54.919 passed 00:05:54.919 Test: blockdev write read 8 blocks ...passed 00:05:54.919 Test: blockdev write read size > 128k ...passed 00:05:54.919 Test: blockdev write read invalid size ...passed 00:05:54.919 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:54.919 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:54.919 Test: blockdev write read max offset ...passed 00:05:54.919 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:54.919 Test: blockdev writev readv 8 blocks ...passed 00:05:54.919 Test: blockdev writev readv 30 x 1block ...passed 00:05:54.919 Test: blockdev writev readv block ...passed 00:05:54.919 Test: blockdev writev readv size > 128k ...passed 00:05:54.919 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:54.919 Test: blockdev comparev and writev ...[2024-09-28 23:26:42.955059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c0c30000 len:0x1000 00:05:54.919 [2024-09-28 23:26:42.955128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:05:54.919 passed 00:05:54.919 Test: blockdev nvme passthru rw ...passed 00:05:54.919 Test: blockdev nvme passthru vendor specific ...passed 00:05:54.919 Test: blockdev nvme admin passthru ...[2024-09-28 23:26:42.957908] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:05:54.919 [2024-09-28 23:26:42.957969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:05:54.919 passed 00:05:54.919 Test: blockdev copy ...passed 00:05:54.919 Suite: bdevio tests on: Nvme0n1 00:05:54.919 Test: blockdev write read block ...passed 00:05:54.919 Test: blockdev write zeroes read block ...passed 00:05:54.919 Test: blockdev write zeroes read no split ...passed 00:05:54.919 Test: blockdev write zeroes read split ...passed 00:05:54.919 Test: blockdev write zeroes read split partial ...passed 00:05:54.919 Test: blockdev reset ...[2024-09-28 23:26:43.021499] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:05:54.919 [2024-09-28 23:26:43.026186] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:05:54.919 passed 00:05:54.919 Test: blockdev write read 8 blocks ...passed 00:05:54.919 Test: blockdev write read size > 128k ...passed 00:05:54.919 Test: blockdev write read invalid size ...passed 00:05:54.919 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:54.919 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:54.919 Test: blockdev write read max offset ...passed 00:05:54.919 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:54.919 Test: blockdev writev readv 8 blocks ...passed 00:05:54.919 Test: blockdev writev readv 30 x 1block ...passed 00:05:54.919 Test: blockdev writev readv block ...passed 00:05:54.919 Test: blockdev writev readv size > 128k ...passed 00:05:54.919 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:54.919 Test: blockdev comparev and writev ...passed 00:05:54.919 Test: blockdev nvme passthru rw ...[2024-09-28 23:26:43.043540] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:05:54.919 separate metadata which is not supported yet. 00:05:54.919 passed 00:05:54.919 Test: blockdev nvme passthru vendor specific ...passed 00:05:54.919 Test: blockdev nvme admin passthru ...[2024-09-28 23:26:43.045113] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:05:54.919 [2024-09-28 23:26:43.045180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:05:54.919 passed 00:05:54.919 Test: blockdev copy ...passed 00:05:54.919 00:05:54.919 Run Summary: Type Total Ran Passed Failed Inactive 00:05:54.919 suites 6 6 n/a 0 0 00:05:54.919 tests 138 138 138 0 0 00:05:54.919 asserts 893 893 893 0 n/a 00:05:54.919 00:05:54.919 Elapsed time = 1.348 seconds 00:05:54.919 0 00:05:54.919 23:26:43 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 60372 00:05:54.919 23:26:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 60372 ']' 00:05:54.919 23:26:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 60372 00:05:54.919 23:26:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:05:54.919 23:26:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:54.919 23:26:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60372 00:05:55.180 23:26:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:55.180 23:26:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:55.180 23:26:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60372' 00:05:55.180 killing process with pid 60372 00:05:55.180 23:26:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@969 -- # kill 60372 00:05:55.180 23:26:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@974 -- # wait 60372 00:05:55.751 23:26:43 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:05:55.751 00:05:55.751 real 0m2.431s 00:05:55.751 user 0m5.780s 00:05:55.751 sys 0m0.312s 00:05:55.751 23:26:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:55.751 23:26:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:05:55.751 ************************************ 00:05:55.751 END 
TEST bdev_bounds 00:05:55.751 ************************************ 00:05:56.011 23:26:43 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:05:56.011 23:26:43 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:05:56.011 23:26:43 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:56.011 23:26:43 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:56.011 ************************************ 00:05:56.011 START TEST bdev_nbd 00:05:56.011 ************************************ 00:05:56.011 23:26:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:05:56.011 23:26:43 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:05:56.011 23:26:43 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:05:56.011 23:26:43 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:56.011 23:26:43 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:05:56.011 23:26:43 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:05:56.011 23:26:43 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:05:56.011 23:26:43 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:05:56.011 23:26:43 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:05:56.011 23:26:43 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:05:56.011 23:26:43 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:05:56.011 23:26:43 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:05:56.011 23:26:43 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:05:56.011 23:26:43 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:05:56.011 23:26:43 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:05:56.011 23:26:43 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:05:56.011 23:26:43 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=60431 00:05:56.011 23:26:43 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:05:56.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
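bdev_nbd runs its helper app on a private RPC socket so the NBD RPCs stay separate from the default /var/tmp/spdk.sock; the launch, reconstructed from the trace (paths, flags, and trap as logged):

    bdev_svc=/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc
    nbd_sock=/var/tmp/spdk-nbd.sock
    conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json

    # -r: RPC listen address; -i 0: shared-memory instance id, which also
    # yields the spdk0 file prefix visible in the EAL parameters above.
    "$bdev_svc" -r "$nbd_sock" -i 0 --json "$conf" &
    nbd_pid=$!
    trap 'cleanup; killprocess "$nbd_pid"' SIGINT SIGTERM EXIT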
00:05:56.011 23:26:43 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 60431 /var/tmp/spdk-nbd.sock 00:05:56.011 23:26:43 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:05:56.011 23:26:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 60431 ']' 00:05:56.011 23:26:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:56.011 23:26:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:56.011 23:26:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:56.011 23:26:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:56.011 23:26:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:05:56.011 [2024-09-28 23:26:44.061691] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:05:56.011 [2024-09-28 23:26:44.061832] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:56.271 [2024-09-28 23:26:44.212280] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.551 [2024-09-28 23:26:44.472972] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.120 23:26:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:57.120 23:26:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:05:57.120 23:26:45 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:05:57.120 23:26:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:57.120 23:26:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:05:57.120 23:26:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:05:57.120 23:26:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:05:57.120 23:26:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:57.120 23:26:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:05:57.120 23:26:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:05:57.120 23:26:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:05:57.120 23:26:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:05:57.120 23:26:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:05:57.120 23:26:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:05:57.120 23:26:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:05:57.379 23:26:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:05:57.379 23:26:45 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:05:57.379 23:26:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:05:57.379 23:26:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:57.379 23:26:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:05:57.379 23:26:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:57.379 23:26:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:57.379 23:26:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:57.379 23:26:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:05:57.379 23:26:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:57.379 23:26:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:57.379 23:26:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:57.379 1+0 records in 00:05:57.379 1+0 records out 00:05:57.379 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00137915 s, 3.0 MB/s 00:05:57.379 23:26:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:57.379 23:26:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:05:57.379 23:26:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:57.379 23:26:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:57.379 23:26:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:05:57.379 23:26:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:05:57.379 23:26:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:05:57.379 23:26:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:05:57.639 23:26:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:05:57.639 23:26:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:05:57.639 23:26:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:05:57.639 23:26:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:57.639 23:26:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:05:57.639 23:26:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:57.639 23:26:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:57.639 23:26:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:57.639 23:26:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:05:57.639 23:26:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:57.639 23:26:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:57.639 23:26:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:57.639 1+0 records in 00:05:57.639 1+0 records out 00:05:57.639 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00116079 s, 3.5 MB/s 00:05:57.639 23:26:45 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:57.639 23:26:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:05:57.639 23:26:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:57.639 23:26:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:57.639 23:26:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:05:57.639 23:26:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:05:57.639 23:26:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:05:57.639 23:26:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:05:57.900 23:26:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:05:57.900 23:26:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:05:57.900 23:26:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:05:57.900 23:26:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:05:57.900 23:26:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:05:57.900 23:26:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:57.900 23:26:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:57.900 23:26:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:05:57.900 23:26:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:05:57.900 23:26:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:57.900 23:26:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:57.900 23:26:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:57.900 1+0 records in 00:05:57.900 1+0 records out 00:05:57.900 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0015335 s, 2.7 MB/s 00:05:57.900 23:26:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:57.900 23:26:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:05:57.900 23:26:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:57.900 23:26:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:57.900 23:26:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:05:57.900 23:26:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:05:57.900 23:26:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:05:57.900 23:26:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:05:58.161 23:26:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:05:58.161 23:26:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:05:58.161 23:26:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:05:58.161 23:26:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 
00:05:58.161 23:26:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:05:58.161 23:26:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:58.161 23:26:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:58.161 23:26:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:05:58.161 23:26:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:05:58.161 23:26:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:58.161 23:26:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:58.161 23:26:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:58.161 1+0 records in 00:05:58.161 1+0 records out 00:05:58.161 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00107319 s, 3.8 MB/s 00:05:58.161 23:26:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:58.161 23:26:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:05:58.161 23:26:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:58.161 23:26:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:58.161 23:26:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:05:58.161 23:26:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:05:58.161 23:26:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:05:58.161 23:26:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:05:58.421 23:26:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:05:58.421 23:26:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:05:58.421 23:26:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:05:58.421 23:26:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:05:58.421 23:26:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:05:58.421 23:26:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:58.421 23:26:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:58.421 23:26:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:05:58.421 23:26:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:05:58.421 23:26:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:58.421 23:26:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:58.421 23:26:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:58.421 1+0 records in 00:05:58.421 1+0 records out 00:05:58.421 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00108614 s, 3.8 MB/s 00:05:58.421 23:26:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:58.421 23:26:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:05:58.421 23:26:46 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:58.421 23:26:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:58.421 23:26:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:05:58.421 23:26:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:05:58.421 23:26:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:05:58.421 23:26:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:05:58.681 23:26:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:05:58.681 23:26:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:05:58.681 23:26:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:05:58.681 23:26:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:05:58.681 23:26:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:05:58.681 23:26:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:58.681 23:26:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:58.681 23:26:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:05:58.681 23:26:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:05:58.681 23:26:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:58.681 23:26:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:58.681 23:26:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:58.681 1+0 records in 00:05:58.681 1+0 records out 00:05:58.681 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000562889 s, 7.3 MB/s 00:05:58.681 23:26:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:58.681 23:26:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:05:58.681 23:26:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:58.681 23:26:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:58.681 23:26:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:05:58.681 23:26:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:05:58.681 23:26:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:05:58.681 23:26:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:58.941 23:26:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:05:58.941 { 00:05:58.941 "nbd_device": "/dev/nbd0", 00:05:58.941 "bdev_name": "Nvme0n1" 00:05:58.941 }, 00:05:58.941 { 00:05:58.941 "nbd_device": "/dev/nbd1", 00:05:58.941 "bdev_name": "Nvme1n1" 00:05:58.941 }, 00:05:58.941 { 00:05:58.941 "nbd_device": "/dev/nbd2", 00:05:58.941 "bdev_name": "Nvme2n1" 00:05:58.941 }, 00:05:58.941 { 00:05:58.941 "nbd_device": "/dev/nbd3", 00:05:58.941 "bdev_name": "Nvme2n2" 00:05:58.941 }, 00:05:58.941 { 00:05:58.941 "nbd_device": "/dev/nbd4", 00:05:58.941 "bdev_name": "Nvme2n3" 00:05:58.941 }, 00:05:58.941 { 00:05:58.941 
"nbd_device": "/dev/nbd5", 00:05:58.941 "bdev_name": "Nvme3n1" 00:05:58.941 } 00:05:58.941 ]' 00:05:58.941 23:26:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:05:58.941 23:26:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:05:58.941 { 00:05:58.941 "nbd_device": "/dev/nbd0", 00:05:58.941 "bdev_name": "Nvme0n1" 00:05:58.941 }, 00:05:58.941 { 00:05:58.941 "nbd_device": "/dev/nbd1", 00:05:58.941 "bdev_name": "Nvme1n1" 00:05:58.941 }, 00:05:58.941 { 00:05:58.941 "nbd_device": "/dev/nbd2", 00:05:58.941 "bdev_name": "Nvme2n1" 00:05:58.941 }, 00:05:58.941 { 00:05:58.941 "nbd_device": "/dev/nbd3", 00:05:58.941 "bdev_name": "Nvme2n2" 00:05:58.941 }, 00:05:58.941 { 00:05:58.941 "nbd_device": "/dev/nbd4", 00:05:58.941 "bdev_name": "Nvme2n3" 00:05:58.941 }, 00:05:58.941 { 00:05:58.941 "nbd_device": "/dev/nbd5", 00:05:58.941 "bdev_name": "Nvme3n1" 00:05:58.941 } 00:05:58.941 ]' 00:05:58.941 23:26:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:05:58.941 23:26:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:05:58.941 23:26:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:58.941 23:26:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:05:58.941 23:26:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:58.941 23:26:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:05:58.941 23:26:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:58.941 23:26:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:59.202 23:26:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:59.202 23:26:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:59.202 23:26:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:59.202 23:26:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:59.202 23:26:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:59.202 23:26:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:59.202 23:26:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:59.202 23:26:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:59.202 23:26:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:59.202 23:26:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:59.202 23:26:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:59.202 23:26:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:59.202 23:26:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:59.202 23:26:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:59.202 23:26:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:59.202 23:26:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 
00:05:59.202 23:26:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:59.202 23:26:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:59.202 23:26:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:59.202 23:26:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:05:59.462 23:26:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:05:59.462 23:26:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:05:59.462 23:26:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:05:59.462 23:26:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:59.462 23:26:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:59.462 23:26:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:05:59.462 23:26:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:59.462 23:26:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:59.462 23:26:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:59.462 23:26:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:05:59.722 23:26:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:05:59.722 23:26:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:05:59.722 23:26:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:05:59.722 23:26:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:59.722 23:26:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:59.722 23:26:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:05:59.722 23:26:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:59.722 23:26:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:59.722 23:26:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:59.722 23:26:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:05:59.982 23:26:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:05:59.982 23:26:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:05:59.982 23:26:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:05:59.982 23:26:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:59.982 23:26:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:59.982 23:26:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:05:59.982 23:26:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:59.982 23:26:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:59.983 23:26:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:59.983 23:26:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:06:00.240 23:26:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:06:00.240 
23:26:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:06:00.240 23:26:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:06:00.240 23:26:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:00.240 23:26:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:00.240 23:26:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:06:00.240 23:26:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:00.240 23:26:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:00.240 23:26:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:00.240 23:26:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.240 23:26:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:00.240 23:26:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:00.240 23:26:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:00.240 23:26:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:00.499 23:26:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:00.499 23:26:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:00.499 23:26:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:00.499 23:26:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:00.499 23:26:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:00.499 23:26:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:00.499 23:26:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:06:00.499 23:26:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:06:00.499 23:26:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:06:00.499 23:26:48 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:00.499 23:26:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.499 23:26:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:00.499 23:26:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:00.499 23:26:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:00.499 23:26:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:00.499 23:26:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:00.499 23:26:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.499 23:26:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:00.499 23:26:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:00.499 23:26:48 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:00.499 23:26:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:00.499 23:26:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:06:00.499 23:26:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:00.499 23:26:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:00.499 23:26:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:06:00.499 /dev/nbd0 00:06:00.499 23:26:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:00.499 23:26:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:00.499 23:26:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:00.499 23:26:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:00.500 23:26:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:00.500 23:26:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:00.500 23:26:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:00.500 23:26:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:00.500 23:26:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:00.500 23:26:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:00.500 23:26:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:00.500 1+0 records in 00:06:00.500 1+0 records out 00:06:00.500 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00103459 s, 4.0 MB/s 00:06:00.500 23:26:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:00.758 23:26:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:00.758 23:26:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:00.758 23:26:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:00.758 23:26:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:00.758 23:26:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:00.758 23:26:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:00.758 23:26:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:06:00.758 /dev/nbd1 00:06:00.758 23:26:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:00.758 23:26:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:00.758 23:26:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:00.758 23:26:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:00.758 23:26:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:00.758 23:26:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:00.758 23:26:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 
/proc/partitions 00:06:00.758 23:26:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:00.758 23:26:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:00.758 23:26:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:00.758 23:26:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:00.758 1+0 records in 00:06:00.758 1+0 records out 00:06:00.758 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00095115 s, 4.3 MB/s 00:06:00.758 23:26:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:00.758 23:26:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:00.758 23:26:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:00.758 23:26:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:00.758 23:26:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:00.758 23:26:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:00.758 23:26:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:00.758 23:26:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:06:01.021 /dev/nbd10 00:06:01.021 23:26:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:06:01.021 23:26:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:06:01.021 23:26:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:06:01.021 23:26:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:01.021 23:26:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:01.021 23:26:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:01.021 23:26:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:06:01.021 23:26:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:01.021 23:26:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:01.021 23:26:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:01.021 23:26:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:01.021 1+0 records in 00:06:01.021 1+0 records out 00:06:01.021 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00104558 s, 3.9 MB/s 00:06:01.021 23:26:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:01.021 23:26:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:01.021 23:26:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:01.021 23:26:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:01.021 23:26:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:01.021 23:26:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:01.021 23:26:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( 
i < 6 )) 00:06:01.021 23:26:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:06:01.279 /dev/nbd11 00:06:01.279 23:26:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:06:01.279 23:26:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:06:01.279 23:26:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:06:01.279 23:26:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:01.279 23:26:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:01.279 23:26:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:01.279 23:26:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:06:01.279 23:26:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:01.279 23:26:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:01.279 23:26:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:01.279 23:26:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:01.279 1+0 records in 00:06:01.279 1+0 records out 00:06:01.279 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000355212 s, 11.5 MB/s 00:06:01.279 23:26:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:01.279 23:26:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:01.279 23:26:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:01.279 23:26:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:01.279 23:26:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:01.279 23:26:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:01.279 23:26:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:01.279 23:26:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:06:01.537 /dev/nbd12 00:06:01.537 23:26:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:06:01.537 23:26:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:06:01.537 23:26:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:06:01.537 23:26:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:01.537 23:26:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:01.537 23:26:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:01.537 23:26:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 00:06:01.537 23:26:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:01.537 23:26:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:01.537 23:26:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:01.537 23:26:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:01.537 1+0 records in 00:06:01.537 1+0 records out 00:06:01.537 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000894142 s, 4.6 MB/s 00:06:01.537 23:26:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:01.537 23:26:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:01.537 23:26:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:01.537 23:26:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:01.537 23:26:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:01.537 23:26:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:01.537 23:26:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:01.537 23:26:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:06:01.795 /dev/nbd13 00:06:01.795 23:26:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:06:01.795 23:26:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:06:01.795 23:26:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:06:01.795 23:26:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:01.795 23:26:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:01.795 23:26:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:01.795 23:26:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:06:01.795 23:26:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:01.795 23:26:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:01.795 23:26:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:01.795 23:26:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:01.795 1+0 records in 00:06:01.795 1+0 records out 00:06:01.795 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000647199 s, 6.3 MB/s 00:06:01.795 23:26:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:01.795 23:26:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:01.795 23:26:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:01.795 23:26:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:01.795 23:26:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:01.795 23:26:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:01.795 23:26:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:01.795 23:26:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:01.795 23:26:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.795 23:26:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
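The block of traces above is the setup half of nbd_rpc_data_verify: six NVMe bdevs are exported as kernel /dev/nbd* nodes through the dedicated RPC socket, each node is polled into existence via /proc/partitions, and then probed with one O_DIRECT read before use. Stripped of the xtrace noise, the loop reduces to the sketch below; the paths and RPC names are the ones in the trace, while the scratch file location and the 0.1 s retry sleep are illustrative assumptions (the trace only shows the grep/break steps). The mapping that nbd_get_disks reports next confirms the pairing.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    bdevs=(Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1)
    nbds=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)
    for i in "${!bdevs[@]}"; do
        # Export bdev i over the NBD kernel driver via the dedicated socket.
        "$rpc" -s "$sock" nbd_start_disk "${bdevs[$i]}" "${nbds[$i]}"
        # Wait (up to 20 tries) for the node to appear in /proc/partitions.
        for ((try = 1; try <= 20; try++)); do
            grep -q -w "$(basename "${nbds[$i]}")" /proc/partitions && break
            sleep 0.1
        done
        # One direct 4 KiB read proves the mapping actually serves I/O.
        dd if="${nbds[$i]}" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
        [[ $(stat -c %s /tmp/nbdtest) != 0 ]]
        rm -f /tmp/nbdtest
    done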
00:06:01.795 23:26:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:01.795 { 00:06:01.795 "nbd_device": "/dev/nbd0", 00:06:01.795 "bdev_name": "Nvme0n1" 00:06:01.795 }, 00:06:01.795 { 00:06:01.795 "nbd_device": "/dev/nbd1", 00:06:01.795 "bdev_name": "Nvme1n1" 00:06:01.795 }, 00:06:01.795 { 00:06:01.795 "nbd_device": "/dev/nbd10", 00:06:01.795 "bdev_name": "Nvme2n1" 00:06:01.795 }, 00:06:01.795 { 00:06:01.795 "nbd_device": "/dev/nbd11", 00:06:01.795 "bdev_name": "Nvme2n2" 00:06:01.795 }, 00:06:01.795 { 00:06:01.795 "nbd_device": "/dev/nbd12", 00:06:01.795 "bdev_name": "Nvme2n3" 00:06:01.795 }, 00:06:01.795 { 00:06:01.795 "nbd_device": "/dev/nbd13", 00:06:01.795 "bdev_name": "Nvme3n1" 00:06:01.795 } 00:06:01.795 ]' 00:06:01.795 23:26:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:01.795 { 00:06:01.796 "nbd_device": "/dev/nbd0", 00:06:01.796 "bdev_name": "Nvme0n1" 00:06:01.796 }, 00:06:01.796 { 00:06:01.796 "nbd_device": "/dev/nbd1", 00:06:01.796 "bdev_name": "Nvme1n1" 00:06:01.796 }, 00:06:01.796 { 00:06:01.796 "nbd_device": "/dev/nbd10", 00:06:01.796 "bdev_name": "Nvme2n1" 00:06:01.796 }, 00:06:01.796 { 00:06:01.796 "nbd_device": "/dev/nbd11", 00:06:01.796 "bdev_name": "Nvme2n2" 00:06:01.796 }, 00:06:01.796 { 00:06:01.796 "nbd_device": "/dev/nbd12", 00:06:01.796 "bdev_name": "Nvme2n3" 00:06:01.796 }, 00:06:01.796 { 00:06:01.796 "nbd_device": "/dev/nbd13", 00:06:01.796 "bdev_name": "Nvme3n1" 00:06:01.796 } 00:06:01.796 ]' 00:06:01.796 23:26:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:02.056 23:26:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:02.056 /dev/nbd1 00:06:02.056 /dev/nbd10 00:06:02.056 /dev/nbd11 00:06:02.056 /dev/nbd12 00:06:02.056 /dev/nbd13' 00:06:02.056 23:26:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:02.056 /dev/nbd1 00:06:02.056 /dev/nbd10 00:06:02.056 /dev/nbd11 00:06:02.056 /dev/nbd12 00:06:02.056 /dev/nbd13' 00:06:02.056 23:26:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:02.056 23:26:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:06:02.056 23:26:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:06:02.056 23:26:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:06:02.056 23:26:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:06:02.056 23:26:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:06:02.056 23:26:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:02.056 23:26:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:02.056 23:26:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:02.056 23:26:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:02.056 23:26:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:02.056 23:26:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:06:02.056 256+0 records in 00:06:02.056 256+0 records out 00:06:02.056 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00520428 s, 201 MB/s 00:06:02.056 23:26:50 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:02.056 23:26:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:02.056 256+0 records in 00:06:02.056 256+0 records out 00:06:02.056 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.183702 s, 5.7 MB/s 00:06:02.056 23:26:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:02.056 23:26:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:02.315 256+0 records in 00:06:02.315 256+0 records out 00:06:02.315 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.200964 s, 5.2 MB/s 00:06:02.315 23:26:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:02.315 23:26:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:06:02.575 256+0 records in 00:06:02.575 256+0 records out 00:06:02.575 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.232289 s, 4.5 MB/s 00:06:02.575 23:26:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:02.575 23:26:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:06:02.835 256+0 records in 00:06:02.835 256+0 records out 00:06:02.835 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.237739 s, 4.4 MB/s 00:06:02.835 23:26:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:02.835 23:26:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:06:03.096 256+0 records in 00:06:03.096 256+0 records out 00:06:03.096 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.208425 s, 5.0 MB/s 00:06:03.096 23:26:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:03.096 23:26:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:06:03.355 256+0 records in 00:06:03.355 256+0 records out 00:06:03.355 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.235876 s, 4.4 MB/s 00:06:03.355 23:26:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:06:03.355 23:26:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:03.355 23:26:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:03.355 23:26:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:03.355 23:26:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:03.355 23:26:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:03.355 23:26:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:03.355 23:26:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:03.355 23:26:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 
/dev/nbd0 00:06:03.355 23:26:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:03.355 23:26:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:06:03.355 23:26:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:03.355 23:26:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:06:03.355 23:26:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:03.355 23:26:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:06:03.355 23:26:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:03.355 23:26:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:06:03.355 23:26:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:03.355 23:26:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:06:03.355 23:26:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:03.355 23:26:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:03.355 23:26:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.355 23:26:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:03.355 23:26:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:03.355 23:26:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:03.355 23:26:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:03.355 23:26:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:03.614 23:26:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:03.614 23:26:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:03.614 23:26:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:03.614 23:26:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:03.614 23:26:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:03.614 23:26:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:03.614 23:26:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:03.614 23:26:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:03.614 23:26:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:03.614 23:26:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:03.872 23:26:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:03.872 23:26:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:03.872 23:26:51 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:03.872 23:26:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:03.872 23:26:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:03.872 23:26:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:03.872 23:26:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:03.872 23:26:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:03.872 23:26:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:03.872 23:26:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:06:03.872 23:26:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:06:03.872 23:26:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:06:03.872 23:26:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:06:03.872 23:26:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:03.872 23:26:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:03.872 23:26:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:06:04.130 23:26:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:04.130 23:26:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:04.130 23:26:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:04.130 23:26:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:06:04.130 23:26:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:06:04.130 23:26:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:06:04.130 23:26:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:06:04.130 23:26:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:04.130 23:26:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:04.130 23:26:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:06:04.130 23:26:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:04.130 23:26:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:04.130 23:26:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:04.130 23:26:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:06:04.389 23:26:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:06:04.389 23:26:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:06:04.389 23:26:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:06:04.389 23:26:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:04.389 23:26:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:04.389 23:26:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:06:04.389 23:26:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:04.389 23:26:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:04.389 23:26:52 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:04.389 23:26:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:06:04.647 23:26:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:06:04.647 23:26:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:06:04.647 23:26:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:06:04.647 23:26:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:04.647 23:26:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:04.647 23:26:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:06:04.647 23:26:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:04.647 23:26:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:04.647 23:26:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:04.647 23:26:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:04.647 23:26:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:04.906 23:26:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:04.906 23:26:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:04.906 23:26:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:04.906 23:26:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:04.906 23:26:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:04.906 23:26:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:04.906 23:26:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:04.906 23:26:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:04.906 23:26:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:04.906 23:26:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:06:04.906 23:26:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:04.906 23:26:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:06:04.906 23:26:52 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:04.906 23:26:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:04.906 23:26:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:06:04.906 23:26:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:06:05.164 malloc_lvol_verify 00:06:05.164 23:26:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:06:05.423 82383361-c03f-40f3-8d15-e7c873449501 00:06:05.423 23:26:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:06:05.423 aadca6a7-dd8f-45ce-923a-db9c011b221b 00:06:05.423 23:26:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:06:05.681 /dev/nbd0 00:06:05.681 23:26:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:06:05.681 23:26:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:06:05.681 23:26:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:06:05.681 23:26:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:06:05.681 23:26:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:06:05.681 mke2fs 1.47.0 (5-Feb-2023) 00:06:05.681 Discarding device blocks: 0/4096 done 00:06:05.681 Creating filesystem with 4096 1k blocks and 1024 inodes 00:06:05.681 00:06:05.681 Allocating group tables: 0/1 done 00:06:05.682 Writing inode tables: 0/1 done 00:06:05.682 Creating journal (1024 blocks): done 00:06:05.682 Writing superblocks and filesystem accounting information: 0/1 done 00:06:05.682 00:06:05.682 23:26:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:05.682 23:26:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.682 23:26:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:05.682 23:26:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:05.682 23:26:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:05.682 23:26:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:05.682 23:26:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:05.940 23:26:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:05.940 23:26:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:05.940 23:26:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:05.940 23:26:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:05.940 23:26:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:05.940 23:26:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:05.940 23:26:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:05.940 23:26:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:05.940 23:26:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 60431 00:06:05.940 23:26:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 60431 ']' 00:06:05.940 23:26:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 60431 00:06:05.940 23:26:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:06:05.940 23:26:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:05.940 23:26:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60431 00:06:05.940 23:26:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:05.940 23:26:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:05.940 killing process with pid 60431 00:06:05.940 23:26:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60431' 00:06:05.940 23:26:54 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@969 -- # kill 60431 00:06:05.940 23:26:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@974 -- # wait 60431 00:06:06.875 23:26:54 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:06:06.875 00:06:06.875 real 0m10.777s 00:06:06.875 user 0m14.646s 00:06:06.875 sys 0m3.420s 00:06:06.875 23:26:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:06.875 23:26:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:06.875 ************************************ 00:06:06.875 END TEST bdev_nbd 00:06:06.875 ************************************ 00:06:06.875 23:26:54 blockdev_nvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:06:06.875 23:26:54 blockdev_nvme -- bdev/blockdev.sh@763 -- # '[' nvme = nvme ']' 00:06:06.875 skipping fio tests on NVMe due to multi-ns failures. 00:06:06.875 23:26:54 blockdev_nvme -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:06:06.875 23:26:54 blockdev_nvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:06.875 23:26:54 blockdev_nvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:06.875 23:26:54 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:06:06.875 23:26:54 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:06.875 23:26:54 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:06.875 ************************************ 00:06:06.875 START TEST bdev_verify 00:06:06.875 ************************************ 00:06:06.875 23:26:54 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:06.875 [2024-09-28 23:26:54.875120] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:06:06.875 [2024-09-28 23:26:54.875221] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60816 ] 00:06:06.875 [2024-09-28 23:26:55.019645] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:07.134 [2024-09-28 23:26:55.202315] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:07.134 [2024-09-28 23:26:55.202367] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.700 Running I/O for 5 seconds... 
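bdevperf now drives a 5-second data-integrity run against the same six bdevs. The flags on the command line above decode as follows; -q/-o/-w/-t/-m are the standard bdevperf options, and the reading of -C is inferred from the per-device job pairs on core masks 0x1 and 0x2 in the table that follows.

    # Annotated form of the invocation above:
    #   --json <file>  bdev configuration to load
    #   -q 128         queue depth per job
    #   -o 4096        I/O size in bytes
    #   -w verify      write, read back, and check the data
    #   -t 5           run time in seconds
    #   -C             let every core in the mask drive each bdev
    #   -m 0x3         core mask: cores 0 and 1
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3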
00:06:12.879 20864.00 IOPS, 81.50 MiB/s 21312.00 IOPS, 83.25 MiB/s 22080.00 IOPS, 86.25 MiB/s 22000.00 IOPS, 85.94 MiB/s 22028.80 IOPS, 86.05 MiB/s 00:06:12.879 Latency(us) 00:06:12.879 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:12.879 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:12.879 Verification LBA range: start 0x0 length 0xbd0bd 00:06:12.879 Nvme0n1 : 5.06 1996.66 7.80 0.00 0.00 63940.70 12653.49 70173.93 00:06:12.879 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:12.879 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:06:12.879 Nvme0n1 : 5.06 1620.18 6.33 0.00 0.00 78767.46 17039.36 92758.65 00:06:12.879 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:12.879 Verification LBA range: start 0x0 length 0xa0000 00:06:12.879 Nvme1n1 : 5.07 1996.07 7.80 0.00 0.00 63876.52 14720.39 65334.35 00:06:12.879 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:12.879 Verification LBA range: start 0xa0000 length 0xa0000 00:06:12.879 Nvme1n1 : 5.06 1619.58 6.33 0.00 0.00 78455.25 20870.70 74610.22 00:06:12.879 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:12.879 Verification LBA range: start 0x0 length 0x80000 00:06:12.879 Nvme2n1 : 5.07 1995.52 7.79 0.00 0.00 63767.16 17039.36 57671.68 00:06:12.879 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:12.879 Verification LBA range: start 0x80000 length 0x80000 00:06:12.879 Nvme2n1 : 5.06 1619.08 6.32 0.00 0.00 78254.86 19559.98 74206.92 00:06:12.879 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:12.879 Verification LBA range: start 0x0 length 0x80000 00:06:12.879 Nvme2n2 : 5.07 1994.89 7.79 0.00 0.00 63568.30 17039.36 56461.78 00:06:12.879 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:12.879 Verification LBA range: start 0x80000 length 0x80000 00:06:12.879 Nvme2n2 : 5.07 1627.01 6.36 0.00 0.00 77713.36 4537.11 71787.13 00:06:12.879 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:12.879 Verification LBA range: start 0x0 length 0x80000 00:06:12.879 Nvme2n3 : 5.07 1994.25 7.79 0.00 0.00 63466.37 16031.11 58478.28 00:06:12.879 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:12.879 Verification LBA range: start 0x80000 length 0x80000 00:06:12.879 Nvme2n3 : 5.09 1635.54 6.39 0.00 0.00 77267.01 9830.40 67754.14 00:06:12.879 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:12.879 Verification LBA range: start 0x0 length 0x20000 00:06:12.879 Nvme3n1 : 5.08 2002.84 7.82 0.00 0.00 63115.82 3932.16 60494.77 00:06:12.879 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:12.879 Verification LBA range: start 0x20000 length 0x20000 00:06:12.879 Nvme3n1 : 5.09 1635.11 6.39 0.00 0.00 77208.55 9225.45 74610.22 00:06:12.879 =================================================================================================================== 00:06:12.879 Total : 21736.73 84.91 0.00 0.00 70049.48 3932.16 92758.65 00:06:14.265 00:06:14.265 real 0m7.484s 00:06:14.265 user 0m13.731s 00:06:14.265 sys 0m0.267s 00:06:14.265 23:27:02 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:14.265 ************************************ 00:06:14.265 END TEST bdev_verify 00:06:14.265 ************************************ 00:06:14.265 23:27:02 
blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:06:14.265 23:27:02 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:14.265 23:27:02 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:06:14.265 23:27:02 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:14.265 23:27:02 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:14.265 ************************************ 00:06:14.265 START TEST bdev_verify_big_io 00:06:14.265 ************************************ 00:06:14.265 23:27:02 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:14.525 [2024-09-28 23:27:02.457294] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:06:14.525 [2024-09-28 23:27:02.457449] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60916 ] 00:06:14.525 [2024-09-28 23:27:02.611298] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:14.785 [2024-09-28 23:27:02.902066] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:14.785 [2024-09-28 23:27:02.902184] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.726 Running I/O for 5 seconds... 00:06:22.329 589.00 IOPS, 36.81 MiB/s 1898.00 IOPS, 118.62 MiB/s 2471.67 IOPS, 154.48 MiB/s 00:06:22.329 Latency(us) 00:06:22.329 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:22.329 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:22.329 Verification LBA range: start 0x0 length 0xbd0b 00:06:22.329 Nvme0n1 : 5.78 130.68 8.17 0.00 0.00 941711.24 17341.83 1193763.45 00:06:22.329 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:22.329 Verification LBA range: start 0xbd0b length 0xbd0b 00:06:22.329 Nvme0n1 : 6.07 84.31 5.27 0.00 0.00 1452705.38 15526.99 1897115.96 00:06:22.329 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:22.329 Verification LBA range: start 0x0 length 0xa000 00:06:22.329 Nvme1n1 : 5.79 129.07 8.07 0.00 0.00 913115.50 74610.22 993727.41 00:06:22.329 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:22.329 Verification LBA range: start 0xa000 length 0xa000 00:06:22.329 Nvme1n1 : 6.07 81.49 5.09 0.00 0.00 1387154.02 96388.33 1535760.54 00:06:22.329 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:22.329 Verification LBA range: start 0x0 length 0x8000 00:06:22.329 Nvme2n1 : 5.79 132.65 8.29 0.00 0.00 867487.64 136314.88 838860.80 00:06:22.329 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:22.329 Verification LBA range: start 0x8000 length 0x8000 00:06:22.329 Nvme2n1 : 6.11 87.33 5.46 0.00 0.00 1229113.05 37708.41 1355082.83 00:06:22.330 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:22.330 Verification LBA range: start 0x0 length 0x8000 00:06:22.330 Nvme2n2 : 5.92 140.52 8.78 0.00 0.00 794793.90 
49202.41 909841.33 00:06:22.330 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:22.330 Verification LBA range: start 0x8000 length 0x8000 00:06:22.330 Nvme2n2 : 6.17 103.70 6.48 0.00 0.00 991594.18 23996.26 1284102.30 00:06:22.330 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:22.330 Verification LBA range: start 0x0 length 0x8000 00:06:22.330 Nvme2n3 : 5.99 149.63 9.35 0.00 0.00 723314.22 45572.73 980821.86 00:06:22.330 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:22.330 Verification LBA range: start 0x8000 length 0x8000 00:06:22.330 Nvme2n3 : 6.32 151.78 9.49 0.00 0.00 650814.08 5066.44 1316366.18 00:06:22.330 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:22.330 Verification LBA range: start 0x0 length 0x2000 00:06:22.330 Nvme3n1 : 6.07 168.59 10.54 0.00 0.00 621747.03 523.03 1051802.39 00:06:22.330 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:22.330 Verification LBA range: start 0x2000 length 0x2000 00:06:22.330 Nvme3n1 : 6.60 310.48 19.41 0.00 0.00 302784.98 403.30 1342177.28 00:06:22.330 =================================================================================================================== 00:06:22.330 Total : 1670.26 104.39 0.00 0.00 780705.71 403.30 1897115.96 00:06:23.741 00:06:23.741 real 0m9.361s 00:06:23.741 user 0m17.405s 00:06:23.741 sys 0m0.339s 00:06:23.741 23:27:11 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:23.741 23:27:11 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:06:23.741 ************************************ 00:06:23.741 END TEST bdev_verify_big_io 00:06:23.741 ************************************ 00:06:23.741 23:27:11 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:23.741 23:27:11 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:06:23.741 23:27:11 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:23.741 23:27:11 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:23.741 ************************************ 00:06:23.741 START TEST bdev_write_zeroes 00:06:23.741 ************************************ 00:06:23.742 23:27:11 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:23.742 [2024-09-28 23:27:11.859206] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:06:23.742 [2024-09-28 23:27:11.859302] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61039 ] 00:06:24.000 [2024-09-28 23:27:11.999676] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.258 [2024-09-28 23:27:12.177322] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.829 Running I/O for 1 seconds... 
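A quick consistency check on these result tables: the MiB/s column is just IOPS times the I/O size. For the 4 KiB verify run, 21736.73 IOPS x 4096 B / 2^20 ≈ 84.9 MiB/s, and for the 64 KiB big-I/O run above, 1670.26 IOPS x 65536 B / 2^20 = 104.39 MiB/s — both match the reported totals. For example:

    echo "scale=2; 1670.26 * 65536 / 1048576" | bc   # prints 104.39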
00:06:25.768 60288.00 IOPS, 235.50 MiB/s 00:06:25.768 Latency(us) 00:06:25.768 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:25.768 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:25.768 Nvme0n1 : 1.02 10040.09 39.22 0.00 0.00 12718.45 4940.41 27424.30 00:06:25.768 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:25.768 Nvme1n1 : 1.02 10028.59 39.17 0.00 0.00 12719.62 9023.80 20366.57 00:06:25.768 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:25.768 Nvme2n1 : 1.02 10017.21 39.13 0.00 0.00 12687.58 8922.98 19862.45 00:06:25.768 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:25.768 Nvme2n2 : 1.02 10004.82 39.08 0.00 0.00 12680.94 9326.28 19459.15 00:06:25.768 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:25.768 Nvme2n3 : 1.02 9993.51 39.04 0.00 0.00 12674.10 9074.22 19156.68 00:06:25.768 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:25.768 Nvme3n1 : 1.03 9982.23 38.99 0.00 0.00 12669.71 8872.57 20064.10 00:06:25.768 =================================================================================================================== 00:06:25.768 Total : 60066.44 234.63 0.00 0.00 12691.73 4940.41 27424.30 00:06:26.709 00:06:26.709 real 0m2.818s 00:06:26.709 user 0m2.505s 00:06:26.709 sys 0m0.200s 00:06:26.709 23:27:14 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:26.709 ************************************ 00:06:26.709 END TEST bdev_write_zeroes 00:06:26.709 ************************************ 00:06:26.709 23:27:14 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:06:26.709 23:27:14 blockdev_nvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:26.709 23:27:14 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:06:26.709 23:27:14 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:26.709 23:27:14 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:26.709 ************************************ 00:06:26.709 START TEST bdev_json_nonenclosed 00:06:26.709 ************************************ 00:06:26.709 23:27:14 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:26.709 [2024-09-28 23:27:14.757816] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:06:26.709 [2024-09-28 23:27:14.757945] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61092 ] 00:06:26.969 [2024-09-28 23:27:14.910903] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.230 [2024-09-28 23:27:15.152621] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.230 [2024-09-28 23:27:15.152726] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
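bdev_json_nonenclosed is a negative test: bdevperf is pointed at a config file whose JSON is deliberately not wrapped in a top-level object, and the test evidently passes when the app refuses it and exits non-zero (the spdk_app_stop'd-on-non-zero warning below). The fixture's exact contents are not shown in the log, but the failure mode is the difference between a bare fragment such as

    "subsystems": []

(not valid JSON on its own, hence "not enclosed in {}") and a well-formed skeleton:

    { "subsystems": [] }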
00:06:27.230 [2024-09-28 23:27:15.152746] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:27.230 [2024-09-28 23:27:15.152758] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:27.490 00:06:27.490 real 0m0.820s 00:06:27.490 user 0m0.584s 00:06:27.490 sys 0m0.128s 00:06:27.490 23:27:15 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:27.490 ************************************ 00:06:27.490 END TEST bdev_json_nonenclosed 00:06:27.490 ************************************ 00:06:27.490 23:27:15 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:06:27.490 23:27:15 blockdev_nvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:27.490 23:27:15 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:06:27.490 23:27:15 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:27.490 23:27:15 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:27.490 ************************************ 00:06:27.490 START TEST bdev_json_nonarray 00:06:27.490 ************************************ 00:06:27.490 23:27:15 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:27.751 [2024-09-28 23:27:15.664392] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:06:27.751 [2024-09-28 23:27:15.664581] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61123 ] 00:06:27.751 [2024-09-28 23:27:15.820538] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.011 [2024-09-28 23:27:16.103890] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.012 [2024-09-28 23:27:16.104027] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
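Note: the two bdev_json_* cases here are negative tests: they feed bdevperf deliberately malformed configs and expect json_config_prepare_ctx to reject them before any bdev is created. Judging from the two *ERROR* lines recorded in this run, the shapes involved are roughly as follows (illustrative reconstructions, not the repo files verbatim):

  nonenclosed.json, top level not wrapped in an object ("not enclosed in {}"):
      "subsystems": []

  nonarray.json, "subsystems" present but an object rather than an array:
      { "subsystems": { "subsystem": "bdev", "config": [] } }

  a loadable config is an object whose "subsystems" key is an array:
      { "subsystems": [ { "subsystem": "bdev", "config": [] } ] }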
00:06:28.012 [2024-09-28 23:27:16.104051] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:28.012 [2024-09-28 23:27:16.104064] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:28.584 00:06:28.584 real 0m0.887s 00:06:28.584 user 0m0.624s 00:06:28.584 sys 0m0.153s 00:06:28.584 23:27:16 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:28.584 23:27:16 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:06:28.584 ************************************ 00:06:28.584 END TEST bdev_json_nonarray 00:06:28.584 ************************************ 00:06:28.584 23:27:16 blockdev_nvme -- bdev/blockdev.sh@786 -- # [[ nvme == bdev ]] 00:06:28.584 23:27:16 blockdev_nvme -- bdev/blockdev.sh@793 -- # [[ nvme == gpt ]] 00:06:28.584 23:27:16 blockdev_nvme -- bdev/blockdev.sh@797 -- # [[ nvme == crypto_sw ]] 00:06:28.584 23:27:16 blockdev_nvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:06:28.584 23:27:16 blockdev_nvme -- bdev/blockdev.sh@810 -- # cleanup 00:06:28.584 23:27:16 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:06:28.584 23:27:16 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:28.584 23:27:16 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:06:28.584 23:27:16 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:06:28.584 23:27:16 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:06:28.584 23:27:16 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:06:28.584 00:06:28.584 real 0m40.098s 00:06:28.584 user 0m59.948s 00:06:28.584 sys 0m5.823s 00:06:28.584 23:27:16 blockdev_nvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:28.584 ************************************ 00:06:28.584 END TEST blockdev_nvme 00:06:28.584 ************************************ 00:06:28.584 23:27:16 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:28.584 23:27:16 -- spdk/autotest.sh@209 -- # uname -s 00:06:28.584 23:27:16 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:06:28.584 23:27:16 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:06:28.584 23:27:16 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:28.584 23:27:16 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:28.584 23:27:16 -- common/autotest_common.sh@10 -- # set +x 00:06:28.585 ************************************ 00:06:28.585 START TEST blockdev_nvme_gpt 00:06:28.585 ************************************ 00:06:28.585 23:27:16 blockdev_nvme_gpt -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:06:28.585 * Looking for test storage... 
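Note: the blockdev_nvme_gpt prologue that follows gates the lcov coverage options on the installed lcov version (the "lt 1.15 2" walk). Reconstructed from the scripts/common.sh xtrace below (a sketch of the traced logic, not the source verbatim), the helper splits each version string on '.', '-' and ':' and compares the pieces numerically, treating missing components as 0:

  lt() { cmp_versions "$1" '<' "$2"; }

  cmp_versions() {
      local ver1 ver2 ver1_l ver2_l op=$2 v
      IFS='.-:' read -ra ver1 <<< "$1"
      IFS='.-:' read -ra ver2 <<< "$3"
      ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
      for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do
          # unset components compare as 0, so "1.15" vs "2" behaves like 1.15 vs 2.0
          ((ver1[v] > ver2[v])) && { [[ $op == '>' ]]; return; }
          ((ver1[v] < ver2[v])) && { [[ $op == '<' ]]; return; }
      done
      [[ $op == *'='* ]]
  }

  lt 1.15 2 && echo "lcov older than 2.x: keep the legacy --rc lcov_* option names"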
00:06:28.585 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:28.585 23:27:16 blockdev_nvme_gpt -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:28.585 23:27:16 blockdev_nvme_gpt -- common/autotest_common.sh@1681 -- # lcov --version 00:06:28.585 23:27:16 blockdev_nvme_gpt -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:28.847 23:27:16 blockdev_nvme_gpt -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:28.847 23:27:16 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:28.847 23:27:16 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:28.847 23:27:16 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:28.847 23:27:16 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:06:28.847 23:27:16 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:06:28.847 23:27:16 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:06:28.847 23:27:16 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:06:28.847 23:27:16 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:06:28.847 23:27:16 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:06:28.847 23:27:16 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:06:28.847 23:27:16 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:28.847 23:27:16 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:06:28.847 23:27:16 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:06:28.847 23:27:16 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:28.847 23:27:16 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:28.847 23:27:16 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:06:28.847 23:27:16 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:06:28.847 23:27:16 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:28.847 23:27:16 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:06:28.847 23:27:16 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:06:28.847 23:27:16 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:06:28.847 23:27:16 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:06:28.847 23:27:16 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:28.847 23:27:16 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:06:28.847 23:27:16 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:06:28.847 23:27:16 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:28.847 23:27:16 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:28.847 23:27:16 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:06:28.847 23:27:16 blockdev_nvme_gpt -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:28.847 23:27:16 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:28.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.847 --rc genhtml_branch_coverage=1 00:06:28.847 --rc genhtml_function_coverage=1 00:06:28.847 --rc genhtml_legend=1 00:06:28.848 --rc geninfo_all_blocks=1 00:06:28.848 --rc geninfo_unexecuted_blocks=1 00:06:28.848 00:06:28.848 ' 00:06:28.848 23:27:16 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:28.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.848 --rc 
genhtml_branch_coverage=1 00:06:28.848 --rc genhtml_function_coverage=1 00:06:28.848 --rc genhtml_legend=1 00:06:28.848 --rc geninfo_all_blocks=1 00:06:28.848 --rc geninfo_unexecuted_blocks=1 00:06:28.848 00:06:28.848 ' 00:06:28.848 23:27:16 blockdev_nvme_gpt -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:28.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.848 --rc genhtml_branch_coverage=1 00:06:28.848 --rc genhtml_function_coverage=1 00:06:28.848 --rc genhtml_legend=1 00:06:28.848 --rc geninfo_all_blocks=1 00:06:28.848 --rc geninfo_unexecuted_blocks=1 00:06:28.848 00:06:28.848 ' 00:06:28.848 23:27:16 blockdev_nvme_gpt -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:28.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.848 --rc genhtml_branch_coverage=1 00:06:28.848 --rc genhtml_function_coverage=1 00:06:28.848 --rc genhtml_legend=1 00:06:28.848 --rc geninfo_all_blocks=1 00:06:28.848 --rc geninfo_unexecuted_blocks=1 00:06:28.848 00:06:28.848 ' 00:06:28.848 23:27:16 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:28.848 23:27:16 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:06:28.848 23:27:16 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:06:28.848 23:27:16 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:28.848 23:27:16 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:06:28.848 23:27:16 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:06:28.848 23:27:16 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:06:28.848 23:27:16 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:06:28.848 23:27:16 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:06:28.848 23:27:16 blockdev_nvme_gpt -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:06:28.848 23:27:16 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:06:28.848 23:27:16 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:06:28.848 23:27:16 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # uname -s 00:06:28.848 23:27:16 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:06:28.848 23:27:16 blockdev_nvme_gpt -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:06:28.848 23:27:16 blockdev_nvme_gpt -- bdev/blockdev.sh@681 -- # test_type=gpt 00:06:28.848 23:27:16 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # crypto_device= 00:06:28.848 23:27:16 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # dek= 00:06:28.848 23:27:16 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # env_ctx= 00:06:28.848 23:27:16 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:06:28.848 23:27:16 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:06:28.848 23:27:16 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == bdev ]] 00:06:28.848 23:27:16 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == crypto_* ]] 00:06:28.848 23:27:16 blockdev_nvme_gpt -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:06:28.848 23:27:16 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=61207 00:06:28.848 23:27:16 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:06:28.848 23:27:16 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:06:28.848 23:27:16 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 61207 00:06:28.848 23:27:16 blockdev_nvme_gpt -- common/autotest_common.sh@831 -- # '[' -z 61207 ']' 00:06:28.848 23:27:16 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:28.848 23:27:16 blockdev_nvme_gpt -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:28.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:28.848 23:27:16 blockdev_nvme_gpt -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:28.848 23:27:16 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:28.848 23:27:16 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:28.848 [2024-09-28 23:27:16.885827] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:06:28.848 [2024-09-28 23:27:16.886239] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61207 ] 00:06:29.109 [2024-09-28 23:27:17.045549] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.372 [2024-09-28 23:27:17.337718] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.315 23:27:18 blockdev_nvme_gpt -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:30.315 23:27:18 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # return 0 00:06:30.315 23:27:18 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:06:30.315 23:27:18 blockdev_nvme_gpt -- bdev/blockdev.sh@701 -- # setup_gpt_conf 00:06:30.315 23:27:18 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:30.315 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:30.576 Waiting for block devices as requested 00:06:30.576 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:06:30.836 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:06:30.836 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:06:30.836 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:06:36.186 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:06:36.186 23:27:24 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:06:36.186 23:27:24 blockdev_nvme_gpt -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:06:36.186 23:27:24 blockdev_nvme_gpt -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:06:36.186 23:27:24 blockdev_nvme_gpt -- common/autotest_common.sh@1656 -- # local nvme bdf 00:06:36.186 23:27:24 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:36.186 23:27:24 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:06:36.186 23:27:24 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:06:36.186 23:27:24 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:36.186 23:27:24 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:36.186 23:27:24 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 
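Note: the device loop entered just above (and continuing below) is get_zoned_devs, a plain sysfs probe evidently used to keep zoned namespaces out of the GPT test. Per device, the traced check amounts to (a sketch of the logic, not the autotest_common.sh source verbatim):

  is_block_zoned() {
      local device=$1
      [[ -e /sys/block/$device/queue/zoned ]] || return 1
      # the kernel reports "none" for conventional block devices;
      # "host-aware" or "host-managed" mark zoned ones
      [[ $(< "/sys/block/$device/queue/zoned") != none ]]
  }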
00:06:36.186 23:27:24 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:06:36.186 23:27:24 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:06:36.186 23:27:24 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:06:36.186 23:27:24 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:36.186 23:27:24 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:36.186 23:27:24 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:06:36.186 23:27:24 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:06:36.186 23:27:24 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:06:36.186 23:27:24 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:36.186 23:27:24 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:36.186 23:27:24 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:06:36.186 23:27:24 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:06:36.186 23:27:24 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:06:36.186 23:27:24 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:36.186 23:27:24 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:36.186 23:27:24 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:06:36.186 23:27:24 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:06:36.186 23:27:24 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:06:36.186 23:27:24 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:36.186 23:27:24 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:36.186 23:27:24 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:06:36.186 23:27:24 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:06:36.186 23:27:24 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:06:36.186 23:27:24 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:36.186 23:27:24 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:36.186 23:27:24 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:06:36.186 23:27:24 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:06:36.186 23:27:24 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:06:36.186 23:27:24 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:36.186 23:27:24 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:06:36.186 23:27:24 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:06:36.186 23:27:24 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:06:36.186 23:27:24 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 
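Note: the step that follows stamps a fresh GPT label onto the first usable namespace and splits it in half. Condensed from the parted/sgdisk trace below, with the partition-type GUIDs grepped out of module/bdev/gpt/gpt.h, the sequence is effectively:

  parted -s /dev/nvme0n1 mklabel gpt \
      mkpart SPDK_TEST_first 0% 50% \
      mkpart SPDK_TEST_second 50% 100%
  # partition 1 gets SPDK_GPT_PART_TYPE_GUID, partition 2 the _OLD variant
  sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b \
         -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1
  sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c \
         -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1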
00:06:36.186 23:27:24 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:06:36.186 23:27:24 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:06:36.186 23:27:24 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:06:36.186 23:27:24 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:06:36.186 BYT; 00:06:36.186 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:06:36.186 23:27:24 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:06:36.186 BYT; 00:06:36.186 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:06:36.186 23:27:24 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:06:36.186 23:27:24 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:06:36.186 23:27:24 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:06:36.186 23:27:24 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:06:36.186 23:27:24 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:06:36.186 23:27:24 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:06:36.186 23:27:24 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:06:36.186 23:27:24 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:06:36.186 23:27:24 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:06:36.186 23:27:24 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:36.186 23:27:24 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:06:36.186 23:27:24 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:06:36.186 23:27:24 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:36.186 23:27:24 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:06:36.186 23:27:24 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:36.186 23:27:24 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:36.186 23:27:24 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:36.186 23:27:24 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:06:36.186 23:27:24 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:06:36.186 23:27:24 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:06:36.186 23:27:24 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:36.186 23:27:24 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:06:36.186 23:27:24 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:06:36.186 23:27:24 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:36.186 23:27:24 blockdev_nvme_gpt -- scripts/common.sh@429 -- # 
spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:06:36.186 23:27:24 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:36.186 23:27:24 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:36.186 23:27:24 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:36.186 23:27:24 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:06:37.128 The operation has completed successfully. 00:06:37.128 23:27:25 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:06:38.518 The operation has completed successfully. 00:06:38.518 23:27:26 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:38.779 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:39.351 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:39.351 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:06:39.351 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:39.351 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:06:39.351 23:27:27 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:06:39.351 23:27:27 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:39.351 23:27:27 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:39.351 [] 00:06:39.351 23:27:27 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:39.351 23:27:27 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:06:39.351 23:27:27 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:06:39.351 23:27:27 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:06:39.351 23:27:27 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:39.613 23:27:27 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:06:39.613 23:27:27 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:39.613 23:27:27 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:39.875 23:27:27 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:39.875 23:27:27 blockdev_nvme_gpt -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:06:39.875 23:27:27 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:39.875 23:27:27 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:39.875 23:27:27 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:39.875 23:27:27 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # cat 00:06:39.875 23:27:27 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:06:39.875 23:27:27 
blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:39.876 23:27:27 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:39.876 23:27:27 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:39.876 23:27:27 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:06:39.876 23:27:27 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:39.876 23:27:27 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:39.876 23:27:27 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:39.876 23:27:27 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:06:39.876 23:27:27 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:39.876 23:27:27 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:39.876 23:27:27 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:39.876 23:27:27 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:06:39.876 23:27:27 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:06:39.876 23:27:27 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:06:39.876 23:27:27 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:39.876 23:27:27 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:39.876 23:27:27 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:39.876 23:27:27 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:06:39.876 23:27:27 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r .name 00:06:39.876 23:27:27 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "f16501a7-f960-40fe-a12f-a88c01387426"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "f16501a7-f960-40fe-a12f-a88c01387426",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' 
"num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "23d3a10e-5e1b-4ac8-b631-127d1784281b"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "23d3a10e-5e1b-4ac8-b631-127d1784281b",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' 
"nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "44432fad-ac55-420c-8f1f-d97ddf3fbebf"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "44432fad-ac55-420c-8f1f-d97ddf3fbebf",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "9f5dc3a7-3ceb-4503-aa3b-1cfa45ca6592"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "9f5dc3a7-3ceb-4503-aa3b-1cfa45ca6592",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "bb7a89a6-5e9f-4e16-82a6-900d616ec172"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "bb7a89a6-5e9f-4e16-82a6-900d616ec172",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:06:39.876 23:27:27 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:06:39.876 23:27:27 blockdev_nvme_gpt -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:06:39.876 23:27:27 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:06:39.876 23:27:27 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # killprocess 61207 00:06:39.876 23:27:27 blockdev_nvme_gpt -- common/autotest_common.sh@950 -- # '[' -z 61207 ']' 00:06:39.876 23:27:27 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # kill -0 61207 00:06:39.876 23:27:27 blockdev_nvme_gpt -- common/autotest_common.sh@955 -- # uname 00:06:39.876 23:27:27 blockdev_nvme_gpt -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:39.876 23:27:28 blockdev_nvme_gpt -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61207 00:06:39.876 killing process with pid 61207 00:06:39.877 23:27:28 blockdev_nvme_gpt -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:39.877 23:27:28 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:39.877 23:27:28 blockdev_nvme_gpt -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61207' 00:06:39.877 23:27:28 blockdev_nvme_gpt -- common/autotest_common.sh@969 -- # kill 61207 00:06:39.877 23:27:28 blockdev_nvme_gpt -- common/autotest_common.sh@974 -- # wait 61207 00:06:41.792 23:27:29 blockdev_nvme_gpt -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:41.792 23:27:29 blockdev_nvme_gpt -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:41.792 23:27:29 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:06:41.792 23:27:29 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:41.792 23:27:29 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:41.792 ************************************ 00:06:41.792 START TEST bdev_hello_world 00:06:41.792 ************************************ 00:06:41.792 23:27:29 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:42.051 
[2024-09-28 23:27:29.974659] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:06:42.051 [2024-09-28 23:27:29.974919] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61839 ] 00:06:42.052 [2024-09-28 23:27:30.123457] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.310 [2024-09-28 23:27:30.316539] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.876 [2024-09-28 23:27:30.829960] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:06:42.876 [2024-09-28 23:27:30.830148] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:06:42.876 [2024-09-28 23:27:30.830173] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:06:42.876 [2024-09-28 23:27:30.832212] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:06:42.876 [2024-09-28 23:27:30.832640] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:06:42.876 [2024-09-28 23:27:30.832663] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:06:42.876 [2024-09-28 23:27:30.832842] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:06:42.876 00:06:42.876 [2024-09-28 23:27:30.832859] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:06:43.444 00:06:43.444 real 0m1.620s 00:06:43.444 user 0m1.303s 00:06:43.444 sys 0m0.209s 00:06:43.444 23:27:31 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:43.444 23:27:31 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:06:43.444 ************************************ 00:06:43.444 END TEST bdev_hello_world 00:06:43.444 ************************************ 00:06:43.444 23:27:31 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:06:43.444 23:27:31 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:43.444 23:27:31 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:43.444 23:27:31 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:43.444 ************************************ 00:06:43.444 START TEST bdev_bounds 00:06:43.444 ************************************ 00:06:43.444 Process bdevio pid: 61881 00:06:43.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
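Note: bdev_bounds wraps the bdevio example app: bdevio is started against the same generated bdev.json and stays resident while a helper script drives the test suites over the RPC socket. Stripped of the harness, the pair of commands traced below is approximately (the backgrounding and sleep are assumptions of this sketch; the harness waits on /var/tmp/spdk.sock via waitforlisten instead):

  /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
  sleep 1   # crude stand-in for waitforlisten
  /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests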
00:06:43.444 23:27:31 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:06:43.444 23:27:31 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=61881 00:06:43.444 23:27:31 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:06:43.444 23:27:31 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 61881' 00:06:43.444 23:27:31 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 61881 00:06:43.444 23:27:31 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 61881 ']' 00:06:43.444 23:27:31 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:43.444 23:27:31 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:43.444 23:27:31 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:43.444 23:27:31 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:43.444 23:27:31 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:43.444 23:27:31 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:43.703 [2024-09-28 23:27:31.650192] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:06:43.703 [2024-09-28 23:27:31.650315] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61881 ] 00:06:43.703 [2024-09-28 23:27:31.799076] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:43.962 [2024-09-28 23:27:32.000474] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:43.962 [2024-09-28 23:27:32.000562] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.962 [2024-09-28 23:27:32.000591] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:44.563 23:27:32 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:44.563 23:27:32 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:06:44.563 23:27:32 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:06:44.563 I/O targets: 00:06:44.563 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:06:44.563 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:06:44.563 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:06:44.563 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:44.563 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:44.563 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:44.563 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:06:44.563 00:06:44.564 00:06:44.564 CUnit - A unit testing framework for C - Version 2.1-3 00:06:44.564 http://cunit.sourceforge.net/ 00:06:44.564 00:06:44.564 00:06:44.564 Suite: bdevio tests on: Nvme3n1 00:06:44.564 Test: blockdev write read block ...passed 00:06:44.564 Test: blockdev write zeroes read block ...passed 00:06:44.564 Test: blockdev write zeroes read no split 
...passed 00:06:44.564 Test: blockdev write zeroes read split ...passed 00:06:44.564 Test: blockdev write zeroes read split partial ...passed 00:06:44.564 Test: blockdev reset ...[2024-09-28 23:27:32.687045] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:06:44.564 [2024-09-28 23:27:32.690013] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:06:44.564 passed 00:06:44.564 Test: blockdev write read 8 blocks ...passed 00:06:44.564 Test: blockdev write read size > 128k ...passed 00:06:44.564 Test: blockdev write read invalid size ...passed 00:06:44.564 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:44.564 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:44.564 Test: blockdev write read max offset ...passed 00:06:44.564 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:44.564 Test: blockdev writev readv 8 blocks ...passed 00:06:44.564 Test: blockdev writev readv 30 x 1block ...passed 00:06:44.564 Test: blockdev writev readv block ...passed 00:06:44.564 Test: blockdev writev readv size > 128k ...passed 00:06:44.564 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:44.564 Test: blockdev comparev and writev ...[2024-09-28 23:27:32.699613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2abc06000 len:0x1000 00:06:44.564 [2024-09-28 23:27:32.699762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:44.564 passed 00:06:44.564 Test: blockdev nvme passthru rw ...passed 00:06:44.564 Test: blockdev nvme passthru vendor specific ...[2024-09-28 23:27:32.701029] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:44.564 [2024-09-28 23:27:32.701133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:44.564 passed 00:06:44.564 Test: blockdev nvme admin passthru ...passed 00:06:44.564 Test: blockdev copy ...passed 00:06:44.564 Suite: bdevio tests on: Nvme2n3 00:06:44.564 Test: blockdev write read block ...passed 00:06:44.564 Test: blockdev write zeroes read block ...passed 00:06:44.564 Test: blockdev write zeroes read no split ...passed 00:06:44.823 Test: blockdev write zeroes read split ...passed 00:06:44.823 Test: blockdev write zeroes read split partial ...passed 00:06:44.824 Test: blockdev reset ...[2024-09-28 23:27:32.757668] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:06:44.824 [2024-09-28 23:27:32.764234] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:06:44.824 passed 00:06:44.824 Test: blockdev write read 8 blocks ...passed 00:06:44.824 Test: blockdev write read size > 128k ...passed 00:06:44.824 Test: blockdev write read invalid size ...passed 00:06:44.824 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:44.824 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:44.824 Test: blockdev write read max offset ...passed 00:06:44.824 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:44.824 Test: blockdev writev readv 8 blocks ...passed 00:06:44.824 Test: blockdev writev readv 30 x 1block ...passed 00:06:44.824 Test: blockdev writev readv block ...passed 00:06:44.824 Test: blockdev writev readv size > 128k ...passed 00:06:44.824 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:44.824 Test: blockdev comparev and writev ...[2024-09-28 23:27:32.778208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2e623c000 len:0x1000 00:06:44.824 [2024-09-28 23:27:32.778250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:44.824 passed 00:06:44.824 Test: blockdev nvme passthru rw ...passed 00:06:44.824 Test: blockdev nvme passthru vendor specific ...[2024-09-28 23:27:32.779178] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1passed 00:06:44.824 Test: blockdev nvme admin passthru ... cid:190 PRP1 0x0 PRP2 0x0 00:06:44.824 [2024-09-28 23:27:32.779270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:44.824 passed 00:06:44.824 Test: blockdev copy ...passed 00:06:44.824 Suite: bdevio tests on: Nvme2n2 00:06:44.824 Test: blockdev write read block ...passed 00:06:44.824 Test: blockdev write zeroes read block ...passed 00:06:44.824 Test: blockdev write zeroes read no split ...passed 00:06:44.824 Test: blockdev write zeroes read split ...passed 00:06:44.824 Test: blockdev write zeroes read split partial ...passed 00:06:44.824 Test: blockdev reset ...[2024-09-28 23:27:32.842126] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:06:44.824 [2024-09-28 23:27:32.849764] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:06:44.824 passed 00:06:44.824 Test: blockdev write read 8 blocks ...passed 00:06:44.824 Test: blockdev write read size > 128k ...passed 00:06:44.824 Test: blockdev write read invalid size ...passed 00:06:44.824 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:44.824 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:44.824 Test: blockdev write read max offset ...passed 00:06:44.824 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:44.824 Test: blockdev writev readv 8 blocks ...passed 00:06:44.824 Test: blockdev writev readv 30 x 1block ...passed 00:06:44.824 Test: blockdev writev readv block ...passed 00:06:44.824 Test: blockdev writev readv size > 128k ...passed 00:06:44.824 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:44.824 Test: blockdev comparev and writev ...[2024-09-28 23:27:32.868250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2e6236000 len:0x1000 00:06:44.824 [2024-09-28 23:27:32.868290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:44.824 passed 00:06:44.824 Test: blockdev nvme passthru rw ...passed 00:06:44.824 Test: blockdev nvme passthru vendor specific ...passed 00:06:44.824 Test: blockdev nvme admin passthru ...[2024-09-28 23:27:32.870535] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:44.824 [2024-09-28 23:27:32.870569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:44.824 passed 00:06:44.824 Test: blockdev copy ...passed 00:06:44.824 Suite: bdevio tests on: Nvme2n1 00:06:44.824 Test: blockdev write read block ...passed 00:06:44.824 Test: blockdev write zeroes read block ...passed 00:06:44.824 Test: blockdev write zeroes read no split ...passed 00:06:44.824 Test: blockdev write zeroes read split ...passed 00:06:44.824 Test: blockdev write zeroes read split partial ...passed 00:06:44.824 Test: blockdev reset ...[2024-09-28 23:27:32.923125] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:06:44.824 [2024-09-28 23:27:32.926193] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:06:44.824 passed 00:06:44.824 Test: blockdev write read 8 blocks ...passed 00:06:44.824 Test: blockdev write read size > 128k ...passed 00:06:44.824 Test: blockdev write read invalid size ...passed 00:06:44.824 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:44.824 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:44.824 Test: blockdev write read max offset ...passed 00:06:44.824 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:44.824 Test: blockdev writev readv 8 blocks ...passed 00:06:44.824 Test: blockdev writev readv 30 x 1block ...passed 00:06:44.824 Test: blockdev writev readv block ...passed 00:06:44.824 Test: blockdev writev readv size > 128k ...passed 00:06:44.824 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:44.824 Test: blockdev comparev and writev ...[2024-09-28 23:27:32.938861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2e6232000 len:0x1000 00:06:44.824 [2024-09-28 23:27:32.938906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:44.824 passed 00:06:44.824 Test: blockdev nvme passthru rw ...passed 00:06:44.824 Test: blockdev nvme passthru vendor specific ...passed 00:06:44.824 Test: blockdev nvme admin passthru ...[2024-09-28 23:27:32.940287] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:44.824 [2024-09-28 23:27:32.940316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:44.824 passed 00:06:44.824 Test: blockdev copy ...passed 00:06:44.824 Suite: bdevio tests on: Nvme1n1p2 00:06:44.824 Test: blockdev write read block ...passed 00:06:44.824 Test: blockdev write zeroes read block ...passed 00:06:44.824 Test: blockdev write zeroes read no split ...passed 00:06:44.824 Test: blockdev write zeroes read split ...passed 00:06:45.085 Test: blockdev write zeroes read split partial ...passed 00:06:45.085 Test: blockdev reset ...[2024-09-28 23:27:32.995586] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:06:45.085 [2024-09-28 23:27:32.998843] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:06:45.085 passed 00:06:45.085 Test: blockdev write read 8 blocks ...passed 00:06:45.085 Test: blockdev write read size > 128k ...passed 00:06:45.085 Test: blockdev write read invalid size ...passed 00:06:45.086 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:45.086 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:45.086 Test: blockdev write read max offset ...passed 00:06:45.086 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:45.086 Test: blockdev writev readv 8 blocks ...passed 00:06:45.086 Test: blockdev writev readv 30 x 1block ...passed 00:06:45.086 Test: blockdev writev readv block ...passed 00:06:45.086 Test: blockdev writev readv size > 128k ...passed 00:06:45.086 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:45.086 Test: blockdev comparev and writev ...[2024-09-28 23:27:33.018240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2e622e000 len:0x1000 00:06:45.086 [2024-09-28 23:27:33.018387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:45.086 passed 00:06:45.086 Test: blockdev nvme passthru rw ...passed 00:06:45.086 Test: blockdev nvme passthru vendor specific ...passed 00:06:45.086 Test: blockdev nvme admin passthru ...passed 00:06:45.086 Test: blockdev copy ...passed 00:06:45.086 Suite: bdevio tests on: Nvme1n1p1 00:06:45.086 Test: blockdev write read block ...passed 00:06:45.086 Test: blockdev write zeroes read block ...passed 00:06:45.086 Test: blockdev write zeroes read no split ...passed 00:06:45.086 Test: blockdev write zeroes read split ...passed 00:06:45.086 Test: blockdev write zeroes read split partial ...passed 00:06:45.086 Test: blockdev reset ...[2024-09-28 23:27:33.073845] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:06:45.086 [2024-09-28 23:27:33.078018] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:06:45.086 passed 00:06:45.086 Test: blockdev write read 8 blocks ...passed 00:06:45.086 Test: blockdev write read size > 128k ...passed 00:06:45.086 Test: blockdev write read invalid size ...passed 00:06:45.086 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:45.086 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:45.086 Test: blockdev write read max offset ...passed 00:06:45.086 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:45.086 Test: blockdev writev readv 8 blocks ...passed 00:06:45.086 Test: blockdev writev readv 30 x 1block ...passed 00:06:45.086 Test: blockdev writev readv block ...passed 00:06:45.086 Test: blockdev writev readv size > 128k ...passed 00:06:45.086 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:45.086 Test: blockdev comparev and writev ...[2024-09-28 23:27:33.099078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2b1e0e000 len:0x1000 00:06:45.086 [2024-09-28 23:27:33.099182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:45.086 passed 00:06:45.086 Test: blockdev nvme passthru rw ...passed 00:06:45.086 Test: blockdev nvme passthru vendor specific ...passed 00:06:45.086 Test: blockdev nvme admin passthru ...passed 00:06:45.086 Test: blockdev copy ...passed 00:06:45.086 Suite: bdevio tests on: Nvme0n1 00:06:45.086 Test: blockdev write read block ...passed 00:06:45.086 Test: blockdev write zeroes read block ...passed 00:06:45.086 Test: blockdev write zeroes read no split ...passed 00:06:45.086 Test: blockdev write zeroes read split ...passed 00:06:45.086 Test: blockdev write zeroes read split partial ...passed 00:06:45.086 Test: blockdev reset ...[2024-09-28 23:27:33.157866] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:06:45.086 [2024-09-28 23:27:33.162277] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:06:45.086 passed 00:06:45.086 Test: blockdev write read 8 blocks ...passed 00:06:45.086 Test: blockdev write read size > 128k ...passed 00:06:45.086 Test: blockdev write read invalid size ...passed 00:06:45.086 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:45.086 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:45.086 Test: blockdev write read max offset ...passed 00:06:45.086 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:45.086 Test: blockdev writev readv 8 blocks ...passed 00:06:45.086 Test: blockdev writev readv 30 x 1block ...passed 00:06:45.086 Test: blockdev writev readv block ...passed 00:06:45.086 Test: blockdev writev readv size > 128k ...passed 00:06:45.086 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:45.086 Test: blockdev comparev and writev ...passed 00:06:45.086 Test: blockdev nvme passthru rw ...[2024-09-28 23:27:33.179906] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:06:45.086 separate metadata which is not supported yet. 
00:06:45.086 passed 00:06:45.086 Test: blockdev nvme passthru vendor specific ...passed 00:06:45.086 Test: blockdev nvme admin passthru ...[2024-09-28 23:27:33.181634] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:06:45.086 [2024-09-28 23:27:33.181694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:06:45.086 passed 00:06:45.086 Test: blockdev copy ...passed 00:06:45.086 00:06:45.086 Run Summary: Type Total Ran Passed Failed Inactive 00:06:45.086 suites 7 7 n/a 0 0 00:06:45.086 tests 161 161 161 0 0 00:06:45.086 asserts 1025 1025 1025 0 n/a 00:06:45.086 00:06:45.086 Elapsed time = 1.398 seconds 00:06:45.086 0 00:06:45.086 23:27:33 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 61881 00:06:45.086 23:27:33 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 61881 ']' 00:06:45.086 23:27:33 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 61881 00:06:45.086 23:27:33 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:06:45.086 23:27:33 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:45.086 23:27:33 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61881 00:06:45.086 23:27:33 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:45.086 23:27:33 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:45.086 23:27:33 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61881' 00:06:45.086 killing process with pid 61881 00:06:45.086 23:27:33 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@969 -- # kill 61881 00:06:45.086 23:27:33 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@974 -- # wait 61881 00:06:46.027 23:27:34 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:06:46.027 00:06:46.027 real 0m2.560s 00:06:46.027 user 0m6.093s 00:06:46.027 sys 0m0.358s 00:06:46.027 23:27:34 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:46.027 ************************************ 00:06:46.027 END TEST bdev_bounds 00:06:46.027 ************************************ 00:06:46.027 23:27:34 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:46.288 23:27:34 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:46.288 23:27:34 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:06:46.288 23:27:34 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:46.288 23:27:34 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:46.288 ************************************ 00:06:46.288 START TEST bdev_nbd 00:06:46.288 ************************************ 00:06:46.288 23:27:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:46.288 23:27:34 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:06:46.288 23:27:34 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ 
Linux == Linux ]] 00:06:46.288 23:27:34 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:46.288 23:27:34 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:46.288 23:27:34 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:46.288 23:27:34 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:06:46.288 23:27:34 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:06:46.288 23:27:34 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:06:46.288 23:27:34 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:06:46.288 23:27:34 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:06:46.288 23:27:34 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:06:46.288 23:27:34 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:46.288 23:27:34 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:06:46.288 23:27:34 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:46.288 23:27:34 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:06:46.288 23:27:34 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=61940 00:06:46.288 23:27:34 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:06:46.288 23:27:34 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 61940 /var/tmp/spdk-nbd.sock 00:06:46.288 23:27:34 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:46.288 23:27:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 61940 ']' 00:06:46.288 23:27:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:46.288 23:27:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:46.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:46.288 23:27:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:46.288 23:27:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:46.288 23:27:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:46.288 [2024-09-28 23:27:34.300224] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
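The nbd_function_test flow above boots a minimal bdev_svc app on a private RPC socket before any NBD device is attached, and waitforlisten blocks until that socket answers. A condensed sketch of the same startup handshake, assuming the stock SPDK rpc.py client and treating the poll-loop bounds as illustrative (the real waitforlisten helper in autotest_common.sh does the equivalent polling):

  # Start the bdev application with a private RPC socket (paths as in the trace above).
  sock=/var/tmp/spdk-nbd.sock
  /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r "$sock" -i 0 \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
  svc_pid=$!
  # Poll until the app answers on the UNIX socket; rpc_get_methods is a cheap query.
  for _ in $(seq 1 100); do
      if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1; then
          break
      fi
      sleep 0.1
  done
  echo "bdev_svc (pid $svc_pid) is listening on $sock"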
00:06:46.289 [2024-09-28 23:27:34.301200] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:46.289 [2024-09-28 23:27:34.453920] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.862 [2024-09-28 23:27:34.759044] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.433 23:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:47.433 23:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:06:47.433 23:27:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:47.433 23:27:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:47.433 23:27:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:47.433 23:27:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:06:47.433 23:27:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:47.433 23:27:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:47.433 23:27:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:47.433 23:27:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:06:47.433 23:27:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:06:47.433 23:27:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:06:47.433 23:27:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:06:47.433 23:27:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:47.433 23:27:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:06:47.694 23:27:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:06:47.694 23:27:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:06:47.694 23:27:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:06:47.694 23:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:47.694 23:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:47.694 23:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:47.694 23:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:47.694 23:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:47.694 23:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:47.694 23:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:47.694 23:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:47.694 23:27:35 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:47.694 1+0 records in 00:06:47.694 1+0 records out 00:06:47.694 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00104404 s, 3.9 MB/s 00:06:47.694 23:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:47.694 23:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:47.694 23:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:47.694 23:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:47.694 23:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:47.694 23:27:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:47.694 23:27:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:47.694 23:27:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:06:47.955 23:27:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:06:47.955 23:27:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:06:47.955 23:27:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:06:47.955 23:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:47.955 23:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:47.955 23:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:47.955 23:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:47.955 23:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:47.955 23:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:47.955 23:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:47.955 23:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:47.955 23:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:47.955 1+0 records in 00:06:47.955 1+0 records out 00:06:47.955 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0011955 s, 3.4 MB/s 00:06:47.955 23:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:47.955 23:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:47.955 23:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:47.955 23:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:47.955 23:27:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:47.955 23:27:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:47.955 23:27:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:47.955 23:27:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:06:48.215 23:27:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:06:48.215 23:27:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:06:48.215 23:27:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:06:48.215 23:27:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:06:48.215 23:27:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:48.215 23:27:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:48.215 23:27:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:48.215 23:27:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:06:48.215 23:27:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:48.215 23:27:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:48.215 23:27:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:48.215 23:27:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:48.215 1+0 records in 00:06:48.215 1+0 records out 00:06:48.215 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00111446 s, 3.7 MB/s 00:06:48.215 23:27:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:48.215 23:27:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:48.215 23:27:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:48.215 23:27:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:48.215 23:27:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:48.215 23:27:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:48.215 23:27:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:48.215 23:27:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:06:48.475 23:27:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:06:48.475 23:27:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:06:48.475 23:27:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:06:48.475 23:27:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:06:48.475 23:27:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:48.475 23:27:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:48.475 23:27:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:48.475 23:27:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:06:48.475 23:27:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:48.475 23:27:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:48.475 23:27:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:48.475 23:27:36 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:48.475 1+0 records in 00:06:48.475 1+0 records out 00:06:48.475 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00106357 s, 3.9 MB/s 00:06:48.475 23:27:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:48.475 23:27:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:48.475 23:27:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:48.475 23:27:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:48.475 23:27:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:48.475 23:27:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:48.475 23:27:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:48.475 23:27:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:06:48.736 23:27:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:06:48.736 23:27:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:06:48.736 23:27:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:06:48.736 23:27:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:06:48.736 23:27:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:48.736 23:27:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:48.736 23:27:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:48.736 23:27:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:06:48.736 23:27:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:48.736 23:27:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:48.736 23:27:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:48.736 23:27:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:48.736 1+0 records in 00:06:48.736 1+0 records out 00:06:48.736 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00123366 s, 3.3 MB/s 00:06:48.736 23:27:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:48.736 23:27:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:48.736 23:27:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:48.736 23:27:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:48.736 23:27:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:48.736 23:27:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:48.736 23:27:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:48.736 23:27:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 
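Every attach in the trace follows the same start-then-probe pattern: nbd_start_disk maps a bdev onto /dev/nbdN, waitfornbd polls /proc/partitions until the kernel registers the device, and a single direct-I/O dd read proves the mapping actually serves data. A minimal sketch of one iteration, with the retry count and the scratch file path treated as illustrative (the trace writes to spdk/test/bdev/nbdtest):

  start_and_probe() {
      local bdev=$1 dev=$2 name=${2#/dev/}
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock \
          nbd_start_disk "$bdev" "$dev"
      # Wait until the kernel exposes the nbd device in /proc/partitions.
      for _ in $(seq 1 20); do
          grep -q -w "$name" /proc/partitions && break
          sleep 0.1
      done
      # One 4 KiB O_DIRECT read; a full-size copy confirms the device is live.
      dd if="$dev" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
      [ "$(stat -c %s /tmp/nbdtest)" -eq 4096 ]
  }
  start_and_probe Nvme0n1 /dev/nbd0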
00:06:48.997 23:27:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:06:48.997 23:27:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:06:48.997 23:27:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:06:48.998 23:27:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:06:48.998 23:27:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:48.998 23:27:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:48.998 23:27:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:48.998 23:27:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:06:48.998 23:27:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:48.998 23:27:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:48.998 23:27:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:48.998 23:27:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:48.998 1+0 records in 00:06:48.998 1+0 records out 00:06:48.998 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00110085 s, 3.7 MB/s 00:06:48.998 23:27:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:48.998 23:27:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:48.998 23:27:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:48.998 23:27:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:48.998 23:27:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:48.998 23:27:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:48.998 23:27:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:48.998 23:27:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:06:49.258 23:27:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:06:49.258 23:27:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:06:49.258 23:27:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:06:49.258 23:27:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd6 00:06:49.258 23:27:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:49.258 23:27:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:49.258 23:27:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:49.258 23:27:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd6 /proc/partitions 00:06:49.258 23:27:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:49.258 23:27:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:49.258 23:27:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:49.258 23:27:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd 
if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:49.258 1+0 records in 00:06:49.258 1+0 records out 00:06:49.258 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00099372 s, 4.1 MB/s 00:06:49.258 23:27:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:49.258 23:27:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:49.258 23:27:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:49.258 23:27:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:49.258 23:27:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:49.258 23:27:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:49.258 23:27:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:49.258 23:27:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:49.519 23:27:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:06:49.519 { 00:06:49.519 "nbd_device": "/dev/nbd0", 00:06:49.519 "bdev_name": "Nvme0n1" 00:06:49.519 }, 00:06:49.519 { 00:06:49.519 "nbd_device": "/dev/nbd1", 00:06:49.519 "bdev_name": "Nvme1n1p1" 00:06:49.519 }, 00:06:49.519 { 00:06:49.519 "nbd_device": "/dev/nbd2", 00:06:49.519 "bdev_name": "Nvme1n1p2" 00:06:49.519 }, 00:06:49.519 { 00:06:49.519 "nbd_device": "/dev/nbd3", 00:06:49.519 "bdev_name": "Nvme2n1" 00:06:49.519 }, 00:06:49.519 { 00:06:49.519 "nbd_device": "/dev/nbd4", 00:06:49.519 "bdev_name": "Nvme2n2" 00:06:49.519 }, 00:06:49.519 { 00:06:49.519 "nbd_device": "/dev/nbd5", 00:06:49.519 "bdev_name": "Nvme2n3" 00:06:49.519 }, 00:06:49.519 { 00:06:49.519 "nbd_device": "/dev/nbd6", 00:06:49.519 "bdev_name": "Nvme3n1" 00:06:49.519 } 00:06:49.519 ]' 00:06:49.519 23:27:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:06:49.519 23:27:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:06:49.519 23:27:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:06:49.519 { 00:06:49.519 "nbd_device": "/dev/nbd0", 00:06:49.519 "bdev_name": "Nvme0n1" 00:06:49.519 }, 00:06:49.519 { 00:06:49.519 "nbd_device": "/dev/nbd1", 00:06:49.519 "bdev_name": "Nvme1n1p1" 00:06:49.519 }, 00:06:49.519 { 00:06:49.519 "nbd_device": "/dev/nbd2", 00:06:49.519 "bdev_name": "Nvme1n1p2" 00:06:49.519 }, 00:06:49.519 { 00:06:49.519 "nbd_device": "/dev/nbd3", 00:06:49.519 "bdev_name": "Nvme2n1" 00:06:49.519 }, 00:06:49.519 { 00:06:49.519 "nbd_device": "/dev/nbd4", 00:06:49.519 "bdev_name": "Nvme2n2" 00:06:49.519 }, 00:06:49.519 { 00:06:49.519 "nbd_device": "/dev/nbd5", 00:06:49.519 "bdev_name": "Nvme2n3" 00:06:49.519 }, 00:06:49.519 { 00:06:49.519 "nbd_device": "/dev/nbd6", 00:06:49.519 "bdev_name": "Nvme3n1" 00:06:49.519 } 00:06:49.519 ]' 00:06:49.519 23:27:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:06:49.519 23:27:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:49.519 23:27:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' 
'/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:06:49.519 23:27:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:49.519 23:27:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:49.519 23:27:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:49.519 23:27:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:49.781 23:27:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:49.781 23:27:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:49.781 23:27:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:49.781 23:27:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:49.781 23:27:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:49.781 23:27:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:49.781 23:27:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:49.781 23:27:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:49.781 23:27:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:49.781 23:27:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:49.781 23:27:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:49.781 23:27:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:49.781 23:27:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:49.781 23:27:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:49.781 23:27:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:49.781 23:27:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:49.781 23:27:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:49.781 23:27:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:49.781 23:27:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:49.781 23:27:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:06:50.042 23:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:06:50.042 23:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:06:50.042 23:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:06:50.042 23:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:50.042 23:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:50.042 23:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:06:50.042 23:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:50.042 23:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:50.042 23:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:50.042 23:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:06:50.303 23:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:06:50.303 23:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:06:50.303 23:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:06:50.303 23:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:50.303 23:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:50.303 23:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:06:50.303 23:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:50.303 23:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:50.303 23:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:50.303 23:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:06:50.565 23:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:06:50.565 23:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:06:50.565 23:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:06:50.565 23:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:50.565 23:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:50.565 23:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:06:50.565 23:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:50.565 23:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:50.565 23:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:50.565 23:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:06:50.826 23:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:06:50.826 23:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:06:50.826 23:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:06:50.826 23:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:50.826 23:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:50.826 23:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:06:50.826 23:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:50.826 23:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:50.826 23:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:50.826 23:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:06:51.087 23:27:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:06:51.087 23:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:06:51.087 23:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:06:51.087 23:27:39 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:51.088 23:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:51.088 23:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:06:51.088 23:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:51.088 23:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:51.088 23:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:51.088 23:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:51.088 23:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:51.088 23:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:51.088 23:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:51.088 23:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:51.088 23:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:51.088 23:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:51.088 23:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:51.088 23:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:51.088 23:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:51.088 23:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:51.088 23:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:06:51.088 23:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:06:51.088 23:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:06:51.088 23:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:06:51.088 23:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:51.088 23:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:51.088 23:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:51.088 23:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:51.088 23:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:51.088 23:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:06:51.088 23:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:51.088 23:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:51.088 23:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:51.088 23:27:39 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:51.088 23:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:51.088 23:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:06:51.088 23:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:51.088 23:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:51.088 23:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:06:51.348 /dev/nbd0 00:06:51.348 23:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:51.348 23:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:51.348 23:27:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:51.348 23:27:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:51.348 23:27:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:51.348 23:27:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:51.348 23:27:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:51.348 23:27:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:51.348 23:27:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:51.348 23:27:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:51.348 23:27:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:51.348 1+0 records in 00:06:51.348 1+0 records out 00:06:51.348 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000597688 s, 6.9 MB/s 00:06:51.348 23:27:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:51.348 23:27:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:51.348 23:27:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:51.348 23:27:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:51.348 23:27:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:51.348 23:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:51.348 23:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:51.348 23:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:06:51.608 /dev/nbd1 00:06:51.608 23:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:51.608 23:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:51.608 23:27:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:51.608 23:27:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:51.608 23:27:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:51.608 23:27:39 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:51.608 23:27:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:51.608 23:27:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:51.608 23:27:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:51.608 23:27:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:51.609 23:27:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:51.609 1+0 records in 00:06:51.609 1+0 records out 00:06:51.609 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000801017 s, 5.1 MB/s 00:06:51.609 23:27:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:51.609 23:27:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:51.609 23:27:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:51.609 23:27:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:51.609 23:27:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:51.609 23:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:51.609 23:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:51.609 23:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:06:51.888 /dev/nbd10 00:06:51.888 23:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:06:51.888 23:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:06:51.888 23:27:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:06:51.888 23:27:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:51.888 23:27:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:51.888 23:27:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:51.888 23:27:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:06:51.888 23:27:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:51.888 23:27:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:51.888 23:27:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:51.889 23:27:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:51.889 1+0 records in 00:06:51.889 1+0 records out 00:06:51.889 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000834077 s, 4.9 MB/s 00:06:51.889 23:27:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:51.889 23:27:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:51.889 23:27:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:51.889 23:27:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 
'!=' 0 ']' 00:06:51.889 23:27:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:51.889 23:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:51.889 23:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:51.889 23:27:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:06:52.148 /dev/nbd11 00:06:52.148 23:27:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:06:52.148 23:27:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:06:52.148 23:27:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:06:52.148 23:27:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:52.148 23:27:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:52.148 23:27:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:52.148 23:27:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:06:52.148 23:27:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:52.148 23:27:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:52.148 23:27:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:52.148 23:27:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:52.148 1+0 records in 00:06:52.148 1+0 records out 00:06:52.148 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000489888 s, 8.4 MB/s 00:06:52.148 23:27:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:52.148 23:27:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:52.148 23:27:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:52.148 23:27:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:52.148 23:27:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:52.148 23:27:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:52.148 23:27:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:52.148 23:27:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:06:52.406 /dev/nbd12 00:06:52.406 23:27:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:06:52.406 23:27:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:06:52.406 23:27:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:06:52.406 23:27:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:52.406 23:27:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:52.406 23:27:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:52.406 23:27:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 00:06:52.406 23:27:40 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:52.406 23:27:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:52.406 23:27:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:52.406 23:27:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:52.406 1+0 records in 00:06:52.406 1+0 records out 00:06:52.406 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000676891 s, 6.1 MB/s 00:06:52.406 23:27:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:52.406 23:27:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:52.406 23:27:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:52.406 23:27:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:52.406 23:27:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:52.406 23:27:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:52.406 23:27:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:52.406 23:27:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:06:52.664 /dev/nbd13 00:06:52.664 23:27:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:06:52.664 23:27:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:06:52.664 23:27:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:06:52.664 23:27:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:52.664 23:27:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:52.664 23:27:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:52.664 23:27:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:06:52.664 23:27:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:52.664 23:27:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:52.664 23:27:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:52.664 23:27:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:52.664 1+0 records in 00:06:52.664 1+0 records out 00:06:52.664 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000538723 s, 7.6 MB/s 00:06:52.664 23:27:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:52.664 23:27:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:52.664 23:27:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:52.664 23:27:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:52.664 23:27:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:52.664 23:27:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ 
)) 00:06:52.664 23:27:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:52.665 23:27:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:06:52.923 /dev/nbd14 00:06:52.923 23:27:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:06:52.923 23:27:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:06:52.923 23:27:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd14 00:06:52.923 23:27:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:52.923 23:27:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:52.923 23:27:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:52.923 23:27:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd14 /proc/partitions 00:06:52.923 23:27:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:52.923 23:27:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:52.923 23:27:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:52.923 23:27:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:52.923 1+0 records in 00:06:52.923 1+0 records out 00:06:52.923 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000442297 s, 9.3 MB/s 00:06:52.923 23:27:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:52.923 23:27:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:52.923 23:27:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:52.923 23:27:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:52.923 23:27:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:52.923 23:27:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:52.923 23:27:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:52.923 23:27:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:52.923 23:27:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:52.923 23:27:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:53.182 23:27:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:53.182 { 00:06:53.182 "nbd_device": "/dev/nbd0", 00:06:53.182 "bdev_name": "Nvme0n1" 00:06:53.182 }, 00:06:53.182 { 00:06:53.182 "nbd_device": "/dev/nbd1", 00:06:53.182 "bdev_name": "Nvme1n1p1" 00:06:53.182 }, 00:06:53.182 { 00:06:53.182 "nbd_device": "/dev/nbd10", 00:06:53.182 "bdev_name": "Nvme1n1p2" 00:06:53.182 }, 00:06:53.182 { 00:06:53.182 "nbd_device": "/dev/nbd11", 00:06:53.182 "bdev_name": "Nvme2n1" 00:06:53.182 }, 00:06:53.182 { 00:06:53.182 "nbd_device": "/dev/nbd12", 00:06:53.182 "bdev_name": "Nvme2n2" 00:06:53.182 }, 00:06:53.182 { 00:06:53.182 "nbd_device": "/dev/nbd13", 00:06:53.182 "bdev_name": "Nvme2n3" 00:06:53.182 }, 00:06:53.182 { 
00:06:53.182 "nbd_device": "/dev/nbd14", 00:06:53.182 "bdev_name": "Nvme3n1" 00:06:53.182 } 00:06:53.182 ]' 00:06:53.182 23:27:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:53.182 { 00:06:53.182 "nbd_device": "/dev/nbd0", 00:06:53.182 "bdev_name": "Nvme0n1" 00:06:53.182 }, 00:06:53.182 { 00:06:53.182 "nbd_device": "/dev/nbd1", 00:06:53.182 "bdev_name": "Nvme1n1p1" 00:06:53.182 }, 00:06:53.182 { 00:06:53.182 "nbd_device": "/dev/nbd10", 00:06:53.182 "bdev_name": "Nvme1n1p2" 00:06:53.182 }, 00:06:53.182 { 00:06:53.182 "nbd_device": "/dev/nbd11", 00:06:53.182 "bdev_name": "Nvme2n1" 00:06:53.182 }, 00:06:53.182 { 00:06:53.182 "nbd_device": "/dev/nbd12", 00:06:53.182 "bdev_name": "Nvme2n2" 00:06:53.182 }, 00:06:53.182 { 00:06:53.182 "nbd_device": "/dev/nbd13", 00:06:53.182 "bdev_name": "Nvme2n3" 00:06:53.182 }, 00:06:53.182 { 00:06:53.182 "nbd_device": "/dev/nbd14", 00:06:53.182 "bdev_name": "Nvme3n1" 00:06:53.182 } 00:06:53.182 ]' 00:06:53.182 23:27:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:53.182 23:27:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:53.182 /dev/nbd1 00:06:53.182 /dev/nbd10 00:06:53.182 /dev/nbd11 00:06:53.182 /dev/nbd12 00:06:53.182 /dev/nbd13 00:06:53.182 /dev/nbd14' 00:06:53.182 23:27:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:53.182 23:27:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:53.182 /dev/nbd1 00:06:53.182 /dev/nbd10 00:06:53.182 /dev/nbd11 00:06:53.182 /dev/nbd12 00:06:53.182 /dev/nbd13 00:06:53.182 /dev/nbd14' 00:06:53.182 23:27:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:06:53.182 23:27:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:06:53.182 23:27:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:06:53.182 23:27:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:06:53.182 23:27:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:06:53.182 23:27:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:53.182 23:27:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:53.182 23:27:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:53.182 23:27:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:53.182 23:27:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:53.182 23:27:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:06:53.182 256+0 records in 00:06:53.182 256+0 records out 00:06:53.182 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0112368 s, 93.3 MB/s 00:06:53.182 23:27:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:53.182 23:27:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:53.182 256+0 records in 00:06:53.182 256+0 records out 00:06:53.182 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0790705 s, 13.3 MB/s 
00:06:53.182 23:27:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:53.182 23:27:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:53.182 256+0 records in 00:06:53.182 256+0 records out 00:06:53.182 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0916851 s, 11.4 MB/s 00:06:53.182 23:27:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:53.182 23:27:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:06:53.441 256+0 records in 00:06:53.441 256+0 records out 00:06:53.441 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0793618 s, 13.2 MB/s 00:06:53.441 23:27:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:53.441 23:27:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:06:53.441 256+0 records in 00:06:53.441 256+0 records out 00:06:53.441 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0781271 s, 13.4 MB/s 00:06:53.441 23:27:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:53.441 23:27:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:06:53.441 256+0 records in 00:06:53.441 256+0 records out 00:06:53.441 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0720872 s, 14.5 MB/s 00:06:53.441 23:27:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:53.441 23:27:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:06:53.699 256+0 records in 00:06:53.699 256+0 records out 00:06:53.699 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.082295 s, 12.7 MB/s 00:06:53.699 23:27:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:53.699 23:27:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:06:53.699 256+0 records in 00:06:53.699 256+0 records out 00:06:53.699 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0740464 s, 14.2 MB/s 00:06:53.699 23:27:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:06:53.699 23:27:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:53.699 23:27:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:53.699 23:27:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:53.699 23:27:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:53.699 23:27:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:53.699 23:27:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:53.699 23:27:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 
00:06:53.699 23:27:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:06:53.699 23:27:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:53.699 23:27:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:06:53.699 23:27:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:53.699 23:27:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:06:53.699 23:27:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:53.700 23:27:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:06:53.700 23:27:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:53.700 23:27:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:06:53.700 23:27:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:53.700 23:27:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:06:53.700 23:27:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:53.700 23:27:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:06:53.700 23:27:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:53.700 23:27:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:06:53.700 23:27:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:53.700 23:27:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:53.700 23:27:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:53.700 23:27:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:53.700 23:27:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:53.700 23:27:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:53.958 23:27:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:53.958 23:27:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:53.958 23:27:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:53.958 23:27:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:53.958 23:27:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:53.958 23:27:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:53.958 23:27:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:53.958 23:27:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 
0 00:06:53.958 23:27:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:53.958 23:27:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:54.216 23:27:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:54.216 23:27:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:54.216 23:27:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:54.216 23:27:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:54.216 23:27:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:54.216 23:27:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:54.216 23:27:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:54.216 23:27:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:54.216 23:27:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:54.216 23:27:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:06:54.475 23:27:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:06:54.475 23:27:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:06:54.475 23:27:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:06:54.475 23:27:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:54.475 23:27:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:54.475 23:27:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:06:54.475 23:27:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:54.475 23:27:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:54.475 23:27:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:54.475 23:27:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:06:54.475 23:27:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:06:54.475 23:27:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:06:54.475 23:27:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:06:54.475 23:27:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:54.475 23:27:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:54.475 23:27:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:06:54.475 23:27:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:54.475 23:27:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:54.475 23:27:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:54.475 23:27:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:06:54.734 23:27:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:06:54.734 23:27:42 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:06:54.734 23:27:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:06:54.734 23:27:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:54.734 23:27:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:54.734 23:27:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:06:54.734 23:27:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:54.734 23:27:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:54.734 23:27:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:54.734 23:27:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:06:54.992 23:27:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:06:54.992 23:27:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:06:54.992 23:27:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:06:54.992 23:27:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:54.992 23:27:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:54.992 23:27:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:06:54.992 23:27:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:54.992 23:27:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:54.992 23:27:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:54.992 23:27:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:06:55.253 23:27:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:06:55.253 23:27:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:06:55.253 23:27:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:06:55.253 23:27:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:55.253 23:27:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:55.253 23:27:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:06:55.253 23:27:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:55.253 23:27:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:55.253 23:27:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:55.253 23:27:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:55.253 23:27:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:55.529 23:27:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:55.529 23:27:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:55.529 23:27:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:55.529 23:27:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:55.529 
23:27:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:55.529 23:27:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:55.529 23:27:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:55.529 23:27:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:55.529 23:27:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:55.529 23:27:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:06:55.529 23:27:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:55.529 23:27:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:06:55.529 23:27:43 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:55.529 23:27:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:55.529 23:27:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:06:55.529 23:27:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:06:55.790 malloc_lvol_verify 00:06:55.790 23:27:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:06:56.052 2ea1b2ee-9b5f-4a84-97f2-8672e030878c 00:06:56.052 23:27:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:06:56.052 8b79c249-0f55-46ef-9503-737b3198fec9 00:06:56.052 23:27:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:06:56.312 /dev/nbd0 00:06:56.312 23:27:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:06:56.312 23:27:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:06:56.312 23:27:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:06:56.312 23:27:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:06:56.312 23:27:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:06:56.312 mke2fs 1.47.0 (5-Feb-2023) 00:06:56.312 Discarding device blocks: 0/4096 done 00:06:56.312 Creating filesystem with 4096 1k blocks and 1024 inodes 00:06:56.312 00:06:56.312 Allocating group tables: 0/1 done 00:06:56.312 Writing inode tables: 0/1 done 00:06:56.312 Creating journal (1024 blocks): done 00:06:56.312 Writing superblocks and filesystem accounting information: 0/1 done 00:06:56.312 00:06:56.312 23:27:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:56.312 23:27:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:56.312 23:27:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:56.312 23:27:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:56.312 23:27:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:56.312 23:27:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:56.312 23:27:44 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:56.572 23:27:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:56.572 23:27:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:56.572 23:27:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:56.572 23:27:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:56.572 23:27:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:56.572 23:27:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:56.572 23:27:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:56.573 23:27:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:56.573 23:27:44 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 61940 00:06:56.573 23:27:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 61940 ']' 00:06:56.573 23:27:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 61940 00:06:56.573 23:27:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:06:56.573 23:27:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:56.573 23:27:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61940 00:06:56.573 23:27:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:56.573 23:27:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:56.573 23:27:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61940' 00:06:56.573 killing process with pid 61940 00:06:56.573 23:27:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@969 -- # kill 61940 00:06:56.573 23:27:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@974 -- # wait 61940 00:06:57.509 23:27:45 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:06:57.509 00:06:57.509 real 0m11.368s 00:06:57.509 user 0m15.986s 00:06:57.509 sys 0m3.728s 00:06:57.509 23:27:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:57.509 23:27:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:57.509 ************************************ 00:06:57.509 END TEST bdev_nbd 00:06:57.509 ************************************ 00:06:57.509 skipping fio tests on NVMe due to multi-ns failures. 00:06:57.509 23:27:45 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:06:57.509 23:27:45 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = nvme ']' 00:06:57.509 23:27:45 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = gpt ']' 00:06:57.509 23:27:45 blockdev_nvme_gpt -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
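The teardown just above is the suite's generic killprocess helper. A sketch of the path the trace takes (pid 61940, comm reactor_0, Linux); the branch taken when ps reports the process as sudo is never exercised in this log, so skipping the kill in that case is an assumption, not the helper's documented behavior.

killprocess() {
    local pid=$1 process_name=
    [ -z "$pid" ] && return 1
    kill -0 "$pid" 2>/dev/null || return 1    # still alive?
    if [ "$(uname)" = Linux ]; then
        process_name=$(ps --no-headers -o comm= "$pid")   # reactor_0 in the run above
    fi
    # The sudo case is not visible in this excerpt; the trace only shows
    # the plain-process branch below.
    if [ "$process_name" != sudo ]; then
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"   # reap the child so its exit status is observed
    fi
}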
00:06:57.509 23:27:45 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:57.509 23:27:45 blockdev_nvme_gpt -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:57.509 23:27:45 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:06:57.509 23:27:45 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:57.509 23:27:45 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:57.509 ************************************ 00:06:57.509 START TEST bdev_verify 00:06:57.509 ************************************ 00:06:57.509 23:27:45 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:57.769 [2024-09-28 23:27:45.715013] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:06:57.769 [2024-09-28 23:27:45.715130] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62351 ] 00:06:57.769 [2024-09-28 23:27:45.865948] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:58.029 [2024-09-28 23:27:46.071643] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:58.029 [2024-09-28 23:27:46.071770] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.596 Running I/O for 5 seconds... 
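While that five-second run completes, the command behind it is worth spelling out once, since every remaining test in this log is a variation on it. The flags are copied from the trace; only the comments are added, and the reading of -C is inferred from the paired per-core jobs in the results below.

SPDK=/home/vagrant/spdk_repo/spdk
"$SPDK/build/examples/bdevperf" --json "$SPDK/test/bdev/bdev.json" \
    -q 128 -o 4096 -w verify -t 5 -C -m 0x3
# -q 128     keep 128 I/Os outstanding per job
# -o 4096    4 KiB per I/O
# -w verify  write a pattern, read it back, compare
# -t 5       run for 5 seconds
# -C         give every core in the mask a job on every bdev
# -m 0x3     cores 0 and 1 -- hence the 0x1/0x2 job pairs per device below

The MiB/s column in the output is derived rather than separately measured: 23552 IOPS x 4096 bytes per I/O is exactly 92.00 MiB/s.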
00:07:03.706 23552.00 IOPS, 92.00 MiB/s 22688.00 IOPS, 88.62 MiB/s 22912.00 IOPS, 89.50 MiB/s 23376.00 IOPS, 91.31 MiB/s 23692.80 IOPS, 92.55 MiB/s 00:07:03.706 Latency(us) 00:07:03.706 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:03.706 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:03.706 Verification LBA range: start 0x0 length 0xbd0bd 00:07:03.706 Nvme0n1 : 5.04 1727.91 6.75 0.00 0.00 73850.50 15224.52 73400.32 00:07:03.706 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:03.706 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:07:03.706 Nvme0n1 : 5.05 1596.38 6.24 0.00 0.00 79847.96 17039.36 87919.06 00:07:03.706 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:03.706 Verification LBA range: start 0x0 length 0x4ff80 00:07:03.706 Nvme1n1p1 : 5.04 1727.48 6.75 0.00 0.00 73756.94 17442.66 66544.25 00:07:03.706 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:03.706 Verification LBA range: start 0x4ff80 length 0x4ff80 00:07:03.706 Nvme1n1p1 : 5.05 1595.84 6.23 0.00 0.00 79656.14 18955.03 75013.51 00:07:03.706 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:03.706 Verification LBA range: start 0x0 length 0x4ff7f 00:07:03.706 Nvme1n1p2 : 5.06 1733.03 6.77 0.00 0.00 73394.63 5873.03 61704.66 00:07:03.706 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:03.706 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:07:03.706 Nvme1n1p2 : 5.08 1601.21 6.25 0.00 0.00 79230.41 6503.19 68964.04 00:07:03.706 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:03.706 Verification LBA range: start 0x0 length 0x80000 00:07:03.706 Nvme2n1 : 5.06 1732.46 6.77 0.00 0.00 73315.93 6704.84 57268.38 00:07:03.706 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:03.706 Verification LBA range: start 0x80000 length 0x80000 00:07:03.706 Nvme2n1 : 5.09 1610.17 6.29 0.00 0.00 78774.42 9175.04 61301.37 00:07:03.706 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:03.706 Verification LBA range: start 0x0 length 0x80000 00:07:03.706 Nvme2n2 : 5.07 1741.61 6.80 0.00 0.00 72930.96 7914.73 55655.19 00:07:03.706 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:03.706 Verification LBA range: start 0x80000 length 0x80000 00:07:03.706 Nvme2n2 : 5.09 1609.74 6.29 0.00 0.00 78644.95 9427.10 64124.46 00:07:03.706 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:03.706 Verification LBA range: start 0x0 length 0x80000 00:07:03.706 Nvme2n3 : 5.07 1741.17 6.80 0.00 0.00 72804.69 7763.50 57671.68 00:07:03.706 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:03.706 Verification LBA range: start 0x80000 length 0x80000 00:07:03.706 Nvme2n3 : 5.09 1609.32 6.29 0.00 0.00 78527.05 9830.40 66140.95 00:07:03.706 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:03.706 Verification LBA range: start 0x0 length 0x20000 00:07:03.706 Nvme3n1 : 5.07 1740.76 6.80 0.00 0.00 72667.87 7965.14 59688.17 00:07:03.706 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:03.706 Verification LBA range: start 0x20000 length 0x20000 00:07:03.706 Nvme3n1 : 5.09 1608.89 6.28 0.00 0.00 78453.46 10183.29 68560.74 00:07:03.706 
=================================================================================================================== 00:07:03.706 Total : 23375.95 91.31 0.00 0.00 76021.94 5873.03 87919.06 00:07:05.086 00:07:05.086 real 0m7.453s 00:07:05.086 user 0m13.818s 00:07:05.086 sys 0m0.222s 00:07:05.086 23:27:53 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:05.086 23:27:53 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:07:05.086 ************************************ 00:07:05.086 END TEST bdev_verify 00:07:05.086 ************************************ 00:07:05.086 23:27:53 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:05.086 23:27:53 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:07:05.086 23:27:53 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:05.086 23:27:53 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:05.086 ************************************ 00:07:05.086 START TEST bdev_verify_big_io 00:07:05.086 ************************************ 00:07:05.086 23:27:53 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:05.086 [2024-09-28 23:27:53.201126] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:07:05.086 [2024-09-28 23:27:53.201242] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62449 ] 00:07:05.346 [2024-09-28 23:27:53.348092] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:05.604 [2024-09-28 23:27:53.533462] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:05.604 [2024-09-28 23:27:53.533528] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.170 Running I/O for 5 seconds... 
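Each of these tests is driven through the same run_test wrapper that prints the starred banners and the real/user/sys timing. Its actual definition lives in autotest_common.sh and is not reproduced in this log, so the following is only a behavioral approximation: banner width differs and the xtrace_disable handling visible in the trace is omitted.

run_test() {
    local test_name=$1 rc
    shift
    [ "$#" -le 1 ] && return 1   # matches the '[' 16 -le 1 ']' guard in the trace
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    time "$@"                    # produces the real/user/sys lines seen above
    rc=$?
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
    return $rc
}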
00:07:12.813 1235.00 IOPS, 77.19 MiB/s 2900.50 IOPS, 181.28 MiB/s 00:07:12.813 Latency(us) 00:07:12.813 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:12.813 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:12.813 Verification LBA range: start 0x0 length 0xbd0b 00:07:12.813 Nvme0n1 : 5.70 123.59 7.72 0.00 0.00 987848.12 17644.31 942105.21 00:07:12.813 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:12.813 Verification LBA range: start 0xbd0b length 0xbd0b 00:07:12.813 Nvme0n1 : 5.88 68.07 4.25 0.00 0.00 1806456.61 18350.08 2090699.22 00:07:12.813 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:12.813 Verification LBA range: start 0x0 length 0x4ff8 00:07:12.813 Nvme1n1p1 : 5.79 132.98 8.31 0.00 0.00 903164.64 83482.78 903388.55 00:07:12.813 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:12.813 Verification LBA range: start 0x4ff8 length 0x4ff8 00:07:12.813 Nvme1n1p1 : 5.91 83.52 5.22 0.00 0.00 1387068.73 31053.98 1471232.79 00:07:12.813 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:12.813 Verification LBA range: start 0x0 length 0x4ff7 00:07:12.813 Nvme1n1p2 : 5.71 134.54 8.41 0.00 0.00 882066.25 116956.55 764653.88 00:07:12.813 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:12.813 Verification LBA range: start 0x4ff7 length 0x4ff7 00:07:12.813 Nvme1n1p2 : 5.92 86.53 5.41 0.00 0.00 1265599.80 36700.16 1348630.06 00:07:12.813 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:12.813 Verification LBA range: start 0x0 length 0x8000 00:07:12.813 Nvme2n1 : 5.83 136.78 8.55 0.00 0.00 845089.66 80659.69 1232480.10 00:07:12.813 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:12.813 Verification LBA range: start 0x8000 length 0x8000 00:07:12.813 Nvme2n1 : 6.07 106.18 6.64 0.00 0.00 994623.89 27424.30 1432516.14 00:07:12.813 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:12.813 Verification LBA range: start 0x0 length 0x8000 00:07:12.813 Nvme2n2 : 5.83 142.40 8.90 0.00 0.00 800181.56 34078.72 987274.63 00:07:12.813 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:12.813 Verification LBA range: start 0x8000 length 0x8000 00:07:12.813 Nvme2n2 : 6.23 140.10 8.76 0.00 0.00 727275.55 16837.71 1438968.91 00:07:12.813 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:12.813 Verification LBA range: start 0x0 length 0x8000 00:07:12.813 Nvme2n3 : 5.84 149.18 9.32 0.00 0.00 749414.55 4587.52 1000180.18 00:07:12.813 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:12.813 Verification LBA range: start 0x8000 length 0x8000 00:07:12.813 Nvme2n3 : 6.41 194.82 12.18 0.00 0.00 505739.34 8469.27 1458327.24 00:07:12.813 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:12.813 Verification LBA range: start 0x0 length 0x2000 00:07:12.813 Nvme3n1 : 5.84 153.85 9.62 0.00 0.00 709117.39 3276.80 1013085.74 00:07:12.813 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:12.813 Verification LBA range: start 0x2000 length 0x2000 00:07:12.813 Nvme3n1 : 6.59 299.53 18.72 0.00 0.00 317264.27 683.72 1619646.62 00:07:12.813 =================================================================================================================== 00:07:12.813 
Total : 1952.06 122.00 0.00 0.00 787959.30 683.72 2090699.22 00:07:14.718 00:07:14.718 real 0m9.623s 00:07:14.718 user 0m18.048s 00:07:14.718 sys 0m0.289s 00:07:14.718 23:28:02 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:14.718 23:28:02 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:07:14.718 ************************************ 00:07:14.718 END TEST bdev_verify_big_io 00:07:14.718 ************************************ 00:07:14.718 23:28:02 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:14.718 23:28:02 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:07:14.718 23:28:02 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:14.718 23:28:02 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:14.719 ************************************ 00:07:14.719 START TEST bdev_write_zeroes 00:07:14.719 ************************************ 00:07:14.719 23:28:02 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:14.719 [2024-09-28 23:28:02.872977] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:07:14.719 [2024-09-28 23:28:02.873107] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62570 ] 00:07:14.980 [2024-09-28 23:28:03.022396] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.240 [2024-09-28 23:28:03.230842] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.809 Running I/O for 1 seconds... 
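A write_zeroes workload only exercises what a bdev advertises, and the GPT bdev dump further down in this log shows the relevant flag ("write_zeroes": true under supported_io_types). A quick way to check it up front over the RPC socket, assuming a running SPDK target on the default socket and jq on the PATH; the jq path mirrors that JSON:

SPDK=/home/vagrant/spdk_repo/spdk
"$SPDK/scripts/rpc.py" bdev_get_bdevs -b Nvme1n1p1 \
    | jq -e '.[0].supported_io_types.write_zeroes' >/dev/null \
    && echo "Nvme1n1p1 supports write_zeroes"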
00:07:16.747 63616.00 IOPS, 248.50 MiB/s 00:07:16.747 Latency(us) 00:07:16.747 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:16.747 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:16.747 Nvme0n1 : 1.02 9081.85 35.48 0.00 0.00 14058.04 7208.96 25407.80 00:07:16.747 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:16.747 Nvme1n1p1 : 1.02 9070.66 35.43 0.00 0.00 14046.45 11191.53 25004.50 00:07:16.747 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:16.747 Nvme1n1p2 : 1.02 9059.50 35.39 0.00 0.00 14031.76 11342.77 24298.73 00:07:16.747 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:16.747 Nvme2n1 : 1.03 9049.12 35.35 0.00 0.00 14013.40 10788.23 23492.14 00:07:16.747 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:16.747 Nvme2n2 : 1.03 9038.94 35.31 0.00 0.00 14003.03 10334.52 22988.01 00:07:16.747 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:16.747 Nvme2n3 : 1.03 9028.72 35.27 0.00 0.00 13993.31 9981.64 23794.61 00:07:16.747 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:16.747 Nvme3n1 : 1.03 9018.37 35.23 0.00 0.00 13982.06 9981.64 25710.28 00:07:16.747 =================================================================================================================== 00:07:16.747 Total : 63347.16 247.45 0.00 0.00 14018.29 7208.96 25710.28 00:07:17.689 00:07:17.689 real 0m2.920s 00:07:17.689 user 0m2.591s 00:07:17.689 sys 0m0.212s 00:07:17.689 23:28:05 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:17.689 23:28:05 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:07:17.689 ************************************ 00:07:17.689 END TEST bdev_write_zeroes 00:07:17.689 ************************************ 00:07:17.689 23:28:05 blockdev_nvme_gpt -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:17.689 23:28:05 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:07:17.689 23:28:05 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:17.689 23:28:05 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:17.689 ************************************ 00:07:17.689 START TEST bdev_json_nonenclosed 00:07:17.689 ************************************ 00:07:17.689 23:28:05 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:17.689 [2024-09-28 23:28:05.835606] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:07:17.689 [2024-09-28 23:28:05.835720] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62623 ] 00:07:17.950 [2024-09-28 23:28:05.982559] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.211 [2024-09-28 23:28:06.158922] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.211 [2024-09-28 23:28:06.158998] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:07:18.211 [2024-09-28 23:28:06.159016] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:18.211 [2024-09-28 23:28:06.159025] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:18.473 00:07:18.473 real 0m0.675s 00:07:18.473 user 0m0.460s 00:07:18.473 sys 0m0.110s 00:07:18.473 23:28:06 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:18.473 23:28:06 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:07:18.473 ************************************ 00:07:18.473 END TEST bdev_json_nonenclosed 00:07:18.473 ************************************ 00:07:18.473 23:28:06 blockdev_nvme_gpt -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:18.473 23:28:06 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:07:18.473 23:28:06 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:18.473 23:28:06 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:18.473 ************************************ 00:07:18.473 START TEST bdev_json_nonarray 00:07:18.473 ************************************ 00:07:18.473 23:28:06 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:18.473 [2024-09-28 23:28:06.551199] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:07:18.473 [2024-09-28 23:28:06.551314] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62654 ] 00:07:18.734 [2024-09-28 23:28:06.692085] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.734 [2024-09-28 23:28:06.866928] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.734 [2024-09-28 23:28:06.867014] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:07:18.734 [2024-09-28 23:28:06.867032] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:18.734 [2024-09-28 23:28:06.867040] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:18.995 00:07:18.995 real 0m0.660s 00:07:18.995 user 0m0.474s 00:07:18.995 sys 0m0.081s 00:07:18.995 23:28:07 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:18.995 23:28:07 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:07:18.995 ************************************ 00:07:18.995 END TEST bdev_json_nonarray 00:07:18.995 ************************************ 00:07:19.257 23:28:07 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # [[ gpt == bdev ]] 00:07:19.257 23:28:07 blockdev_nvme_gpt -- bdev/blockdev.sh@793 -- # [[ gpt == gpt ]] 00:07:19.257 23:28:07 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:07:19.257 23:28:07 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:19.257 23:28:07 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:19.257 23:28:07 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:19.257 ************************************ 00:07:19.257 START TEST bdev_gpt_uuid 00:07:19.257 ************************************ 00:07:19.257 23:28:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1125 -- # bdev_gpt_uuid 00:07:19.257 23:28:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@613 -- # local bdev 00:07:19.257 23:28:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@615 -- # start_spdk_tgt 00:07:19.257 23:28:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=62685 00:07:19.257 23:28:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:19.257 23:28:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 62685 00:07:19.257 23:28:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@831 -- # '[' -z 62685 ']' 00:07:19.257 23:28:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:19.257 23:28:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:19.257 23:28:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:19.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:19.257 23:28:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:19.257 23:28:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:19.257 23:28:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:19.257 [2024-09-28 23:28:07.265962] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:07:19.257 [2024-09-28 23:28:07.266079] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62685 ] 00:07:19.257 [2024-09-28 23:28:07.414097] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.519 [2024-09-28 23:28:07.588796] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.092 23:28:08 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:20.092 23:28:08 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # return 0 00:07:20.092 23:28:08 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@617 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:20.092 23:28:08 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.092 23:28:08 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:20.667 Some configs were skipped because the RPC state that can call them passed over. 00:07:20.667 23:28:08 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.667 23:28:08 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd bdev_wait_for_examine 00:07:20.667 23:28:08 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.667 23:28:08 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:20.667 23:28:08 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.667 23:28:08 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:07:20.667 23:28:08 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.667 23:28:08 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:20.667 23:28:08 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.667 23:28:08 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # bdev='[ 00:07:20.667 { 00:07:20.667 "name": "Nvme1n1p1", 00:07:20.667 "aliases": [ 00:07:20.667 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:07:20.667 ], 00:07:20.667 "product_name": "GPT Disk", 00:07:20.667 "block_size": 4096, 00:07:20.667 "num_blocks": 655104, 00:07:20.667 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:07:20.667 "assigned_rate_limits": { 00:07:20.667 "rw_ios_per_sec": 0, 00:07:20.667 "rw_mbytes_per_sec": 0, 00:07:20.667 "r_mbytes_per_sec": 0, 00:07:20.667 "w_mbytes_per_sec": 0 00:07:20.667 }, 00:07:20.667 "claimed": false, 00:07:20.667 "zoned": false, 00:07:20.667 "supported_io_types": { 00:07:20.667 "read": true, 00:07:20.667 "write": true, 00:07:20.667 "unmap": true, 00:07:20.667 "flush": true, 00:07:20.667 "reset": true, 00:07:20.667 "nvme_admin": false, 00:07:20.667 "nvme_io": false, 00:07:20.667 "nvme_io_md": false, 00:07:20.667 "write_zeroes": true, 00:07:20.667 "zcopy": false, 00:07:20.667 "get_zone_info": false, 00:07:20.667 "zone_management": false, 00:07:20.667 "zone_append": false, 00:07:20.667 "compare": true, 00:07:20.667 "compare_and_write": false, 00:07:20.667 "abort": true, 00:07:20.667 "seek_hole": false, 00:07:20.667 "seek_data": false, 00:07:20.667 "copy": true, 00:07:20.667 "nvme_iov_md": false 00:07:20.667 }, 00:07:20.667 "driver_specific": { 
00:07:20.667 "gpt": { 00:07:20.667 "base_bdev": "Nvme1n1", 00:07:20.667 "offset_blocks": 256, 00:07:20.667 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:07:20.667 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:07:20.667 "partition_name": "SPDK_TEST_first" 00:07:20.667 } 00:07:20.667 } 00:07:20.667 } 00:07:20.667 ]' 00:07:20.667 23:28:08 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # jq -r length 00:07:20.667 23:28:08 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # [[ 1 == \1 ]] 00:07:20.667 23:28:08 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r '.[0].aliases[0]' 00:07:20.667 23:28:08 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:20.667 23:28:08 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:20.667 23:28:08 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:20.667 23:28:08 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:07:20.667 23:28:08 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.667 23:28:08 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:20.667 23:28:08 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.667 23:28:08 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # bdev='[ 00:07:20.667 { 00:07:20.667 "name": "Nvme1n1p2", 00:07:20.667 "aliases": [ 00:07:20.667 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:07:20.667 ], 00:07:20.667 "product_name": "GPT Disk", 00:07:20.667 "block_size": 4096, 00:07:20.667 "num_blocks": 655103, 00:07:20.667 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:20.667 "assigned_rate_limits": { 00:07:20.667 "rw_ios_per_sec": 0, 00:07:20.667 "rw_mbytes_per_sec": 0, 00:07:20.667 "r_mbytes_per_sec": 0, 00:07:20.667 "w_mbytes_per_sec": 0 00:07:20.667 }, 00:07:20.667 "claimed": false, 00:07:20.667 "zoned": false, 00:07:20.667 "supported_io_types": { 00:07:20.667 "read": true, 00:07:20.667 "write": true, 00:07:20.667 "unmap": true, 00:07:20.667 "flush": true, 00:07:20.667 "reset": true, 00:07:20.667 "nvme_admin": false, 00:07:20.667 "nvme_io": false, 00:07:20.667 "nvme_io_md": false, 00:07:20.667 "write_zeroes": true, 00:07:20.667 "zcopy": false, 00:07:20.667 "get_zone_info": false, 00:07:20.667 "zone_management": false, 00:07:20.667 "zone_append": false, 00:07:20.667 "compare": true, 00:07:20.667 "compare_and_write": false, 00:07:20.667 "abort": true, 00:07:20.667 "seek_hole": false, 00:07:20.667 "seek_data": false, 00:07:20.667 "copy": true, 00:07:20.667 "nvme_iov_md": false 00:07:20.667 }, 00:07:20.667 "driver_specific": { 00:07:20.667 "gpt": { 00:07:20.667 "base_bdev": "Nvme1n1", 00:07:20.667 "offset_blocks": 655360, 00:07:20.667 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:07:20.667 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:20.667 "partition_name": "SPDK_TEST_second" 00:07:20.667 } 00:07:20.667 } 00:07:20.667 } 00:07:20.667 ]' 00:07:20.667 23:28:08 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # jq -r length 00:07:20.667 23:28:08 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@626 -- # [[ 1 == \1 ]] 00:07:20.667 23:28:08 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r '.[0].aliases[0]' 00:07:20.667 23:28:08 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:20.667 23:28:08 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:20.667 23:28:08 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:20.667 23:28:08 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@630 -- # killprocess 62685 00:07:20.667 23:28:08 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@950 -- # '[' -z 62685 ']' 00:07:20.667 23:28:08 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # kill -0 62685 00:07:20.667 23:28:08 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@955 -- # uname 00:07:20.667 23:28:08 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:20.667 23:28:08 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62685 00:07:20.667 killing process with pid 62685 00:07:20.667 23:28:08 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:20.667 23:28:08 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:20.667 23:28:08 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62685' 00:07:20.667 23:28:08 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@969 -- # kill 62685 00:07:20.667 23:28:08 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@974 -- # wait 62685 00:07:22.577 00:07:22.577 real 0m3.197s 00:07:22.577 user 0m3.332s 00:07:22.577 sys 0m0.384s 00:07:22.577 23:28:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:22.577 23:28:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:22.577 ************************************ 00:07:22.577 END TEST bdev_gpt_uuid 00:07:22.577 ************************************ 00:07:22.577 23:28:10 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # [[ gpt == crypto_sw ]] 00:07:22.577 23:28:10 blockdev_nvme_gpt -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:07:22.577 23:28:10 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # cleanup 00:07:22.577 23:28:10 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:07:22.577 23:28:10 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:22.577 23:28:10 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:07:22.577 23:28:10 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:07:22.577 23:28:10 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:07:22.577 23:28:10 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:22.577 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:22.835 Waiting for block devices as requested 00:07:22.835 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:22.835 0000:00:10.0 (1b36 0010): 
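The bdev_gpt_uuid checks traced above reduce to three jq assertions over the bdev_get_bdevs RPC output: exactly one bdev comes back, and its alias equals its GPT unique_partition_guid. A minimal standalone sketch of the same check, assuming the JSON shape printed above; the rpc path and bdev name are illustrative, not taken from the test:
# Sketch: assert a GPT partition bdev's alias matches its unique_partition_guid.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # assumed location of the RPC client
bdev=Nvme1n1p2                                    # assumed bdev name
json=$("$rpc" bdev_get_bdevs -b "$bdev")
[[ $(jq -r 'length' <<<"$json") == 1 ]] || exit 1   # exactly one bdev returned
alias_uuid=$(jq -r '.[0].aliases[0]' <<<"$json")
part_guid=$(jq -r '.[0].driver_specific.gpt.unique_partition_guid' <<<"$json")
[[ "$alias_uuid" == "$part_guid" ]] || exit 1       # alias must equal the partition GUID
0000:00:10.0 (1b36 0010):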
uio_pci_generic -> nvme 00:07:22.835 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:07:23.094 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:28.372 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:07:28.372 23:28:16 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:07:28.372 23:28:16 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:07:28.372 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:07:28.372 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:07:28.372 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:07:28.372 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:07:28.372 23:28:16 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:07:28.372 00:07:28.372 real 0m59.806s 00:07:28.372 user 1m16.129s 00:07:28.372 sys 0m8.618s 00:07:28.372 23:28:16 blockdev_nvme_gpt -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:28.372 ************************************ 00:07:28.372 END TEST blockdev_nvme_gpt 00:07:28.372 ************************************ 00:07:28.372 23:28:16 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:28.372 23:28:16 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:07:28.372 23:28:16 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:28.372 23:28:16 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:28.372 23:28:16 -- common/autotest_common.sh@10 -- # set +x 00:07:28.372 ************************************ 00:07:28.372 START TEST nvme 00:07:28.372 ************************************ 00:07:28.372 23:28:16 nvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:07:28.632 * Looking for test storage... 00:07:28.632 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:07:28.632 23:28:16 nvme -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:28.632 23:28:16 nvme -- common/autotest_common.sh@1681 -- # lcov --version 00:07:28.632 23:28:16 nvme -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:28.632 23:28:16 nvme -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:28.632 23:28:16 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:28.632 23:28:16 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:28.632 23:28:16 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:28.632 23:28:16 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:07:28.632 23:28:16 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:07:28.632 23:28:16 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:07:28.632 23:28:16 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:07:28.632 23:28:16 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:07:28.632 23:28:16 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:07:28.632 23:28:16 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:07:28.632 23:28:16 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:28.632 23:28:16 nvme -- scripts/common.sh@344 -- # case "$op" in 00:07:28.632 23:28:16 nvme -- scripts/common.sh@345 -- # : 1 00:07:28.632 23:28:16 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:28.632 23:28:16 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
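The wipefs lines above record exactly what setup.sh reset erased: the eight-byte GPT magic at the primary header (offset 0x1000) and the backup header, plus the 55 aa protective-MBR boot signature at offset 0x1fe. The GPT bytes 45 46 49 20 50 41 52 54 decode to the ASCII string "EFI PART", which is quick to confirm in the shell:
# Decode the erased GPT signature bytes reported by wipefs above.
printf '\x45\x46\x49\x20\x50\x41\x52\x54\n'   # prints: EFI PART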
ver1_l : ver2_l) )) 00:07:28.632 23:28:16 nvme -- scripts/common.sh@365 -- # decimal 1 00:07:28.632 23:28:16 nvme -- scripts/common.sh@353 -- # local d=1 00:07:28.632 23:28:16 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:28.632 23:28:16 nvme -- scripts/common.sh@355 -- # echo 1 00:07:28.632 23:28:16 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:07:28.632 23:28:16 nvme -- scripts/common.sh@366 -- # decimal 2 00:07:28.632 23:28:16 nvme -- scripts/common.sh@353 -- # local d=2 00:07:28.632 23:28:16 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:28.632 23:28:16 nvme -- scripts/common.sh@355 -- # echo 2 00:07:28.632 23:28:16 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:07:28.632 23:28:16 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:28.632 23:28:16 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:28.632 23:28:16 nvme -- scripts/common.sh@368 -- # return 0 00:07:28.632 23:28:16 nvme -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:28.632 23:28:16 nvme -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:28.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:28.632 --rc genhtml_branch_coverage=1 00:07:28.632 --rc genhtml_function_coverage=1 00:07:28.632 --rc genhtml_legend=1 00:07:28.632 --rc geninfo_all_blocks=1 00:07:28.632 --rc geninfo_unexecuted_blocks=1 00:07:28.632 00:07:28.632 ' 00:07:28.632 23:28:16 nvme -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:28.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:28.632 --rc genhtml_branch_coverage=1 00:07:28.632 --rc genhtml_function_coverage=1 00:07:28.632 --rc genhtml_legend=1 00:07:28.632 --rc geninfo_all_blocks=1 00:07:28.632 --rc geninfo_unexecuted_blocks=1 00:07:28.632 00:07:28.632 ' 00:07:28.632 23:28:16 nvme -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:28.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:28.632 --rc genhtml_branch_coverage=1 00:07:28.632 --rc genhtml_function_coverage=1 00:07:28.632 --rc genhtml_legend=1 00:07:28.632 --rc geninfo_all_blocks=1 00:07:28.632 --rc geninfo_unexecuted_blocks=1 00:07:28.632 00:07:28.632 ' 00:07:28.632 23:28:16 nvme -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:28.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:28.632 --rc genhtml_branch_coverage=1 00:07:28.632 --rc genhtml_function_coverage=1 00:07:28.632 --rc genhtml_legend=1 00:07:28.632 --rc geninfo_all_blocks=1 00:07:28.632 --rc geninfo_unexecuted_blocks=1 00:07:28.632 00:07:28.632 ' 00:07:28.632 23:28:16 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:29.197 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:29.763 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:29.763 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:29.763 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:29.763 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:29.763 23:28:17 nvme -- nvme/nvme.sh@79 -- # uname 00:07:29.763 23:28:17 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:07:29.763 23:28:17 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:07:29.763 23:28:17 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:07:29.763 23:28:17 nvme -- common/autotest_common.sh@1082 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:07:29.763 23:28:17 nvme -- 
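The lt 1.15 2 trace above steps through scripts/common.sh's field-wise version comparison: each version string is split on '.', '-', and ':' and the components are compared numerically, left to right, with missing fields treated as zero. A condensed sketch of that algorithm (not the library function itself, just the same idea):
# Return 0 (true) when version $1 sorts before version $2, field by field.
version_lt() {
    local -a v1 v2
    IFS='.-:' read -ra v1 <<<"$1"
    IFS='.-:' read -ra v2 <<<"$2"
    local i max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < max; i++ )); do
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # earliest differing field decides
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1   # equal versions are not less-than
}
version_lt 1.15 2 && echo '1.15 < 2'   # same verdict as the trace above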
common/autotest_common.sh@1068 -- # _randomize_va_space=2 00:07:29.763 23:28:17 nvme -- common/autotest_common.sh@1069 -- # echo 0 00:07:29.763 23:28:17 nvme -- common/autotest_common.sh@1071 -- # stubpid=63321 00:07:29.763 23:28:17 nvme -- common/autotest_common.sh@1070 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:07:29.763 Waiting for stub to ready for secondary processes... 00:07:29.763 23:28:17 nvme -- common/autotest_common.sh@1072 -- # echo Waiting for stub to ready for secondary processes... 00:07:29.763 23:28:17 nvme -- common/autotest_common.sh@1073 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:29.763 23:28:17 nvme -- common/autotest_common.sh@1075 -- # [[ -e /proc/63321 ]] 00:07:29.763 23:28:17 nvme -- common/autotest_common.sh@1076 -- # sleep 1s 00:07:29.763 [2024-09-28 23:28:17.806332] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:07:29.763 [2024-09-28 23:28:17.806450] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:07:30.697 [2024-09-28 23:28:18.574542] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:30.697 [2024-09-28 23:28:18.750245] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:30.697 [2024-09-28 23:28:18.750556] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:30.697 [2024-09-28 23:28:18.750582] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:07:30.697 [2024-09-28 23:28:18.764268] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:07:30.697 [2024-09-28 23:28:18.764307] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:30.697 [2024-09-28 23:28:18.773007] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:07:30.697 [2024-09-28 23:28:18.773093] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:07:30.697 [2024-09-28 23:28:18.774607] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:30.697 [2024-09-28 23:28:18.774738] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:07:30.697 [2024-09-28 23:28:18.774785] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:07:30.697 [2024-09-28 23:28:18.776491] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:30.697 [2024-09-28 23:28:18.776656] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:07:30.697 [2024-09-28 23:28:18.776710] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:07:30.697 [2024-09-28 23:28:18.778989] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:30.697 [2024-09-28 23:28:18.779163] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:07:30.697 [2024-09-28 23:28:18.779223] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:07:30.697 [2024-09-28 23:28:18.779262] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:07:30.697 [2024-09-28 23:28:18.779294] nvme_cuse.c: 
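The "Waiting for stub to ready for secondary processes..." exchange above comes from a plain polling loop: the harness re-tests for the stub's ready file once a second until it appears. A rough sketch of that wait; the marker path matches the trace, while the 60-second cap is an assumption added for illustration:
# Poll until the SPDK stub drops its ready marker.
stub_ready=/var/run/spdk_stub0
for (( i = 0; i < 60; i++ )); do
    [[ -e "$stub_ready" ]] && break   # stub is ready for secondary processes
    sleep 1s
done
[[ -e "$stub_ready" ]] || { echo 'stub never became ready'; exit 1; }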
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:07:30.697 23:28:18 nvme -- common/autotest_common.sh@1073 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:30.697 done. 00:07:30.697 23:28:18 nvme -- common/autotest_common.sh@1078 -- # echo done. 00:07:30.697 23:28:18 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:07:30.697 23:28:18 nvme -- common/autotest_common.sh@1101 -- # '[' 10 -le 1 ']' 00:07:30.697 23:28:18 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:30.697 23:28:18 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:30.697 ************************************ 00:07:30.697 START TEST nvme_reset 00:07:30.697 ************************************ 00:07:30.697 23:28:18 nvme.nvme_reset -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:07:30.955 Initializing NVMe Controllers 00:07:30.955 Skipping QEMU NVMe SSD at 0000:00:10.0 00:07:30.955 Skipping QEMU NVMe SSD at 0000:00:11.0 00:07:30.955 Skipping QEMU NVMe SSD at 0000:00:13.0 00:07:30.955 Skipping QEMU NVMe SSD at 0000:00:12.0 00:07:30.956 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:07:30.956 00:07:30.956 real 0m0.213s 00:07:30.956 user 0m0.062s 00:07:30.956 sys 0m0.105s 00:07:30.956 23:28:19 nvme.nvme_reset -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:30.956 ************************************ 00:07:30.956 END TEST nvme_reset 00:07:30.956 ************************************ 00:07:30.956 23:28:19 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:07:30.956 23:28:19 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:07:30.956 23:28:19 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:30.956 23:28:19 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:30.956 23:28:19 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:30.956 ************************************ 00:07:30.956 START TEST nvme_identify 00:07:30.956 ************************************ 00:07:30.956 23:28:19 nvme.nvme_identify -- common/autotest_common.sh@1125 -- # nvme_identify 00:07:30.956 23:28:19 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:07:30.956 23:28:19 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:07:30.956 23:28:19 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:07:30.956 23:28:19 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:07:30.956 23:28:19 nvme.nvme_identify -- common/autotest_common.sh@1496 -- # bdfs=() 00:07:30.956 23:28:19 nvme.nvme_identify -- common/autotest_common.sh@1496 -- # local bdfs 00:07:30.956 23:28:19 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:30.956 23:28:19 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:07:30.956 23:28:19 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:30.956 23:28:19 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:07:30.956 23:28:19 nvme.nvme_identify -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:07:30.956 23:28:19 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:07:31.220 [2024-09-28 
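get_nvme_bdfs, traced just above before the identify run, builds its device list by piping gen_nvme.sh's JSON config through jq and pulling each controller's PCI address. A minimal sketch of the same enumeration; the error message is illustrative:
# Collect NVMe PCI addresses (BDFs) the way get_nvme_bdfs does above.
rootdir=/home/vagrant/spdk_repo/spdk
mapfile -t bdfs < <("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')
(( ${#bdfs[@]} > 0 )) || { echo 'no NVMe controllers found'; exit 1; }
printf '%s\n' "${bdfs[@]}"   # here: 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0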
23:28:19.279452] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0] process 63342 terminated unexpected 00:07:31.220 ===================================================== 00:07:31.220 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:31.220 ===================================================== 00:07:31.220 Controller Capabilities/Features 00:07:31.220 ================================ 00:07:31.220 Vendor ID: 1b36 00:07:31.220 Subsystem Vendor ID: 1af4 00:07:31.220 Serial Number: 12340 00:07:31.220 Model Number: QEMU NVMe Ctrl 00:07:31.220 Firmware Version: 8.0.0 00:07:31.220 Recommended Arb Burst: 6 00:07:31.220 IEEE OUI Identifier: 00 54 52 00:07:31.220 Multi-path I/O 00:07:31.220 May have multiple subsystem ports: No 00:07:31.220 May have multiple controllers: No 00:07:31.220 Associated with SR-IOV VF: No 00:07:31.220 Max Data Transfer Size: 524288 00:07:31.220 Max Number of Namespaces: 256 00:07:31.220 Max Number of I/O Queues: 64 00:07:31.220 NVMe Specification Version (VS): 1.4 00:07:31.220 NVMe Specification Version (Identify): 1.4 00:07:31.220 Maximum Queue Entries: 2048 00:07:31.220 Contiguous Queues Required: Yes 00:07:31.220 Arbitration Mechanisms Supported 00:07:31.220 Weighted Round Robin: Not Supported 00:07:31.220 Vendor Specific: Not Supported 00:07:31.220 Reset Timeout: 7500 ms 00:07:31.220 Doorbell Stride: 4 bytes 00:07:31.220 NVM Subsystem Reset: Not Supported 00:07:31.220 Command Sets Supported 00:07:31.220 NVM Command Set: Supported 00:07:31.220 Boot Partition: Not Supported 00:07:31.221 Memory Page Size Minimum: 4096 bytes 00:07:31.221 Memory Page Size Maximum: 65536 bytes 00:07:31.221 Persistent Memory Region: Not Supported 00:07:31.221 Optional Asynchronous Events Supported 00:07:31.221 Namespace Attribute Notices: Supported 00:07:31.221 Firmware Activation Notices: Not Supported 00:07:31.221 ANA Change Notices: Not Supported 00:07:31.221 PLE Aggregate Log Change Notices: Not Supported 00:07:31.221 LBA Status Info Alert Notices: Not Supported 00:07:31.221 EGE Aggregate Log Change Notices: Not Supported 00:07:31.221 Normal NVM Subsystem Shutdown event: Not Supported 00:07:31.221 Zone Descriptor Change Notices: Not Supported 00:07:31.221 Discovery Log Change Notices: Not Supported 00:07:31.221 Controller Attributes 00:07:31.221 128-bit Host Identifier: Not Supported 00:07:31.221 Non-Operational Permissive Mode: Not Supported 00:07:31.221 NVM Sets: Not Supported 00:07:31.221 Read Recovery Levels: Not Supported 00:07:31.221 Endurance Groups: Not Supported 00:07:31.221 Predictable Latency Mode: Not Supported 00:07:31.221 Traffic Based Keep ALive: Not Supported 00:07:31.221 Namespace Granularity: Not Supported 00:07:31.221 SQ Associations: Not Supported 00:07:31.221 UUID List: Not Supported 00:07:31.221 Multi-Domain Subsystem: Not Supported 00:07:31.221 Fixed Capacity Management: Not Supported 00:07:31.221 Variable Capacity Management: Not Supported 00:07:31.221 Delete Endurance Group: Not Supported 00:07:31.221 Delete NVM Set: Not Supported 00:07:31.221 Extended LBA Formats Supported: Supported 00:07:31.221 Flexible Data Placement Supported: Not Supported 00:07:31.221 00:07:31.221 Controller Memory Buffer Support 00:07:31.221 ================================ 00:07:31.221 Supported: No 00:07:31.221 00:07:31.221 Persistent Memory Region Support 00:07:31.221 ================================ 00:07:31.221 Supported: No 00:07:31.221 00:07:31.221 Admin Command Set Attributes 00:07:31.221 ============================ 00:07:31.221 Security Send/Receive: Not 
Supported 00:07:31.221 Format NVM: Supported 00:07:31.221 Firmware Activate/Download: Not Supported 00:07:31.221 Namespace Management: Supported 00:07:31.221 Device Self-Test: Not Supported 00:07:31.221 Directives: Supported 00:07:31.221 NVMe-MI: Not Supported 00:07:31.221 Virtualization Management: Not Supported 00:07:31.221 Doorbell Buffer Config: Supported 00:07:31.221 Get LBA Status Capability: Not Supported 00:07:31.221 Command & Feature Lockdown Capability: Not Supported 00:07:31.221 Abort Command Limit: 4 00:07:31.221 Async Event Request Limit: 4 00:07:31.221 Number of Firmware Slots: N/A 00:07:31.221 Firmware Slot 1 Read-Only: N/A 00:07:31.221 Firmware Activation Without Reset: N/A 00:07:31.221 Multiple Update Detection Support: N/A 00:07:31.221 Firmware Update Granularity: No Information Provided 00:07:31.221 Per-Namespace SMART Log: Yes 00:07:31.221 Asymmetric Namespace Access Log Page: Not Supported 00:07:31.221 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:07:31.221 Command Effects Log Page: Supported 00:07:31.221 Get Log Page Extended Data: Supported 00:07:31.221 Telemetry Log Pages: Not Supported 00:07:31.221 Persistent Event Log Pages: Not Supported 00:07:31.221 Supported Log Pages Log Page: May Support 00:07:31.221 Commands Supported & Effects Log Page: Not Supported 00:07:31.221 Feature Identifiers & Effects Log Page:May Support 00:07:31.221 NVMe-MI Commands & Effects Log Page: May Support 00:07:31.221 Data Area 4 for Telemetry Log: Not Supported 00:07:31.221 Error Log Page Entries Supported: 1 00:07:31.221 Keep Alive: Not Supported 00:07:31.221 00:07:31.221 NVM Command Set Attributes 00:07:31.221 ========================== 00:07:31.221 Submission Queue Entry Size 00:07:31.221 Max: 64 00:07:31.221 Min: 64 00:07:31.221 Completion Queue Entry Size 00:07:31.221 Max: 16 00:07:31.221 Min: 16 00:07:31.221 Number of Namespaces: 256 00:07:31.221 Compare Command: Supported 00:07:31.221 Write Uncorrectable Command: Not Supported 00:07:31.221 Dataset Management Command: Supported 00:07:31.221 Write Zeroes Command: Supported 00:07:31.221 Set Features Save Field: Supported 00:07:31.221 Reservations: Not Supported 00:07:31.221 Timestamp: Supported 00:07:31.221 Copy: Supported 00:07:31.221 Volatile Write Cache: Present 00:07:31.221 Atomic Write Unit (Normal): 1 00:07:31.221 Atomic Write Unit (PFail): 1 00:07:31.221 Atomic Compare & Write Unit: 1 00:07:31.221 Fused Compare & Write: Not Supported 00:07:31.221 Scatter-Gather List 00:07:31.221 SGL Command Set: Supported 00:07:31.221 SGL Keyed: Not Supported 00:07:31.221 SGL Bit Bucket Descriptor: Not Supported 00:07:31.221 SGL Metadata Pointer: Not Supported 00:07:31.221 Oversized SGL: Not Supported 00:07:31.221 SGL Metadata Address: Not Supported 00:07:31.221 SGL Offset: Not Supported 00:07:31.221 Transport SGL Data Block: Not Supported 00:07:31.221 Replay Protected Memory Block: Not Supported 00:07:31.221 00:07:31.221 Firmware Slot Information 00:07:31.221 ========================= 00:07:31.221 Active slot: 1 00:07:31.221 Slot 1 Firmware Revision: 1.0 00:07:31.221 00:07:31.221 00:07:31.221 Commands Supported and Effects 00:07:31.221 ============================== 00:07:31.221 Admin Commands 00:07:31.221 -------------- 00:07:31.221 Delete I/O Submission Queue (00h): Supported 00:07:31.221 Create I/O Submission Queue (01h): Supported 00:07:31.221 Get Log Page (02h): Supported 00:07:31.221 Delete I/O Completion Queue (04h): Supported 00:07:31.221 Create I/O Completion Queue (05h): Supported 00:07:31.221 Identify (06h): Supported 00:07:31.221 
Abort (08h): Supported 00:07:31.221 Set Features (09h): Supported 00:07:31.221 Get Features (0Ah): Supported 00:07:31.221 Asynchronous Event Request (0Ch): Supported 00:07:31.221 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:31.221 Directive Send (19h): Supported 00:07:31.221 Directive Receive (1Ah): Supported 00:07:31.221 Virtualization Management (1Ch): Supported 00:07:31.221 Doorbell Buffer Config (7Ch): Supported 00:07:31.221 Format NVM (80h): Supported LBA-Change 00:07:31.221 I/O Commands 00:07:31.221 ------------ 00:07:31.221 Flush (00h): Supported LBA-Change 00:07:31.221 Write (01h): Supported LBA-Change 00:07:31.221 Read (02h): Supported 00:07:31.221 Compare (05h): Supported 00:07:31.221 Write Zeroes (08h): Supported LBA-Change 00:07:31.221 Dataset Management (09h): Supported LBA-Change 00:07:31.221 Unknown (0Ch): Supported 00:07:31.221 Unknown (12h): Supported 00:07:31.221 Copy (19h): Supported LBA-Change 00:07:31.221 Unknown (1Dh): Supported LBA-Change 00:07:31.221 00:07:31.221 Error Log 00:07:31.221 ========= 00:07:31.221 00:07:31.221 Arbitration 00:07:31.221 =========== 00:07:31.221 Arbitration Burst: no limit 00:07:31.221 00:07:31.221 Power Management 00:07:31.221 ================ 00:07:31.221 Number of Power States: 1 00:07:31.221 Current Power State: Power State #0 00:07:31.221 Power State #0: 00:07:31.221 Max Power: 25.00 W 00:07:31.221 Non-Operational State: Operational 00:07:31.221 Entry Latency: 16 microseconds 00:07:31.221 Exit Latency: 4 microseconds 00:07:31.221 Relative Read Throughput: 0 00:07:31.221 Relative Read Latency: 0 00:07:31.221 Relative Write Throughput: 0 00:07:31.221 Relative Write Latency: 0 00:07:31.221 Idle Power[2024-09-28 23:28:19.280887] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0] process 63342 terminated unexpected 00:07:31.221 : Not Reported 00:07:31.221 Active Power: Not Reported 00:07:31.221 Non-Operational Permissive Mode: Not Supported 00:07:31.221 00:07:31.221 Health Information 00:07:31.221 ================== 00:07:31.222 Critical Warnings: 00:07:31.222 Available Spare Space: OK 00:07:31.222 Temperature: OK 00:07:31.222 Device Reliability: OK 00:07:31.222 Read Only: No 00:07:31.222 Volatile Memory Backup: OK 00:07:31.222 Current Temperature: 323 Kelvin (50 Celsius) 00:07:31.222 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:31.222 Available Spare: 0% 00:07:31.222 Available Spare Threshold: 0% 00:07:31.222 Life Percentage Used: 0% 00:07:31.222 Data Units Read: 665 00:07:31.222 Data Units Written: 593 00:07:31.222 Host Read Commands: 38457 00:07:31.222 Host Write Commands: 38243 00:07:31.222 Controller Busy Time: 0 minutes 00:07:31.222 Power Cycles: 0 00:07:31.222 Power On Hours: 0 hours 00:07:31.222 Unsafe Shutdowns: 0 00:07:31.222 Unrecoverable Media Errors: 0 00:07:31.222 Lifetime Error Log Entries: 0 00:07:31.222 Warning Temperature Time: 0 minutes 00:07:31.222 Critical Temperature Time: 0 minutes 00:07:31.222 00:07:31.222 Number of Queues 00:07:31.222 ================ 00:07:31.222 Number of I/O Submission Queues: 64 00:07:31.222 Number of I/O Completion Queues: 64 00:07:31.222 00:07:31.222 ZNS Specific Controller Data 00:07:31.222 ============================ 00:07:31.222 Zone Append Size Limit: 0 00:07:31.222 00:07:31.222 00:07:31.222 Active Namespaces 00:07:31.222 ================= 00:07:31.222 Namespace ID:1 00:07:31.222 Error Recovery Timeout: Unlimited 00:07:31.222 Command Set Identifier: NVM (00h) 00:07:31.222 Deallocate: Supported 00:07:31.222 Deallocated/Unwritten Error: 
Supported 00:07:31.222 Deallocated Read Value: All 0x00 00:07:31.222 Deallocate in Write Zeroes: Not Supported 00:07:31.222 Deallocated Guard Field: 0xFFFF 00:07:31.222 Flush: Supported 00:07:31.222 Reservation: Not Supported 00:07:31.222 Metadata Transferred as: Separate Metadata Buffer 00:07:31.222 Namespace Sharing Capabilities: Private 00:07:31.222 Size (in LBAs): 1548666 (5GiB) 00:07:31.222 Capacity (in LBAs): 1548666 (5GiB) 00:07:31.222 Utilization (in LBAs): 1548666 (5GiB) 00:07:31.222 Thin Provisioning: Not Supported 00:07:31.222 Per-NS Atomic Units: No 00:07:31.222 Maximum Single Source Range Length: 128 00:07:31.222 Maximum Copy Length: 128 00:07:31.222 Maximum Source Range Count: 128 00:07:31.222 NGUID/EUI64 Never Reused: No 00:07:31.222 Namespace Write Protected: No 00:07:31.222 Number of LBA Formats: 8 00:07:31.222 Current LBA Format: LBA Format #07 00:07:31.222 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:31.222 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:31.222 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:31.222 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:31.222 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:31.222 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:31.222 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:31.222 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:31.222 00:07:31.222 NVM Specific Namespace Data 00:07:31.222 =========================== 00:07:31.222 Logical Block Storage Tag Mask: 0 00:07:31.222 Protection Information Capabilities: 00:07:31.222 16b Guard Protection Information Storage Tag Support: No 00:07:31.222 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:31.222 Storage Tag Check Read Support: No 00:07:31.222 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.222 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.222 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.222 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.222 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.222 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.222 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.222 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.222 ===================================================== 00:07:31.222 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:31.222 ===================================================== 00:07:31.222 Controller Capabilities/Features 00:07:31.222 ================================ 00:07:31.222 Vendor ID: 1b36 00:07:31.222 Subsystem Vendor ID: 1af4 00:07:31.222 Serial Number: 12341 00:07:31.222 Model Number: QEMU NVMe Ctrl 00:07:31.222 Firmware Version: 8.0.0 00:07:31.222 Recommended Arb Burst: 6 00:07:31.222 IEEE OUI Identifier: 00 54 52 00:07:31.222 Multi-path I/O 00:07:31.222 May have multiple subsystem ports: No 00:07:31.222 May have multiple controllers: No 00:07:31.222 Associated with SR-IOV VF: No 00:07:31.222 Max Data Transfer Size: 524288 00:07:31.222 Max Number of Namespaces: 256 00:07:31.222 Max Number of I/O Queues: 64 00:07:31.222 NVMe Specification Version (VS): 1.4 00:07:31.222 NVMe Specification Version (Identify): 1.4 
00:07:31.222 Maximum Queue Entries: 2048 00:07:31.222 Contiguous Queues Required: Yes 00:07:31.222 Arbitration Mechanisms Supported 00:07:31.222 Weighted Round Robin: Not Supported 00:07:31.222 Vendor Specific: Not Supported 00:07:31.222 Reset Timeout: 7500 ms 00:07:31.222 Doorbell Stride: 4 bytes 00:07:31.222 NVM Subsystem Reset: Not Supported 00:07:31.222 Command Sets Supported 00:07:31.222 NVM Command Set: Supported 00:07:31.222 Boot Partition: Not Supported 00:07:31.222 Memory Page Size Minimum: 4096 bytes 00:07:31.222 Memory Page Size Maximum: 65536 bytes 00:07:31.222 Persistent Memory Region: Not Supported 00:07:31.222 Optional Asynchronous Events Supported 00:07:31.222 Namespace Attribute Notices: Supported 00:07:31.222 Firmware Activation Notices: Not Supported 00:07:31.222 ANA Change Notices: Not Supported 00:07:31.222 PLE Aggregate Log Change Notices: Not Supported 00:07:31.222 LBA Status Info Alert Notices: Not Supported 00:07:31.222 EGE Aggregate Log Change Notices: Not Supported 00:07:31.222 Normal NVM Subsystem Shutdown event: Not Supported 00:07:31.222 Zone Descriptor Change Notices: Not Supported 00:07:31.222 Discovery Log Change Notices: Not Supported 00:07:31.222 Controller Attributes 00:07:31.222 128-bit Host Identifier: Not Supported 00:07:31.222 Non-Operational Permissive Mode: Not Supported 00:07:31.222 NVM Sets: Not Supported 00:07:31.222 Read Recovery Levels: Not Supported 00:07:31.222 Endurance Groups: Not Supported 00:07:31.222 Predictable Latency Mode: Not Supported 00:07:31.222 Traffic Based Keep ALive: Not Supported 00:07:31.222 Namespace Granularity: Not Supported 00:07:31.222 SQ Associations: Not Supported 00:07:31.222 UUID List: Not Supported 00:07:31.222 Multi-Domain Subsystem: Not Supported 00:07:31.222 Fixed Capacity Management: Not Supported 00:07:31.222 Variable Capacity Management: Not Supported 00:07:31.222 Delete Endurance Group: Not Supported 00:07:31.222 Delete NVM Set: Not Supported 00:07:31.222 Extended LBA Formats Supported: Supported 00:07:31.222 Flexible Data Placement Supported: Not Supported 00:07:31.222 00:07:31.222 Controller Memory Buffer Support 00:07:31.222 ================================ 00:07:31.222 Supported: No 00:07:31.222 00:07:31.222 Persistent Memory Region Support 00:07:31.222 ================================ 00:07:31.222 Supported: No 00:07:31.222 00:07:31.222 Admin Command Set Attributes 00:07:31.222 ============================ 00:07:31.222 Security Send/Receive: Not Supported 00:07:31.222 Format NVM: Supported 00:07:31.222 Firmware Activate/Download: Not Supported 00:07:31.222 Namespace Management: Supported 00:07:31.223 Device Self-Test: Not Supported 00:07:31.223 Directives: Supported 00:07:31.223 NVMe-MI: Not Supported 00:07:31.223 Virtualization Management: Not Supported 00:07:31.223 Doorbell Buffer Config: Supported 00:07:31.223 Get LBA Status Capability: Not Supported 00:07:31.223 Command & Feature Lockdown Capability: Not Supported 00:07:31.223 Abort Command Limit: 4 00:07:31.223 Async Event Request Limit: 4 00:07:31.223 Number of Firmware Slots: N/A 00:07:31.223 Firmware Slot 1 Read-Only: N/A 00:07:31.223 Firmware Activation Without Reset: N/A 00:07:31.223 Multiple Update Detection Support: N/A 00:07:31.223 Firmware Update Granularity: No Information Provided 00:07:31.223 Per-Namespace SMART Log: Yes 00:07:31.223 Asymmetric Namespace Access Log Page: Not Supported 00:07:31.223 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:07:31.223 Command Effects Log Page: Supported 00:07:31.223 Get Log Page Extended Data: 
Supported 00:07:31.223 Telemetry Log Pages: Not Supported 00:07:31.223 Persistent Event Log Pages: Not Supported 00:07:31.223 Supported Log Pages Log Page: May Support 00:07:31.223 Commands Supported & Effects Log Page: Not Supported 00:07:31.223 Feature Identifiers & Effects Log Page:May Support 00:07:31.223 NVMe-MI Commands & Effects Log Page: May Support 00:07:31.223 Data Area 4 for Telemetry Log: Not Supported 00:07:31.223 Error Log Page Entries Supported: 1 00:07:31.223 Keep Alive: Not Supported 00:07:31.223 00:07:31.223 NVM Command Set Attributes 00:07:31.223 ========================== 00:07:31.223 Submission Queue Entry Size 00:07:31.223 Max: 64 00:07:31.223 Min: 64 00:07:31.223 Completion Queue Entry Size 00:07:31.223 Max: 16 00:07:31.223 Min: 16 00:07:31.223 Number of Namespaces: 256 00:07:31.223 Compare Command: Supported 00:07:31.223 Write Uncorrectable Command: Not Supported 00:07:31.223 Dataset Management Command: Supported 00:07:31.223 Write Zeroes Command: Supported 00:07:31.223 Set Features Save Field: Supported 00:07:31.223 Reservations: Not Supported 00:07:31.223 Timestamp: Supported 00:07:31.223 Copy: Supported 00:07:31.223 Volatile Write Cache: Present 00:07:31.223 Atomic Write Unit (Normal): 1 00:07:31.223 Atomic Write Unit (PFail): 1 00:07:31.223 Atomic Compare & Write Unit: 1 00:07:31.223 Fused Compare & Write: Not Supported 00:07:31.223 Scatter-Gather List 00:07:31.223 SGL Command Set: Supported 00:07:31.223 SGL Keyed: Not Supported 00:07:31.223 SGL Bit Bucket Descriptor: Not Supported 00:07:31.223 SGL Metadata Pointer: Not Supported 00:07:31.223 Oversized SGL: Not Supported 00:07:31.223 SGL Metadata Address: Not Supported 00:07:31.223 SGL Offset: Not Supported 00:07:31.223 Transport SGL Data Block: Not Supported 00:07:31.223 Replay Protected Memory Block: Not Supported 00:07:31.223 00:07:31.223 Firmware Slot Information 00:07:31.223 ========================= 00:07:31.223 Active slot: 1 00:07:31.223 Slot 1 Firmware Revision: 1.0 00:07:31.223 00:07:31.223 00:07:31.223 Commands Supported and Effects 00:07:31.223 ============================== 00:07:31.223 Admin Commands 00:07:31.223 -------------- 00:07:31.223 Delete I/O Submission Queue (00h): Supported 00:07:31.223 Create I/O Submission Queue (01h): Supported 00:07:31.223 Get Log Page (02h): Supported 00:07:31.223 Delete I/O Completion Queue (04h): Supported 00:07:31.223 Create I/O Completion Queue (05h): Supported 00:07:31.223 Identify (06h): Supported 00:07:31.223 Abort (08h): Supported 00:07:31.223 Set Features (09h): Supported 00:07:31.223 Get Features (0Ah): Supported 00:07:31.223 Asynchronous Event Request (0Ch): Supported 00:07:31.223 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:31.223 Directive Send (19h): Supported 00:07:31.223 Directive Receive (1Ah): Supported 00:07:31.223 Virtualization Management (1Ch): Supported 00:07:31.223 Doorbell Buffer Config (7Ch): Supported 00:07:31.223 Format NVM (80h): Supported LBA-Change 00:07:31.223 I/O Commands 00:07:31.223 ------------ 00:07:31.223 Flush (00h): Supported LBA-Change 00:07:31.223 Write (01h): Supported LBA-Change 00:07:31.223 Read (02h): Supported 00:07:31.223 Compare (05h): Supported 00:07:31.223 Write Zeroes (08h): Supported LBA-Change 00:07:31.223 Dataset Management (09h): Supported LBA-Change 00:07:31.223 Unknown (0Ch): Supported 00:07:31.223 Unknown (12h): Supported 00:07:31.223 Copy (19h): Supported LBA-Change 00:07:31.223 Unknown (1Dh): Supported LBA-Change 00:07:31.223 00:07:31.223 Error Log 00:07:31.223 ========= 00:07:31.223 
00:07:31.223 Arbitration 00:07:31.223 =========== 00:07:31.223 Arbitration Burst: no limit 00:07:31.223 00:07:31.223 Power Management 00:07:31.223 ================ 00:07:31.223 Number of Power States: 1 00:07:31.223 Current Power State: Power State #0 00:07:31.223 Power State #0: 00:07:31.223 Max Power: 25.00 W 00:07:31.223 Non-Operational State: Operational 00:07:31.223 Entry Latency: 16 microseconds 00:07:31.223 Exit Latency: 4 microseconds 00:07:31.223 Relative Read Throughput: 0 00:07:31.223 Relative Read Latency: 0 00:07:31.223 Relative Write Throughput: 0 00:07:31.223 Relative Write Latency: 0 00:07:31.223 Idle Power: Not Reported 00:07:31.223 Active Power: Not Reported 00:07:31.223 Non-Operational Permissive Mode: Not Supported 00:07:31.223 00:07:31.223 Health Information 00:07:31.223 ================== 00:07:31.223 Critical Warnings: 00:07:31.223 Available Spare Space: OK 00:07:31.223 Temperature: [2024-09-28 23:28:19.281920] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0] process 63342 terminated unexpected 00:07:31.223 OK 00:07:31.223 Device Reliability: OK 00:07:31.223 Read Only: No 00:07:31.223 Volatile Memory Backup: OK 00:07:31.223 Current Temperature: 323 Kelvin (50 Celsius) 00:07:31.223 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:31.223 Available Spare: 0% 00:07:31.223 Available Spare Threshold: 0% 00:07:31.223 Life Percentage Used: 0% 00:07:31.223 Data Units Read: 1040 00:07:31.223 Data Units Written: 913 00:07:31.223 Host Read Commands: 57689 00:07:31.223 Host Write Commands: 56585 00:07:31.223 Controller Busy Time: 0 minutes 00:07:31.223 Power Cycles: 0 00:07:31.223 Power On Hours: 0 hours 00:07:31.223 Unsafe Shutdowns: 0 00:07:31.223 Unrecoverable Media Errors: 0 00:07:31.223 Lifetime Error Log Entries: 0 00:07:31.223 Warning Temperature Time: 0 minutes 00:07:31.223 Critical Temperature Time: 0 minutes 00:07:31.223 00:07:31.223 Number of Queues 00:07:31.223 ================ 00:07:31.223 Number of I/O Submission Queues: 64 00:07:31.223 Number of I/O Completion Queues: 64 00:07:31.224 00:07:31.224 ZNS Specific Controller Data 00:07:31.224 ============================ 00:07:31.224 Zone Append Size Limit: 0 00:07:31.224 00:07:31.224 00:07:31.224 Active Namespaces 00:07:31.224 ================= 00:07:31.224 Namespace ID:1 00:07:31.224 Error Recovery Timeout: Unlimited 00:07:31.224 Command Set Identifier: NVM (00h) 00:07:31.224 Deallocate: Supported 00:07:31.224 Deallocated/Unwritten Error: Supported 00:07:31.224 Deallocated Read Value: All 0x00 00:07:31.224 Deallocate in Write Zeroes: Not Supported 00:07:31.224 Deallocated Guard Field: 0xFFFF 00:07:31.224 Flush: Supported 00:07:31.224 Reservation: Not Supported 00:07:31.224 Namespace Sharing Capabilities: Private 00:07:31.224 Size (in LBAs): 1310720 (5GiB) 00:07:31.224 Capacity (in LBAs): 1310720 (5GiB) 00:07:31.224 Utilization (in LBAs): 1310720 (5GiB) 00:07:31.224 Thin Provisioning: Not Supported 00:07:31.224 Per-NS Atomic Units: No 00:07:31.224 Maximum Single Source Range Length: 128 00:07:31.224 Maximum Copy Length: 128 00:07:31.224 Maximum Source Range Count: 128 00:07:31.224 NGUID/EUI64 Never Reused: No 00:07:31.224 Namespace Write Protected: No 00:07:31.224 Number of LBA Formats: 8 00:07:31.224 Current LBA Format: LBA Format #04 00:07:31.224 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:31.224 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:31.224 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:31.224 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:31.224 
LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:31.224 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:31.224 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:31.224 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:31.224 00:07:31.224 NVM Specific Namespace Data 00:07:31.224 =========================== 00:07:31.224 Logical Block Storage Tag Mask: 0 00:07:31.224 Protection Information Capabilities: 00:07:31.224 16b Guard Protection Information Storage Tag Support: No 00:07:31.224 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:31.224 Storage Tag Check Read Support: No 00:07:31.224 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.224 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.224 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.224 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.224 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.224 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.224 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.224 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.224 ===================================================== 00:07:31.224 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:31.224 ===================================================== 00:07:31.224 Controller Capabilities/Features 00:07:31.224 ================================ 00:07:31.224 Vendor ID: 1b36 00:07:31.224 Subsystem Vendor ID: 1af4 00:07:31.224 Serial Number: 12343 00:07:31.224 Model Number: QEMU NVMe Ctrl 00:07:31.224 Firmware Version: 8.0.0 00:07:31.224 Recommended Arb Burst: 6 00:07:31.224 IEEE OUI Identifier: 00 54 52 00:07:31.224 Multi-path I/O 00:07:31.224 May have multiple subsystem ports: No 00:07:31.224 May have multiple controllers: Yes 00:07:31.224 Associated with SR-IOV VF: No 00:07:31.224 Max Data Transfer Size: 524288 00:07:31.224 Max Number of Namespaces: 256 00:07:31.224 Max Number of I/O Queues: 64 00:07:31.224 NVMe Specification Version (VS): 1.4 00:07:31.224 NVMe Specification Version (Identify): 1.4 00:07:31.224 Maximum Queue Entries: 2048 00:07:31.224 Contiguous Queues Required: Yes 00:07:31.224 Arbitration Mechanisms Supported 00:07:31.224 Weighted Round Robin: Not Supported 00:07:31.224 Vendor Specific: Not Supported 00:07:31.224 Reset Timeout: 7500 ms 00:07:31.224 Doorbell Stride: 4 bytes 00:07:31.224 NVM Subsystem Reset: Not Supported 00:07:31.224 Command Sets Supported 00:07:31.224 NVM Command Set: Supported 00:07:31.224 Boot Partition: Not Supported 00:07:31.224 Memory Page Size Minimum: 4096 bytes 00:07:31.224 Memory Page Size Maximum: 65536 bytes 00:07:31.224 Persistent Memory Region: Not Supported 00:07:31.224 Optional Asynchronous Events Supported 00:07:31.224 Namespace Attribute Notices: Supported 00:07:31.224 Firmware Activation Notices: Not Supported 00:07:31.224 ANA Change Notices: Not Supported 00:07:31.224 PLE Aggregate Log Change Notices: Not Supported 00:07:31.224 LBA Status Info Alert Notices: Not Supported 00:07:31.224 EGE Aggregate Log Change Notices: Not Supported 00:07:31.224 Normal NVM Subsystem Shutdown event: Not Supported 00:07:31.224 Zone Descriptor Change Notices: Not Supported 
00:07:31.224 Discovery Log Change Notices: Not Supported 00:07:31.224 Controller Attributes 00:07:31.224 128-bit Host Identifier: Not Supported 00:07:31.224 Non-Operational Permissive Mode: Not Supported 00:07:31.224 NVM Sets: Not Supported 00:07:31.224 Read Recovery Levels: Not Supported 00:07:31.224 Endurance Groups: Supported 00:07:31.224 Predictable Latency Mode: Not Supported 00:07:31.224 Traffic Based Keep ALive: Not Supported 00:07:31.224 Namespace Granularity: Not Supported 00:07:31.224 SQ Associations: Not Supported 00:07:31.224 UUID List: Not Supported 00:07:31.224 Multi-Domain Subsystem: Not Supported 00:07:31.224 Fixed Capacity Management: Not Supported 00:07:31.224 Variable Capacity Management: Not Supported 00:07:31.224 Delete Endurance Group: Not Supported 00:07:31.224 Delete NVM Set: Not Supported 00:07:31.224 Extended LBA Formats Supported: Supported 00:07:31.224 Flexible Data Placement Supported: Supported 00:07:31.224 00:07:31.224 Controller Memory Buffer Support 00:07:31.224 ================================ 00:07:31.224 Supported: No 00:07:31.224 00:07:31.224 Persistent Memory Region Support 00:07:31.224 ================================ 00:07:31.224 Supported: No 00:07:31.224 00:07:31.224 Admin Command Set Attributes 00:07:31.224 ============================ 00:07:31.224 Security Send/Receive: Not Supported 00:07:31.224 Format NVM: Supported 00:07:31.224 Firmware Activate/Download: Not Supported 00:07:31.224 Namespace Management: Supported 00:07:31.224 Device Self-Test: Not Supported 00:07:31.224 Directives: Supported 00:07:31.225 NVMe-MI: Not Supported 00:07:31.225 Virtualization Management: Not Supported 00:07:31.225 Doorbell Buffer Config: Supported 00:07:31.225 Get LBA Status Capability: Not Supported 00:07:31.225 Command & Feature Lockdown Capability: Not Supported 00:07:31.225 Abort Command Limit: 4 00:07:31.225 Async Event Request Limit: 4 00:07:31.225 Number of Firmware Slots: N/A 00:07:31.225 Firmware Slot 1 Read-Only: N/A 00:07:31.225 Firmware Activation Without Reset: N/A 00:07:31.225 Multiple Update Detection Support: N/A 00:07:31.225 Firmware Update Granularity: No Information Provided 00:07:31.225 Per-Namespace SMART Log: Yes 00:07:31.225 Asymmetric Namespace Access Log Page: Not Supported 00:07:31.225 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:07:31.225 Command Effects Log Page: Supported 00:07:31.225 Get Log Page Extended Data: Supported 00:07:31.225 Telemetry Log Pages: Not Supported 00:07:31.225 Persistent Event Log Pages: Not Supported 00:07:31.225 Supported Log Pages Log Page: May Support 00:07:31.225 Commands Supported & Effects Log Page: Not Supported 00:07:31.225 Feature Identifiers & Effects Log Page:May Support 00:07:31.225 NVMe-MI Commands & Effects Log Page: May Support 00:07:31.225 Data Area 4 for Telemetry Log: Not Supported 00:07:31.225 Error Log Page Entries Supported: 1 00:07:31.225 Keep Alive: Not Supported 00:07:31.225 00:07:31.225 NVM Command Set Attributes 00:07:31.225 ========================== 00:07:31.225 Submission Queue Entry Size 00:07:31.225 Max: 64 00:07:31.225 Min: 64 00:07:31.225 Completion Queue Entry Size 00:07:31.225 Max: 16 00:07:31.225 Min: 16 00:07:31.225 Number of Namespaces: 256 00:07:31.225 Compare Command: Supported 00:07:31.225 Write Uncorrectable Command: Not Supported 00:07:31.225 Dataset Management Command: Supported 00:07:31.225 Write Zeroes Command: Supported 00:07:31.225 Set Features Save Field: Supported 00:07:31.225 Reservations: Not Supported 00:07:31.225 Timestamp: Supported 00:07:31.225 Copy: 
Supported 00:07:31.225 Volatile Write Cache: Present 00:07:31.225 Atomic Write Unit (Normal): 1 00:07:31.225 Atomic Write Unit (PFail): 1 00:07:31.225 Atomic Compare & Write Unit: 1 00:07:31.225 Fused Compare & Write: Not Supported 00:07:31.225 Scatter-Gather List 00:07:31.225 SGL Command Set: Supported 00:07:31.225 SGL Keyed: Not Supported 00:07:31.225 SGL Bit Bucket Descriptor: Not Supported 00:07:31.225 SGL Metadata Pointer: Not Supported 00:07:31.225 Oversized SGL: Not Supported 00:07:31.225 SGL Metadata Address: Not Supported 00:07:31.225 SGL Offset: Not Supported 00:07:31.225 Transport SGL Data Block: Not Supported 00:07:31.225 Replay Protected Memory Block: Not Supported 00:07:31.225 00:07:31.225 Firmware Slot Information 00:07:31.225 ========================= 00:07:31.225 Active slot: 1 00:07:31.225 Slot 1 Firmware Revision: 1.0 00:07:31.225 00:07:31.225 00:07:31.225 Commands Supported and Effects 00:07:31.225 ============================== 00:07:31.225 Admin Commands 00:07:31.225 -------------- 00:07:31.225 Delete I/O Submission Queue (00h): Supported 00:07:31.225 Create I/O Submission Queue (01h): Supported 00:07:31.225 Get Log Page (02h): Supported 00:07:31.225 Delete I/O Completion Queue (04h): Supported 00:07:31.225 Create I/O Completion Queue (05h): Supported 00:07:31.225 Identify (06h): Supported 00:07:31.225 Abort (08h): Supported 00:07:31.225 Set Features (09h): Supported 00:07:31.225 Get Features (0Ah): Supported 00:07:31.225 Asynchronous Event Request (0Ch): Supported 00:07:31.225 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:31.225 Directive Send (19h): Supported 00:07:31.225 Directive Receive (1Ah): Supported 00:07:31.225 Virtualization Management (1Ch): Supported 00:07:31.225 Doorbell Buffer Config (7Ch): Supported 00:07:31.225 Format NVM (80h): Supported LBA-Change 00:07:31.225 I/O Commands 00:07:31.225 ------------ 00:07:31.225 Flush (00h): Supported LBA-Change 00:07:31.225 Write (01h): Supported LBA-Change 00:07:31.225 Read (02h): Supported 00:07:31.225 Compare (05h): Supported 00:07:31.225 Write Zeroes (08h): Supported LBA-Change 00:07:31.225 Dataset Management (09h): Supported LBA-Change 00:07:31.225 Unknown (0Ch): Supported 00:07:31.225 Unknown (12h): Supported 00:07:31.225 Copy (19h): Supported LBA-Change 00:07:31.225 Unknown (1Dh): Supported LBA-Change 00:07:31.225 00:07:31.225 Error Log 00:07:31.225 ========= 00:07:31.225 00:07:31.225 Arbitration 00:07:31.225 =========== 00:07:31.225 Arbitration Burst: no limit 00:07:31.225 00:07:31.225 Power Management 00:07:31.225 ================ 00:07:31.225 Number of Power States: 1 00:07:31.225 Current Power State: Power State #0 00:07:31.225 Power State #0: 00:07:31.225 Max Power: 25.00 W 00:07:31.225 Non-Operational State: Operational 00:07:31.225 Entry Latency: 16 microseconds 00:07:31.225 Exit Latency: 4 microseconds 00:07:31.225 Relative Read Throughput: 0 00:07:31.225 Relative Read Latency: 0 00:07:31.225 Relative Write Throughput: 0 00:07:31.225 Relative Write Latency: 0 00:07:31.225 Idle Power: Not Reported 00:07:31.225 Active Power: Not Reported 00:07:31.225 Non-Operational Permissive Mode: Not Supported 00:07:31.225 00:07:31.225 Health Information 00:07:31.225 ================== 00:07:31.225 Critical Warnings: 00:07:31.225 Available Spare Space: OK 00:07:31.225 Temperature: OK 00:07:31.225 Device Reliability: OK 00:07:31.225 Read Only: No 00:07:31.225 Volatile Memory Backup: OK 00:07:31.225 Current Temperature: 323 Kelvin (50 Celsius) 00:07:31.225 Temperature Threshold: 343 Kelvin (70 
Celsius) 00:07:31.225 Available Spare: 0% 00:07:31.225 Available Spare Threshold: 0% 00:07:31.225 Life Percentage Used: 0% 00:07:31.225 Data Units Read: 1125 00:07:31.225 Data Units Written: 1054 00:07:31.225 Host Read Commands: 42588 00:07:31.225 Host Write Commands: 42011 00:07:31.225 Controller Busy Time: 0 minutes 00:07:31.225 Power Cycles: 0 00:07:31.225 Power On Hours: 0 hours 00:07:31.225 Unsafe Shutdowns: 0 00:07:31.225 Unrecoverable Media Errors: 0 00:07:31.225 Lifetime Error Log Entries: 0 00:07:31.225 Warning Temperature Time: 0 minutes 00:07:31.225 Critical Temperature Time: 0 minutes 00:07:31.225 00:07:31.226 Number of Queues 00:07:31.226 ================ 00:07:31.226 Number of I/O Submission Queues: 64 00:07:31.226 Number of I/O Completion Queues: 64 00:07:31.226 00:07:31.226 ZNS Specific Controller Data 00:07:31.226 ============================ 00:07:31.226 Zone Append Size Limit: 0 00:07:31.226 00:07:31.226 00:07:31.226 Active Namespaces 00:07:31.226 ================= 00:07:31.226 Namespace ID:1 00:07:31.226 Error Recovery Timeout: Unlimited 00:07:31.226 Command Set Identifier: NVM (00h) 00:07:31.226 Deallocate: Supported 00:07:31.226 Deallocated/Unwritten Error: Supported 00:07:31.226 Deallocated Read Value: All 0x00 00:07:31.226 Deallocate in Write Zeroes: Not Supported 00:07:31.226 Deallocated Guard Field: 0xFFFF 00:07:31.226 Flush: Supported 00:07:31.226 Reservation: Not Supported 00:07:31.226 Namespace Sharing Capabilities: Multiple Controllers 00:07:31.226 Size (in LBAs): 262144 (1GiB) 00:07:31.226 Capacity (in LBAs): 262144 (1GiB) 00:07:31.226 Utilization (in LBAs): 262144 (1GiB) 00:07:31.226 Thin Provisioning: Not Supported 00:07:31.226 Per-NS Atomic Units: No 00:07:31.226 Maximum Single Source Range Length: 128 00:07:31.226 Maximum Copy Length: 128 00:07:31.226 Maximum Source Range Count: 128 00:07:31.226 NGUID/EUI64 Never Reused: No 00:07:31.226 Namespace Write Protected: No 00:07:31.226 Endurance group ID: 1 00:07:31.226 Number of LBA Formats: 8 00:07:31.226 Current LBA Format: LBA Format #04 00:07:31.226 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:31.226 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:31.226 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:31.226 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:31.226 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:31.226 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:31.226 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:31.226 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:31.226 00:07:31.226 Get Feature FDP: 00:07:31.226 ================ 00:07:31.226 Enabled: Yes 00:07:31.226 FDP configuration index: 0 00:07:31.226 00:07:31.226 FDP configurations log page 00:07:31.226 =========================== 00:07:31.226 Number of FDP configurations: 1 00:07:31.226 Version: 0 00:07:31.226 Size: 112 00:07:31.226 FDP Configuration Descriptor: 0 00:07:31.226 Descriptor Size: 96 00:07:31.226 Reclaim Group Identifier format: 2 00:07:31.226 FDP Volatile Write Cache: Not Present 00:07:31.226 FDP Configuration: Valid 00:07:31.226 Vendor Specific Size: 0 00:07:31.226 Number of Reclaim Groups: 2 00:07:31.226 Number of Reclaim Unit Handles: 8 00:07:31.226 Max Placement Identifiers: 128 00:07:31.226 Number of Namespaces Supported: 256 00:07:31.226 Reclaim Unit Nominal Size: 6000000 bytes 00:07:31.226 Estimated Reclaim Unit Time Limit: Not Reported 00:07:31.226 RUH Desc #000: RUH Type: Initially Isolated 00:07:31.226 RUH Desc #001: RUH Type: Initially Isolated 00:07:31.226
RUH Desc #002: RUH Type: Initially Isolated 00:07:31.226 RUH Desc #003: RUH Type: Initially Isolated 00:07:31.226 RUH Desc #004: RUH Type: Initially Isolated 00:07:31.226 RUH Desc #005: RUH Type: Initially Isolated 00:07:31.226 RUH Desc #006: RUH Type: Initially Isolated 00:07:31.226 RUH Desc #007: RUH Type: Initially Isolated 00:07:31.226 00:07:31.226 FDP reclaim unit handle usage log page 00:07:31.226 ====================================== 00:07:31.226 Number of Reclaim Unit Handles: 8 00:07:31.226 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:07:31.226 RUH Usage Desc #001: RUH Attributes: Unused 00:07:31.226 RUH Usage Desc #002: RUH Attributes: Unused 00:07:31.226 RUH Usage Desc #003: RUH Attributes: Unused 00:07:31.226 RUH Usage Desc #004: RUH Attributes: Unused 00:07:31.226 RUH Usage Desc #005: RUH Attributes: Unused 00:07:31.226 RUH Usage Desc #006: RUH Attributes: Unused 00:07:31.226 RUH Usage Desc #007: RUH Attributes: Unused 00:07:31.226 00:07:31.226 FDP statistics log page 00:07:31.226 ======================= 00:07:31.226 Host bytes with metadata written: 635215872 00:07:31.226 [2024-09-28 23:28:19.285008] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0] process 63342 terminated unexpected 00:07:31.226 Media bytes with metadata written: 635277312 00:07:31.226 Media bytes erased: 0 00:07:31.226 00:07:31.226 FDP events log page 00:07:31.226 =================== 00:07:31.226 Number of FDP events: 0 00:07:31.226 00:07:31.226 NVM Specific Namespace Data 00:07:31.226 =========================== 00:07:31.226 Logical Block Storage Tag Mask: 0 00:07:31.226 Protection Information Capabilities: 00:07:31.226 16b Guard Protection Information Storage Tag Support: No 00:07:31.226 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:31.226 Storage Tag Check Read Support: No 00:07:31.226 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.226 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.226 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.226 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.226 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.226 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.226 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.226 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.226 ===================================================== 00:07:31.226 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:31.226 ===================================================== 00:07:31.226 Controller Capabilities/Features 00:07:31.226 ================================ 00:07:31.226 Vendor ID: 1b36 00:07:31.226 Subsystem Vendor ID: 1af4 00:07:31.226 Serial Number: 12342 00:07:31.226 Model Number: QEMU NVMe Ctrl 00:07:31.226 Firmware Version: 8.0.0 00:07:31.226 Recommended Arb Burst: 6 00:07:31.226 IEEE OUI Identifier: 00 54 52 00:07:31.226 Multi-path I/O 00:07:31.226 May have multiple subsystem ports: No 00:07:31.226 May have multiple controllers: No 00:07:31.226 Associated with SR-IOV VF: No 00:07:31.226 Max Data Transfer Size: 524288 00:07:31.226 Max Number of Namespaces: 256 00:07:31.226 Max Number of I/O Queues: 64
00:07:31.226 NVMe Specification Version (VS): 1.4 00:07:31.226 NVMe Specification Version (Identify): 1.4 00:07:31.226 Maximum Queue Entries: 2048 00:07:31.226 Contiguous Queues Required: Yes 00:07:31.226 Arbitration Mechanisms Supported 00:07:31.226 Weighted Round Robin: Not Supported 00:07:31.226 Vendor Specific: Not Supported 00:07:31.226 Reset Timeout: 7500 ms 00:07:31.226 Doorbell Stride: 4 bytes 00:07:31.227 NVM Subsystem Reset: Not Supported 00:07:31.227 Command Sets Supported 00:07:31.227 NVM Command Set: Supported 00:07:31.227 Boot Partition: Not Supported 00:07:31.227 Memory Page Size Minimum: 4096 bytes 00:07:31.227 Memory Page Size Maximum: 65536 bytes 00:07:31.227 Persistent Memory Region: Not Supported 00:07:31.227 Optional Asynchronous Events Supported 00:07:31.227 Namespace Attribute Notices: Supported 00:07:31.227 Firmware Activation Notices: Not Supported 00:07:31.227 ANA Change Notices: Not Supported 00:07:31.227 PLE Aggregate Log Change Notices: Not Supported 00:07:31.227 LBA Status Info Alert Notices: Not Supported 00:07:31.227 EGE Aggregate Log Change Notices: Not Supported 00:07:31.227 Normal NVM Subsystem Shutdown event: Not Supported 00:07:31.227 Zone Descriptor Change Notices: Not Supported 00:07:31.227 Discovery Log Change Notices: Not Supported 00:07:31.227 Controller Attributes 00:07:31.227 128-bit Host Identifier: Not Supported 00:07:31.227 Non-Operational Permissive Mode: Not Supported 00:07:31.227 NVM Sets: Not Supported 00:07:31.227 Read Recovery Levels: Not Supported 00:07:31.227 Endurance Groups: Not Supported 00:07:31.227 Predictable Latency Mode: Not Supported 00:07:31.227 Traffic Based Keep Alive: Not Supported 00:07:31.227 Namespace Granularity: Not Supported 00:07:31.227 SQ Associations: Not Supported 00:07:31.227 UUID List: Not Supported 00:07:31.227 Multi-Domain Subsystem: Not Supported 00:07:31.227 Fixed Capacity Management: Not Supported 00:07:31.227 Variable Capacity Management: Not Supported 00:07:31.227 Delete Endurance Group: Not Supported 00:07:31.227 Delete NVM Set: Not Supported 00:07:31.227 Extended LBA Formats Supported: Supported 00:07:31.227 Flexible Data Placement Supported: Not Supported 00:07:31.227 00:07:31.227 Controller Memory Buffer Support 00:07:31.227 ================================ 00:07:31.227 Supported: No 00:07:31.227 00:07:31.227 Persistent Memory Region Support 00:07:31.227 ================================ 00:07:31.227 Supported: No 00:07:31.227 00:07:31.227 Admin Command Set Attributes 00:07:31.227 ============================ 00:07:31.227 Security Send/Receive: Not Supported 00:07:31.227 Format NVM: Supported 00:07:31.227 Firmware Activate/Download: Not Supported 00:07:31.227 Namespace Management: Supported 00:07:31.227 Device Self-Test: Not Supported 00:07:31.227 Directives: Supported 00:07:31.227 NVMe-MI: Not Supported 00:07:31.227 Virtualization Management: Not Supported 00:07:31.227 Doorbell Buffer Config: Supported 00:07:31.227 Get LBA Status Capability: Not Supported 00:07:31.227 Command & Feature Lockdown Capability: Not Supported 00:07:31.227 Abort Command Limit: 4 00:07:31.227 Async Event Request Limit: 4 00:07:31.227 Number of Firmware Slots: N/A 00:07:31.227 Firmware Slot 1 Read-Only: N/A 00:07:31.227 Firmware Activation Without Reset: N/A 00:07:31.227 Multiple Update Detection Support: N/A 00:07:31.227 Firmware Update Granularity: No Information Provided 00:07:31.227 Per-Namespace SMART Log: Yes 00:07:31.227 Asymmetric Namespace Access Log Page: Not Supported 00:07:31.227 Subsystem NQN:
nqn.2019-08.org.qemu:12342 00:07:31.227 Command Effects Log Page: Supported 00:07:31.227 Get Log Page Extended Data: Supported 00:07:31.227 Telemetry Log Pages: Not Supported 00:07:31.227 Persistent Event Log Pages: Not Supported 00:07:31.227 Supported Log Pages Log Page: May Support 00:07:31.227 Commands Supported & Effects Log Page: Not Supported 00:07:31.227 Feature Identifiers & Effects Log Page: May Support 00:07:31.227 NVMe-MI Commands & Effects Log Page: May Support 00:07:31.227 Data Area 4 for Telemetry Log: Not Supported 00:07:31.227 Error Log Page Entries Supported: 1 00:07:31.227 Keep Alive: Not Supported 00:07:31.227 00:07:31.227 NVM Command Set Attributes 00:07:31.227 ========================== 00:07:31.227 Submission Queue Entry Size 00:07:31.227 Max: 64 00:07:31.227 Min: 64 00:07:31.227 Completion Queue Entry Size 00:07:31.227 Max: 16 00:07:31.227 Min: 16 00:07:31.227 Number of Namespaces: 256 00:07:31.227 Compare Command: Supported 00:07:31.227 Write Uncorrectable Command: Not Supported 00:07:31.227 Dataset Management Command: Supported 00:07:31.227 Write Zeroes Command: Supported 00:07:31.227 Set Features Save Field: Supported 00:07:31.227 Reservations: Not Supported 00:07:31.227 Timestamp: Supported 00:07:31.227 Copy: Supported 00:07:31.227 Volatile Write Cache: Present 00:07:31.227 Atomic Write Unit (Normal): 1 00:07:31.227 Atomic Write Unit (PFail): 1 00:07:31.227 Atomic Compare & Write Unit: 1 00:07:31.227 Fused Compare & Write: Not Supported 00:07:31.227 Scatter-Gather List 00:07:31.227 SGL Command Set: Supported 00:07:31.227 SGL Keyed: Not Supported 00:07:31.227 SGL Bit Bucket Descriptor: Not Supported 00:07:31.227 SGL Metadata Pointer: Not Supported 00:07:31.227 Oversized SGL: Not Supported 00:07:31.227 SGL Metadata Address: Not Supported 00:07:31.227 SGL Offset: Not Supported 00:07:31.227 Transport SGL Data Block: Not Supported 00:07:31.227 Replay Protected Memory Block: Not Supported 00:07:31.227 00:07:31.227 Firmware Slot Information 00:07:31.227 ========================= 00:07:31.227 Active slot: 1 00:07:31.227 Slot 1 Firmware Revision: 1.0 00:07:31.227 00:07:31.227 00:07:31.227 Commands Supported and Effects 00:07:31.227 ============================== 00:07:31.227 Admin Commands 00:07:31.227 -------------- 00:07:31.227 Delete I/O Submission Queue (00h): Supported 00:07:31.227 Create I/O Submission Queue (01h): Supported 00:07:31.227 Get Log Page (02h): Supported 00:07:31.227 Delete I/O Completion Queue (04h): Supported 00:07:31.227 Create I/O Completion Queue (05h): Supported 00:07:31.227 Identify (06h): Supported 00:07:31.227 Abort (08h): Supported 00:07:31.227 Set Features (09h): Supported 00:07:31.227 Get Features (0Ah): Supported 00:07:31.227 Asynchronous Event Request (0Ch): Supported 00:07:31.227 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:31.227 Directive Send (19h): Supported 00:07:31.227 Directive Receive (1Ah): Supported 00:07:31.227 Virtualization Management (1Ch): Supported 00:07:31.227 Doorbell Buffer Config (7Ch): Supported 00:07:31.227 Format NVM (80h): Supported LBA-Change 00:07:31.227 I/O Commands 00:07:31.227 ------------ 00:07:31.227 Flush (00h): Supported LBA-Change 00:07:31.227 Write (01h): Supported LBA-Change 00:07:31.227 Read (02h): Supported 00:07:31.227 Compare (05h): Supported 00:07:31.227 Write Zeroes (08h): Supported LBA-Change 00:07:31.227 Dataset Management (09h): Supported LBA-Change 00:07:31.227 Unknown (0Ch): Supported 00:07:31.228 Unknown (12h): Supported 00:07:31.228 Copy (19h): Supported LBA-Change
00:07:31.228 Unknown (1Dh): Supported LBA-Change 00:07:31.228 00:07:31.228 Error Log 00:07:31.228 ========= 00:07:31.228 00:07:31.228 Arbitration 00:07:31.228 =========== 00:07:31.228 Arbitration Burst: no limit 00:07:31.228 00:07:31.228 Power Management 00:07:31.228 ================ 00:07:31.228 Number of Power States: 1 00:07:31.228 Current Power State: Power State #0 00:07:31.228 Power State #0: 00:07:31.228 Max Power: 25.00 W 00:07:31.228 Non-Operational State: Operational 00:07:31.228 Entry Latency: 16 microseconds 00:07:31.228 Exit Latency: 4 microseconds 00:07:31.228 Relative Read Throughput: 0 00:07:31.228 Relative Read Latency: 0 00:07:31.228 Relative Write Throughput: 0 00:07:31.228 Relative Write Latency: 0 00:07:31.228 Idle Power: Not Reported 00:07:31.228 Active Power: Not Reported 00:07:31.228 Non-Operational Permissive Mode: Not Supported 00:07:31.228 00:07:31.228 Health Information 00:07:31.228 ================== 00:07:31.228 Critical Warnings: 00:07:31.228 Available Spare Space: OK 00:07:31.228 Temperature: OK 00:07:31.228 Device Reliability: OK 00:07:31.228 Read Only: No 00:07:31.228 Volatile Memory Backup: OK 00:07:31.228 Current Temperature: 323 Kelvin (50 Celsius) 00:07:31.228 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:31.228 Available Spare: 0% 00:07:31.228 Available Spare Threshold: 0% 00:07:31.228 Life Percentage Used: 0% 00:07:31.228 Data Units Read: 2355 00:07:31.228 Data Units Written: 2142 00:07:31.228 Host Read Commands: 119375 00:07:31.228 Host Write Commands: 117645 00:07:31.228 Controller Busy Time: 0 minutes 00:07:31.228 Power Cycles: 0 00:07:31.228 Power On Hours: 0 hours 00:07:31.228 Unsafe Shutdowns: 0 00:07:31.228 Unrecoverable Media Errors: 0 00:07:31.228 Lifetime Error Log Entries: 0 00:07:31.228 Warning Temperature Time: 0 minutes 00:07:31.228 Critical Temperature Time: 0 minutes 00:07:31.228 00:07:31.228 Number of Queues 00:07:31.228 ================ 00:07:31.228 Number of I/O Submission Queues: 64 00:07:31.228 Number of I/O Completion Queues: 64 00:07:31.228 00:07:31.228 ZNS Specific Controller Data 00:07:31.228 ============================ 00:07:31.228 Zone Append Size Limit: 0 00:07:31.228 00:07:31.228 00:07:31.228 Active Namespaces 00:07:31.228 ================= 00:07:31.228 Namespace ID:1 00:07:31.228 Error Recovery Timeout: Unlimited 00:07:31.228 Command Set Identifier: NVM (00h) 00:07:31.228 Deallocate: Supported 00:07:31.228 Deallocated/Unwritten Error: Supported 00:07:31.228 Deallocated Read Value: All 0x00 00:07:31.228 Deallocate in Write Zeroes: Not Supported 00:07:31.228 Deallocated Guard Field: 0xFFFF 00:07:31.228 Flush: Supported 00:07:31.228 Reservation: Not Supported 00:07:31.228 Namespace Sharing Capabilities: Private 00:07:31.228 Size (in LBAs): 1048576 (4GiB) 00:07:31.228 Capacity (in LBAs): 1048576 (4GiB) 00:07:31.228 Utilization (in LBAs): 1048576 (4GiB) 00:07:31.228 Thin Provisioning: Not Supported 00:07:31.228 Per-NS Atomic Units: No 00:07:31.228 Maximum Single Source Range Length: 128 00:07:31.228 Maximum Copy Length: 128 00:07:31.228 Maximum Source Range Count: 128 00:07:31.228 NGUID/EUI64 Never Reused: No 00:07:31.228 Namespace Write Protected: No 00:07:31.228 Number of LBA Formats: 8 00:07:31.228 Current LBA Format: LBA Format #04 00:07:31.228 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:31.228 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:31.228 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:31.228 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:31.228 LBA Format #04: Data Size: 
4096 Metadata Size: 0 00:07:31.228 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:31.228 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:31.228 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:31.228 00:07:31.228 NVM Specific Namespace Data 00:07:31.228 =========================== 00:07:31.228 Logical Block Storage Tag Mask: 0 00:07:31.228 Protection Information Capabilities: 00:07:31.228 16b Guard Protection Information Storage Tag Support: No 00:07:31.228 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:31.228 Storage Tag Check Read Support: No 00:07:31.228 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.228 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.228 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.228 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.228 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.228 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.228 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.228 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.228 Namespace ID:2 00:07:31.228 Error Recovery Timeout: Unlimited 00:07:31.228 Command Set Identifier: NVM (00h) 00:07:31.228 Deallocate: Supported 00:07:31.228 Deallocated/Unwritten Error: Supported 00:07:31.228 Deallocated Read Value: All 0x00 00:07:31.228 Deallocate in Write Zeroes: Not Supported 00:07:31.228 Deallocated Guard Field: 0xFFFF 00:07:31.228 Flush: Supported 00:07:31.228 Reservation: Not Supported 00:07:31.228 Namespace Sharing Capabilities: Private 00:07:31.228 Size (in LBAs): 1048576 (4GiB) 00:07:31.228 Capacity (in LBAs): 1048576 (4GiB) 00:07:31.228 Utilization (in LBAs): 1048576 (4GiB) 00:07:31.228 Thin Provisioning: Not Supported 00:07:31.228 Per-NS Atomic Units: No 00:07:31.228 Maximum Single Source Range Length: 128 00:07:31.228 Maximum Copy Length: 128 00:07:31.228 Maximum Source Range Count: 128 00:07:31.228 NGUID/EUI64 Never Reused: No 00:07:31.228 Namespace Write Protected: No 00:07:31.228 Number of LBA Formats: 8 00:07:31.228 Current LBA Format: LBA Format #04 00:07:31.228 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:31.228 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:31.228 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:31.228 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:31.228 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:31.228 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:31.228 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:31.228 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:31.228 00:07:31.228 NVM Specific Namespace Data 00:07:31.228 =========================== 00:07:31.228 Logical Block Storage Tag Mask: 0 00:07:31.228 Protection Information Capabilities: 00:07:31.228 16b Guard Protection Information Storage Tag Support: No 00:07:31.228 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:31.229 Storage Tag Check Read Support: No 00:07:31.229 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.229 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard 
PI 00:07:31.229 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.229 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.229 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.229 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.229 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.229 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.229 Namespace ID:3 00:07:31.229 Error Recovery Timeout: Unlimited 00:07:31.229 Command Set Identifier: NVM (00h) 00:07:31.229 Deallocate: Supported 00:07:31.229 Deallocated/Unwritten Error: Supported 00:07:31.229 Deallocated Read Value: All 0x00 00:07:31.229 Deallocate in Write Zeroes: Not Supported 00:07:31.229 Deallocated Guard Field: 0xFFFF 00:07:31.229 Flush: Supported 00:07:31.229 Reservation: Not Supported 00:07:31.229 Namespace Sharing Capabilities: Private 00:07:31.229 Size (in LBAs): 1048576 (4GiB) 00:07:31.229 Capacity (in LBAs): 1048576 (4GiB) 00:07:31.229 Utilization (in LBAs): 1048576 (4GiB) 00:07:31.229 Thin Provisioning: Not Supported 00:07:31.229 Per-NS Atomic Units: No 00:07:31.229 Maximum Single Source Range Length: 128 00:07:31.229 Maximum Copy Length: 128 00:07:31.229 Maximum Source Range Count: 128 00:07:31.229 NGUID/EUI64 Never Reused: No 00:07:31.229 Namespace Write Protected: No 00:07:31.229 Number of LBA Formats: 8 00:07:31.229 Current LBA Format: LBA Format #04 00:07:31.229 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:31.229 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:31.229 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:31.229 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:31.229 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:31.229 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:31.229 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:31.229 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:31.229 00:07:31.229 NVM Specific Namespace Data 00:07:31.229 =========================== 00:07:31.229 Logical Block Storage Tag Mask: 0 00:07:31.229 Protection Information Capabilities: 00:07:31.229 16b Guard Protection Information Storage Tag Support: No 00:07:31.229 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:31.229 Storage Tag Check Read Support: No 00:07:31.229 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.229 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.229 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.229 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.229 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.229 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.229 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.229 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.229 23:28:19 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:31.229 23:28:19 nvme.nvme_identify -- nvme/nvme.sh@16 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:07:31.491 ===================================================== 00:07:31.491 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:31.491 ===================================================== 00:07:31.491 Controller Capabilities/Features 00:07:31.491 ================================ 00:07:31.491 Vendor ID: 1b36 00:07:31.491 Subsystem Vendor ID: 1af4 00:07:31.491 Serial Number: 12340 00:07:31.491 Model Number: QEMU NVMe Ctrl 00:07:31.491 Firmware Version: 8.0.0 00:07:31.491 Recommended Arb Burst: 6 00:07:31.491 IEEE OUI Identifier: 00 54 52 00:07:31.491 Multi-path I/O 00:07:31.491 May have multiple subsystem ports: No 00:07:31.491 May have multiple controllers: No 00:07:31.491 Associated with SR-IOV VF: No 00:07:31.491 Max Data Transfer Size: 524288 00:07:31.491 Max Number of Namespaces: 256 00:07:31.491 Max Number of I/O Queues: 64 00:07:31.491 NVMe Specification Version (VS): 1.4 00:07:31.491 NVMe Specification Version (Identify): 1.4 00:07:31.491 Maximum Queue Entries: 2048 00:07:31.491 Contiguous Queues Required: Yes 00:07:31.491 Arbitration Mechanisms Supported 00:07:31.491 Weighted Round Robin: Not Supported 00:07:31.491 Vendor Specific: Not Supported 00:07:31.491 Reset Timeout: 7500 ms 00:07:31.491 Doorbell Stride: 4 bytes 00:07:31.491 NVM Subsystem Reset: Not Supported 00:07:31.491 Command Sets Supported 00:07:31.491 NVM Command Set: Supported 00:07:31.491 Boot Partition: Not Supported 00:07:31.491 Memory Page Size Minimum: 4096 bytes 00:07:31.491 Memory Page Size Maximum: 65536 bytes 00:07:31.491 Persistent Memory Region: Not Supported 00:07:31.491 Optional Asynchronous Events Supported 00:07:31.491 Namespace Attribute Notices: Supported 00:07:31.491 Firmware Activation Notices: Not Supported 00:07:31.491 ANA Change Notices: Not Supported 00:07:31.491 PLE Aggregate Log Change Notices: Not Supported 00:07:31.491 LBA Status Info Alert Notices: Not Supported 00:07:31.491 EGE Aggregate Log Change Notices: Not Supported 00:07:31.491 Normal NVM Subsystem Shutdown event: Not Supported 00:07:31.491 Zone Descriptor Change Notices: Not Supported 00:07:31.491 Discovery Log Change Notices: Not Supported 00:07:31.491 Controller Attributes 00:07:31.491 128-bit Host Identifier: Not Supported 00:07:31.491 Non-Operational Permissive Mode: Not Supported 00:07:31.491 NVM Sets: Not Supported 00:07:31.491 Read Recovery Levels: Not Supported 00:07:31.491 Endurance Groups: Not Supported 00:07:31.491 Predictable Latency Mode: Not Supported 00:07:31.491 Traffic Based Keep Alive: Not Supported 00:07:31.491 Namespace Granularity: Not Supported 00:07:31.491 SQ Associations: Not Supported 00:07:31.491 UUID List: Not Supported 00:07:31.491 Multi-Domain Subsystem: Not Supported 00:07:31.491 Fixed Capacity Management: Not Supported 00:07:31.491 Variable Capacity Management: Not Supported 00:07:31.491 Delete Endurance Group: Not Supported 00:07:31.491 Delete NVM Set: Not Supported 00:07:31.491 Extended LBA Formats Supported: Supported 00:07:31.491 Flexible Data Placement Supported: Not Supported 00:07:31.491 00:07:31.491 Controller Memory Buffer Support 00:07:31.491 ================================ 00:07:31.491 Supported: No 00:07:31.491 00:07:31.491 Persistent Memory Region Support 00:07:31.491 ================================ 00:07:31.491 Supported: No 00:07:31.491 00:07:31.491 Admin Command Set Attributes 00:07:31.491 ============================ 00:07:31.491 Security Send/Receive: Not Supported
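Every controller in this run reports Max Data Transfer Size: 524288 alongside a 4096-byte minimum memory page size. An illustrative shell check (not part of the test) that 524288 bytes corresponds to 128 blocks at the 4096-byte data size of LBA Format #04, or 1024 blocks at 512 bytes:

mdts_bytes=524288
echo "$(( mdts_bytes / 4096 )) blocks @ 4096B"   # prints: 128 blocks @ 4096B
echo "$(( mdts_bytes / 512 )) blocks @ 512B"     # prints: 1024 blocks @ 512B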
00:07:31.491 Format NVM: Supported 00:07:31.491 Firmware Activate/Download: Not Supported 00:07:31.491 Namespace Management: Supported 00:07:31.491 Device Self-Test: Not Supported 00:07:31.491 Directives: Supported 00:07:31.491 NVMe-MI: Not Supported 00:07:31.491 Virtualization Management: Not Supported 00:07:31.491 Doorbell Buffer Config: Supported 00:07:31.491 Get LBA Status Capability: Not Supported 00:07:31.491 Command & Feature Lockdown Capability: Not Supported 00:07:31.491 Abort Command Limit: 4 00:07:31.491 Async Event Request Limit: 4 00:07:31.491 Number of Firmware Slots: N/A 00:07:31.491 Firmware Slot 1 Read-Only: N/A 00:07:31.491 Firmware Activation Without Reset: N/A 00:07:31.491 Multiple Update Detection Support: N/A 00:07:31.491 Firmware Update Granularity: No Information Provided 00:07:31.491 Per-Namespace SMART Log: Yes 00:07:31.491 Asymmetric Namespace Access Log Page: Not Supported 00:07:31.491 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:07:31.491 Command Effects Log Page: Supported 00:07:31.492 Get Log Page Extended Data: Supported 00:07:31.492 Telemetry Log Pages: Not Supported 00:07:31.492 Persistent Event Log Pages: Not Supported 00:07:31.492 Supported Log Pages Log Page: May Support 00:07:31.492 Commands Supported & Effects Log Page: Not Supported 00:07:31.492 Feature Identifiers & Effects Log Page: May Support 00:07:31.492 NVMe-MI Commands & Effects Log Page: May Support 00:07:31.492 Data Area 4 for Telemetry Log: Not Supported 00:07:31.492 Error Log Page Entries Supported: 1 00:07:31.492 Keep Alive: Not Supported 00:07:31.492 00:07:31.492 NVM Command Set Attributes 00:07:31.492 ========================== 00:07:31.492 Submission Queue Entry Size 00:07:31.492 Max: 64 00:07:31.492 Min: 64 00:07:31.492 Completion Queue Entry Size 00:07:31.492 Max: 16 00:07:31.492 Min: 16 00:07:31.492 Number of Namespaces: 256 00:07:31.492 Compare Command: Supported 00:07:31.492 Write Uncorrectable Command: Not Supported 00:07:31.492 Dataset Management Command: Supported 00:07:31.492 Write Zeroes Command: Supported 00:07:31.492 Set Features Save Field: Supported 00:07:31.492 Reservations: Not Supported 00:07:31.492 Timestamp: Supported 00:07:31.492 Copy: Supported 00:07:31.492 Volatile Write Cache: Present 00:07:31.492 Atomic Write Unit (Normal): 1 00:07:31.492 Atomic Write Unit (PFail): 1 00:07:31.492 Atomic Compare & Write Unit: 1 00:07:31.492 Fused Compare & Write: Not Supported 00:07:31.492 Scatter-Gather List 00:07:31.492 SGL Command Set: Supported 00:07:31.492 SGL Keyed: Not Supported 00:07:31.492 SGL Bit Bucket Descriptor: Not Supported 00:07:31.492 SGL Metadata Pointer: Not Supported 00:07:31.492 Oversized SGL: Not Supported 00:07:31.492 SGL Metadata Address: Not Supported 00:07:31.492 SGL Offset: Not Supported 00:07:31.492 Transport SGL Data Block: Not Supported 00:07:31.492 Replay Protected Memory Block: Not Supported 00:07:31.492 00:07:31.492 Firmware Slot Information 00:07:31.492 ========================= 00:07:31.492 Active slot: 1 00:07:31.492 Slot 1 Firmware Revision: 1.0 00:07:31.492 00:07:31.492 00:07:31.492 Commands Supported and Effects 00:07:31.492 ============================== 00:07:31.492 Admin Commands 00:07:31.492 -------------- 00:07:31.492 Delete I/O Submission Queue (00h): Supported 00:07:31.492 Create I/O Submission Queue (01h): Supported 00:07:31.492 Get Log Page (02h): Supported 00:07:31.492 Delete I/O Completion Queue (04h): Supported 00:07:31.492 Create I/O Completion Queue (05h): Supported 00:07:31.492 Identify (06h): Supported 00:07:31.492 Abort (08h): Supported
00:07:31.492 Set Features (09h): Supported 00:07:31.492 Get Features (0Ah): Supported 00:07:31.492 Asynchronous Event Request (0Ch): Supported 00:07:31.492 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:31.492 Directive Send (19h): Supported 00:07:31.492 Directive Receive (1Ah): Supported 00:07:31.492 Virtualization Management (1Ch): Supported 00:07:31.492 Doorbell Buffer Config (7Ch): Supported 00:07:31.492 Format NVM (80h): Supported LBA-Change 00:07:31.492 I/O Commands 00:07:31.492 ------------ 00:07:31.492 Flush (00h): Supported LBA-Change 00:07:31.492 Write (01h): Supported LBA-Change 00:07:31.492 Read (02h): Supported 00:07:31.492 Compare (05h): Supported 00:07:31.492 Write Zeroes (08h): Supported LBA-Change 00:07:31.492 Dataset Management (09h): Supported LBA-Change 00:07:31.492 Unknown (0Ch): Supported 00:07:31.492 Unknown (12h): Supported 00:07:31.492 Copy (19h): Supported LBA-Change 00:07:31.492 Unknown (1Dh): Supported LBA-Change 00:07:31.492 00:07:31.492 Error Log 00:07:31.492 ========= 00:07:31.492 00:07:31.492 Arbitration 00:07:31.492 =========== 00:07:31.492 Arbitration Burst: no limit 00:07:31.492 00:07:31.492 Power Management 00:07:31.492 ================ 00:07:31.492 Number of Power States: 1 00:07:31.492 Current Power State: Power State #0 00:07:31.492 Power State #0: 00:07:31.492 Max Power: 25.00 W 00:07:31.492 Non-Operational State: Operational 00:07:31.492 Entry Latency: 16 microseconds 00:07:31.492 Exit Latency: 4 microseconds 00:07:31.492 Relative Read Throughput: 0 00:07:31.492 Relative Read Latency: 0 00:07:31.492 Relative Write Throughput: 0 00:07:31.492 Relative Write Latency: 0 00:07:31.492 Idle Power: Not Reported 00:07:31.492 Active Power: Not Reported 00:07:31.492 Non-Operational Permissive Mode: Not Supported 00:07:31.492 00:07:31.492 Health Information 00:07:31.492 ================== 00:07:31.492 Critical Warnings: 00:07:31.492 Available Spare Space: OK 00:07:31.492 Temperature: OK 00:07:31.492 Device Reliability: OK 00:07:31.492 Read Only: No 00:07:31.492 Volatile Memory Backup: OK 00:07:31.492 Current Temperature: 323 Kelvin (50 Celsius) 00:07:31.492 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:31.492 Available Spare: 0% 00:07:31.492 Available Spare Threshold: 0% 00:07:31.492 Life Percentage Used: 0% 00:07:31.492 Data Units Read: 665 00:07:31.492 Data Units Written: 593 00:07:31.492 Host Read Commands: 38457 00:07:31.492 Host Write Commands: 38243 00:07:31.492 Controller Busy Time: 0 minutes 00:07:31.492 Power Cycles: 0 00:07:31.492 Power On Hours: 0 hours 00:07:31.492 Unsafe Shutdowns: 0 00:07:31.492 Unrecoverable Media Errors: 0 00:07:31.492 Lifetime Error Log Entries: 0 00:07:31.492 Warning Temperature Time: 0 minutes 00:07:31.492 Critical Temperature Time: 0 minutes 00:07:31.492 00:07:31.492 Number of Queues 00:07:31.492 ================ 00:07:31.492 Number of I/O Submission Queues: 64 00:07:31.492 Number of I/O Completion Queues: 64 00:07:31.492 00:07:31.492 ZNS Specific Controller Data 00:07:31.492 ============================ 00:07:31.492 Zone Append Size Limit: 0 00:07:31.492 00:07:31.492 00:07:31.492 Active Namespaces 00:07:31.492 ================= 00:07:31.492 Namespace ID:1 00:07:31.492 Error Recovery Timeout: Unlimited 00:07:31.492 Command Set Identifier: NVM (00h) 00:07:31.492 Deallocate: Supported 00:07:31.492 Deallocated/Unwritten Error: Supported 00:07:31.492 Deallocated Read Value: All 0x00 00:07:31.492 Deallocate in Write Zeroes: Not Supported 00:07:31.492 Deallocated Guard Field: 0xFFFF 00:07:31.492 Flush: 
Supported 00:07:31.492 Reservation: Not Supported 00:07:31.492 Metadata Transferred as: Separate Metadata Buffer 00:07:31.492 Namespace Sharing Capabilities: Private 00:07:31.492 Size (in LBAs): 1548666 (5GiB) 00:07:31.492 Capacity (in LBAs): 1548666 (5GiB) 00:07:31.492 Utilization (in LBAs): 1548666 (5GiB) 00:07:31.492 Thin Provisioning: Not Supported 00:07:31.492 Per-NS Atomic Units: No 00:07:31.492 Maximum Single Source Range Length: 128 00:07:31.492 Maximum Copy Length: 128 00:07:31.492 Maximum Source Range Count: 128 00:07:31.492 NGUID/EUI64 Never Reused: No 00:07:31.492 Namespace Write Protected: No 00:07:31.492 Number of LBA Formats: 8 00:07:31.492 Current LBA Format: LBA Format #07 00:07:31.492 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:31.492 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:31.492 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:31.492 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:31.492 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:31.492 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:31.492 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:31.492 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:31.492 00:07:31.492 NVM Specific Namespace Data 00:07:31.492 =========================== 00:07:31.492 Logical Block Storage Tag Mask: 0 00:07:31.492 Protection Information Capabilities: 00:07:31.492 16b Guard Protection Information Storage Tag Support: No 00:07:31.492 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:31.492 Storage Tag Check Read Support: No 00:07:31.492 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.492 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.492 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.492 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.492 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.492 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.492 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.492 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.492 23:28:19 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:31.492 23:28:19 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:07:31.753 ===================================================== 00:07:31.753 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:31.753 ===================================================== 00:07:31.753 Controller Capabilities/Features 00:07:31.753 ================================ 00:07:31.753 Vendor ID: 1b36 00:07:31.753 Subsystem Vendor ID: 1af4 00:07:31.753 Serial Number: 12341 00:07:31.753 Model Number: QEMU NVMe Ctrl 00:07:31.753 Firmware Version: 8.0.0 00:07:31.753 Recommended Arb Burst: 6 00:07:31.753 IEEE OUI Identifier: 00 54 52 00:07:31.753 Multi-path I/O 00:07:31.753 May have multiple subsystem ports: No 00:07:31.753 May have multiple controllers: No 00:07:31.753 Associated with SR-IOV VF: No 00:07:31.753 Max Data Transfer Size: 524288 00:07:31.753 Max Number of Namespaces: 256 00:07:31.753 Max Number of I/O Queues: 64 00:07:31.753 NVMe 
Specification Version (VS): 1.4 00:07:31.753 NVMe Specification Version (Identify): 1.4 00:07:31.753 Maximum Queue Entries: 2048 00:07:31.753 Contiguous Queues Required: Yes 00:07:31.753 Arbitration Mechanisms Supported 00:07:31.753 Weighted Round Robin: Not Supported 00:07:31.753 Vendor Specific: Not Supported 00:07:31.753 Reset Timeout: 7500 ms 00:07:31.753 Doorbell Stride: 4 bytes 00:07:31.753 NVM Subsystem Reset: Not Supported 00:07:31.753 Command Sets Supported 00:07:31.753 NVM Command Set: Supported 00:07:31.753 Boot Partition: Not Supported 00:07:31.753 Memory Page Size Minimum: 4096 bytes 00:07:31.753 Memory Page Size Maximum: 65536 bytes 00:07:31.753 Persistent Memory Region: Not Supported 00:07:31.753 Optional Asynchronous Events Supported 00:07:31.753 Namespace Attribute Notices: Supported 00:07:31.753 Firmware Activation Notices: Not Supported 00:07:31.753 ANA Change Notices: Not Supported 00:07:31.753 PLE Aggregate Log Change Notices: Not Supported 00:07:31.753 LBA Status Info Alert Notices: Not Supported 00:07:31.753 EGE Aggregate Log Change Notices: Not Supported 00:07:31.753 Normal NVM Subsystem Shutdown event: Not Supported 00:07:31.753 Zone Descriptor Change Notices: Not Supported 00:07:31.753 Discovery Log Change Notices: Not Supported 00:07:31.753 Controller Attributes 00:07:31.753 128-bit Host Identifier: Not Supported 00:07:31.753 Non-Operational Permissive Mode: Not Supported 00:07:31.753 NVM Sets: Not Supported 00:07:31.753 Read Recovery Levels: Not Supported 00:07:31.753 Endurance Groups: Not Supported 00:07:31.753 Predictable Latency Mode: Not Supported 00:07:31.753 Traffic Based Keep Alive: Not Supported 00:07:31.753 Namespace Granularity: Not Supported 00:07:31.753 SQ Associations: Not Supported 00:07:31.753 UUID List: Not Supported 00:07:31.753 Multi-Domain Subsystem: Not Supported 00:07:31.753 Fixed Capacity Management: Not Supported 00:07:31.753 Variable Capacity Management: Not Supported 00:07:31.753 Delete Endurance Group: Not Supported 00:07:31.753 Delete NVM Set: Not Supported 00:07:31.753 Extended LBA Formats Supported: Supported 00:07:31.753 Flexible Data Placement Supported: Not Supported 00:07:31.753 00:07:31.753 Controller Memory Buffer Support 00:07:31.753 ================================ 00:07:31.753 Supported: No 00:07:31.753 00:07:31.753 Persistent Memory Region Support 00:07:31.753 ================================ 00:07:31.753 Supported: No 00:07:31.753 00:07:31.753 Admin Command Set Attributes 00:07:31.753 ============================ 00:07:31.753 Security Send/Receive: Not Supported 00:07:31.753 Format NVM: Supported 00:07:31.753 Firmware Activate/Download: Not Supported 00:07:31.753 Namespace Management: Supported 00:07:31.753 Device Self-Test: Not Supported 00:07:31.753 Directives: Supported 00:07:31.753 NVMe-MI: Not Supported 00:07:31.753 Virtualization Management: Not Supported 00:07:31.753 Doorbell Buffer Config: Supported 00:07:31.753 Get LBA Status Capability: Not Supported 00:07:31.753 Command & Feature Lockdown Capability: Not Supported 00:07:31.753 Abort Command Limit: 4 00:07:31.753 Async Event Request Limit: 4 00:07:31.753 Number of Firmware Slots: N/A 00:07:31.753 Firmware Slot 1 Read-Only: N/A 00:07:31.753 Firmware Activation Without Reset: N/A 00:07:31.753 Multiple Update Detection Support: N/A 00:07:31.753 Firmware Update Granularity: No Information Provided 00:07:31.753 Per-Namespace SMART Log: Yes 00:07:31.753 Asymmetric Namespace Access Log Page: Not Supported 00:07:31.753 Subsystem NQN: nqn.2019-08.org.qemu:12341
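The per-bdf loop in nvme.sh runs the full identify report for each controller; the same binary can also be pointed at a single controller when only a few fields matter. A hypothetical one-off check against this 12341 controller, reusing the exact binary path and traddr shown in the log (the grep filter is an illustration, not part of the test):

/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
    -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 \
  | grep -E 'Serial Number|Model Number|Subsystem NQN'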
00:07:31.753 Command Effects Log Page: Supported 00:07:31.753 Get Log Page Extended Data: Supported 00:07:31.753 Telemetry Log Pages: Not Supported 00:07:31.754 Persistent Event Log Pages: Not Supported 00:07:31.754 Supported Log Pages Log Page: May Support 00:07:31.754 Commands Supported & Effects Log Page: Not Supported 00:07:31.754 Feature Identifiers & Effects Log Page: May Support 00:07:31.754 NVMe-MI Commands & Effects Log Page: May Support 00:07:31.754 Data Area 4 for Telemetry Log: Not Supported 00:07:31.754 Error Log Page Entries Supported: 1 00:07:31.754 Keep Alive: Not Supported 00:07:31.754 00:07:31.754 NVM Command Set Attributes 00:07:31.754 ========================== 00:07:31.754 Submission Queue Entry Size 00:07:31.754 Max: 64 00:07:31.754 Min: 64 00:07:31.754 Completion Queue Entry Size 00:07:31.754 Max: 16 00:07:31.754 Min: 16 00:07:31.754 Number of Namespaces: 256 00:07:31.754 Compare Command: Supported 00:07:31.754 Write Uncorrectable Command: Not Supported 00:07:31.754 Dataset Management Command: Supported 00:07:31.754 Write Zeroes Command: Supported 00:07:31.754 Set Features Save Field: Supported 00:07:31.754 Reservations: Not Supported 00:07:31.754 Timestamp: Supported 00:07:31.754 Copy: Supported 00:07:31.754 Volatile Write Cache: Present 00:07:31.754 Atomic Write Unit (Normal): 1 00:07:31.754 Atomic Write Unit (PFail): 1 00:07:31.754 Atomic Compare & Write Unit: 1 00:07:31.754 Fused Compare & Write: Not Supported 00:07:31.754 Scatter-Gather List 00:07:31.754 SGL Command Set: Supported 00:07:31.754 SGL Keyed: Not Supported 00:07:31.754 SGL Bit Bucket Descriptor: Not Supported 00:07:31.754 SGL Metadata Pointer: Not Supported 00:07:31.754 Oversized SGL: Not Supported 00:07:31.754 SGL Metadata Address: Not Supported 00:07:31.754 SGL Offset: Not Supported 00:07:31.754 Transport SGL Data Block: Not Supported 00:07:31.754 Replay Protected Memory Block: Not Supported 00:07:31.754 00:07:31.754 Firmware Slot Information 00:07:31.754 ========================= 00:07:31.754 Active slot: 1 00:07:31.754 Slot 1 Firmware Revision: 1.0 00:07:31.754 00:07:31.754 00:07:31.754 Commands Supported and Effects 00:07:31.754 ============================== 00:07:31.754 Admin Commands 00:07:31.754 -------------- 00:07:31.754 Delete I/O Submission Queue (00h): Supported 00:07:31.754 Create I/O Submission Queue (01h): Supported 00:07:31.754 Get Log Page (02h): Supported 00:07:31.754 Delete I/O Completion Queue (04h): Supported 00:07:31.754 Create I/O Completion Queue (05h): Supported 00:07:31.754 Identify (06h): Supported 00:07:31.754 Abort (08h): Supported 00:07:31.754 Set Features (09h): Supported 00:07:31.754 Get Features (0Ah): Supported 00:07:31.754 Asynchronous Event Request (0Ch): Supported 00:07:31.754 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:31.754 Directive Send (19h): Supported 00:07:31.754 Directive Receive (1Ah): Supported 00:07:31.754 Virtualization Management (1Ch): Supported 00:07:31.754 Doorbell Buffer Config (7Ch): Supported 00:07:31.754 Format NVM (80h): Supported LBA-Change 00:07:31.754 I/O Commands 00:07:31.754 ------------ 00:07:31.754 Flush (00h): Supported LBA-Change 00:07:31.754 Write (01h): Supported LBA-Change 00:07:31.754 Read (02h): Supported 00:07:31.754 Compare (05h): Supported 00:07:31.754 Write Zeroes (08h): Supported LBA-Change 00:07:31.754 Dataset Management (09h): Supported LBA-Change 00:07:31.754 Unknown (0Ch): Supported 00:07:31.754 Unknown (12h): Supported 00:07:31.754 Copy (19h): Supported LBA-Change 00:07:31.754 Unknown (1Dh):
Supported LBA-Change 00:07:31.754 00:07:31.754 Error Log 00:07:31.754 ========= 00:07:31.754 00:07:31.754 Arbitration 00:07:31.754 =========== 00:07:31.754 Arbitration Burst: no limit 00:07:31.754 00:07:31.754 Power Management 00:07:31.754 ================ 00:07:31.754 Number of Power States: 1 00:07:31.754 Current Power State: Power State #0 00:07:31.754 Power State #0: 00:07:31.754 Max Power: 25.00 W 00:07:31.754 Non-Operational State: Operational 00:07:31.754 Entry Latency: 16 microseconds 00:07:31.754 Exit Latency: 4 microseconds 00:07:31.754 Relative Read Throughput: 0 00:07:31.754 Relative Read Latency: 0 00:07:31.754 Relative Write Throughput: 0 00:07:31.754 Relative Write Latency: 0 00:07:31.754 Idle Power: Not Reported 00:07:31.754 Active Power: Not Reported 00:07:31.754 Non-Operational Permissive Mode: Not Supported 00:07:31.754 00:07:31.754 Health Information 00:07:31.754 ================== 00:07:31.754 Critical Warnings: 00:07:31.754 Available Spare Space: OK 00:07:31.754 Temperature: OK 00:07:31.754 Device Reliability: OK 00:07:31.754 Read Only: No 00:07:31.754 Volatile Memory Backup: OK 00:07:31.754 Current Temperature: 323 Kelvin (50 Celsius) 00:07:31.754 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:31.754 Available Spare: 0% 00:07:31.754 Available Spare Threshold: 0% 00:07:31.754 Life Percentage Used: 0% 00:07:31.754 Data Units Read: 1040 00:07:31.754 Data Units Written: 913 00:07:31.754 Host Read Commands: 57689 00:07:31.754 Host Write Commands: 56585 00:07:31.754 Controller Busy Time: 0 minutes 00:07:31.754 Power Cycles: 0 00:07:31.754 Power On Hours: 0 hours 00:07:31.754 Unsafe Shutdowns: 0 00:07:31.754 Unrecoverable Media Errors: 0 00:07:31.754 Lifetime Error Log Entries: 0 00:07:31.754 Warning Temperature Time: 0 minutes 00:07:31.754 Critical Temperature Time: 0 minutes 00:07:31.754 00:07:31.754 Number of Queues 00:07:31.754 ================ 00:07:31.754 Number of I/O Submission Queues: 64 00:07:31.754 Number of I/O Completion Queues: 64 00:07:31.754 00:07:31.754 ZNS Specific Controller Data 00:07:31.754 ============================ 00:07:31.754 Zone Append Size Limit: 0 00:07:31.754 00:07:31.754 00:07:31.754 Active Namespaces 00:07:31.754 ================= 00:07:31.754 Namespace ID:1 00:07:31.754 Error Recovery Timeout: Unlimited 00:07:31.754 Command Set Identifier: NVM (00h) 00:07:31.754 Deallocate: Supported 00:07:31.754 Deallocated/Unwritten Error: Supported 00:07:31.754 Deallocated Read Value: All 0x00 00:07:31.754 Deallocate in Write Zeroes: Not Supported 00:07:31.754 Deallocated Guard Field: 0xFFFF 00:07:31.754 Flush: Supported 00:07:31.754 Reservation: Not Supported 00:07:31.754 Namespace Sharing Capabilities: Private 00:07:31.754 Size (in LBAs): 1310720 (5GiB) 00:07:31.754 Capacity (in LBAs): 1310720 (5GiB) 00:07:31.754 Utilization (in LBAs): 1310720 (5GiB) 00:07:31.754 Thin Provisioning: Not Supported 00:07:31.754 Per-NS Atomic Units: No 00:07:31.754 Maximum Single Source Range Length: 128 00:07:31.754 Maximum Copy Length: 128 00:07:31.754 Maximum Source Range Count: 128 00:07:31.754 NGUID/EUI64 Never Reused: No 00:07:31.754 Namespace Write Protected: No 00:07:31.754 Number of LBA Formats: 8 00:07:31.754 Current LBA Format: LBA Format #04 00:07:31.754 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:31.754 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:31.754 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:31.754 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:31.754 LBA Format #04: Data Size: 4096 Metadata Size: 0 
00:07:31.754 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:31.754 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:31.754 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:31.754 00:07:31.754 NVM Specific Namespace Data 00:07:31.754 =========================== 00:07:31.754 Logical Block Storage Tag Mask: 0 00:07:31.754 Protection Information Capabilities: 00:07:31.754 16b Guard Protection Information Storage Tag Support: No 00:07:31.754 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:31.754 Storage Tag Check Read Support: No 00:07:31.754 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.754 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.754 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.754 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.754 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.754 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.754 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.754 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.754 23:28:19 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:31.754 23:28:19 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:07:32.016 ===================================================== 00:07:32.016 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:32.016 ===================================================== 00:07:32.016 Controller Capabilities/Features 00:07:32.016 ================================ 00:07:32.016 Vendor ID: 1b36 00:07:32.016 Subsystem Vendor ID: 1af4 00:07:32.016 Serial Number: 12342 00:07:32.016 Model Number: QEMU NVMe Ctrl 00:07:32.016 Firmware Version: 8.0.0 00:07:32.016 Recommended Arb Burst: 6 00:07:32.016 IEEE OUI Identifier: 00 54 52 00:07:32.016 Multi-path I/O 00:07:32.016 May have multiple subsystem ports: No 00:07:32.016 May have multiple controllers: No 00:07:32.016 Associated with SR-IOV VF: No 00:07:32.016 Max Data Transfer Size: 524288 00:07:32.016 Max Number of Namespaces: 256 00:07:32.016 Max Number of I/O Queues: 64 00:07:32.016 NVMe Specification Version (VS): 1.4 00:07:32.016 NVMe Specification Version (Identify): 1.4 00:07:32.016 Maximum Queue Entries: 2048 00:07:32.016 Contiguous Queues Required: Yes 00:07:32.016 Arbitration Mechanisms Supported 00:07:32.016 Weighted Round Robin: Not Supported 00:07:32.016 Vendor Specific: Not Supported 00:07:32.016 Reset Timeout: 7500 ms 00:07:32.016 Doorbell Stride: 4 bytes 00:07:32.016 NVM Subsystem Reset: Not Supported 00:07:32.016 Command Sets Supported 00:07:32.016 NVM Command Set: Supported 00:07:32.016 Boot Partition: Not Supported 00:07:32.016 Memory Page Size Minimum: 4096 bytes 00:07:32.016 Memory Page Size Maximum: 65536 bytes 00:07:32.016 Persistent Memory Region: Not Supported 00:07:32.016 Optional Asynchronous Events Supported 00:07:32.016 Namespace Attribute Notices: Supported 00:07:32.016 Firmware Activation Notices: Not Supported 00:07:32.016 ANA Change Notices: Not Supported 00:07:32.016 PLE Aggregate Log Change Notices: Not Supported 00:07:32.016 LBA Status Info Alert Notices: 
Not Supported 00:07:32.016 EGE Aggregate Log Change Notices: Not Supported 00:07:32.016 Normal NVM Subsystem Shutdown event: Not Supported 00:07:32.016 Zone Descriptor Change Notices: Not Supported 00:07:32.016 Discovery Log Change Notices: Not Supported 00:07:32.016 Controller Attributes 00:07:32.016 128-bit Host Identifier: Not Supported 00:07:32.016 Non-Operational Permissive Mode: Not Supported 00:07:32.016 NVM Sets: Not Supported 00:07:32.016 Read Recovery Levels: Not Supported 00:07:32.016 Endurance Groups: Not Supported 00:07:32.016 Predictable Latency Mode: Not Supported 00:07:32.016 Traffic Based Keep Alive: Not Supported 00:07:32.016 Namespace Granularity: Not Supported 00:07:32.016 SQ Associations: Not Supported 00:07:32.016 UUID List: Not Supported 00:07:32.016 Multi-Domain Subsystem: Not Supported 00:07:32.016 Fixed Capacity Management: Not Supported 00:07:32.016 Variable Capacity Management: Not Supported 00:07:32.016 Delete Endurance Group: Not Supported 00:07:32.016 Delete NVM Set: Not Supported 00:07:32.016 Extended LBA Formats Supported: Supported 00:07:32.016 Flexible Data Placement Supported: Not Supported 00:07:32.016 00:07:32.016 Controller Memory Buffer Support 00:07:32.016 ================================ 00:07:32.016 Supported: No 00:07:32.016 00:07:32.016 Persistent Memory Region Support 00:07:32.016 ================================ 00:07:32.017 Supported: No 00:07:32.017 00:07:32.017 Admin Command Set Attributes 00:07:32.017 ============================ 00:07:32.017 Security Send/Receive: Not Supported 00:07:32.017 Format NVM: Supported 00:07:32.017 Firmware Activate/Download: Not Supported 00:07:32.017 Namespace Management: Supported 00:07:32.017 Device Self-Test: Not Supported 00:07:32.017 Directives: Supported 00:07:32.017 NVMe-MI: Not Supported 00:07:32.017 Virtualization Management: Not Supported 00:07:32.017 Doorbell Buffer Config: Supported 00:07:32.017 Get LBA Status Capability: Not Supported 00:07:32.017 Command & Feature Lockdown Capability: Not Supported 00:07:32.017 Abort Command Limit: 4 00:07:32.017 Async Event Request Limit: 4 00:07:32.017 Number of Firmware Slots: N/A 00:07:32.017 Firmware Slot 1 Read-Only: N/A 00:07:32.017 Firmware Activation Without Reset: N/A 00:07:32.017 Multiple Update Detection Support: N/A 00:07:32.017 Firmware Update Granularity: No Information Provided 00:07:32.017 Per-Namespace SMART Log: Yes 00:07:32.017 Asymmetric Namespace Access Log Page: Not Supported 00:07:32.017 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:07:32.017 Command Effects Log Page: Supported 00:07:32.017 Get Log Page Extended Data: Supported 00:07:32.017 Telemetry Log Pages: Not Supported 00:07:32.017 Persistent Event Log Pages: Not Supported 00:07:32.017 Supported Log Pages Log Page: May Support 00:07:32.017 Commands Supported & Effects Log Page: Not Supported 00:07:32.017 Feature Identifiers & Effects Log Page: May Support 00:07:32.017 NVMe-MI Commands & Effects Log Page: May Support 00:07:32.017 Data Area 4 for Telemetry Log: Not Supported 00:07:32.017 Error Log Page Entries Supported: 1 00:07:32.017 Keep Alive: Not Supported 00:07:32.017 00:07:32.017 NVM Command Set Attributes 00:07:32.017 ========================== 00:07:32.017 Submission Queue Entry Size 00:07:32.017 Max: 64 00:07:32.017 Min: 64 00:07:32.017 Completion Queue Entry Size 00:07:32.017 Max: 16 00:07:32.017 Min: 16 00:07:32.017 Number of Namespaces: 256 00:07:32.017 Compare Command: Supported 00:07:32.017 Write Uncorrectable Command: Not Supported 00:07:32.017 Dataset Management Command:
Supported 00:07:32.017 Write Zeroes Command: Supported 00:07:32.017 Set Features Save Field: Supported 00:07:32.017 Reservations: Not Supported 00:07:32.017 Timestamp: Supported 00:07:32.017 Copy: Supported 00:07:32.017 Volatile Write Cache: Present 00:07:32.017 Atomic Write Unit (Normal): 1 00:07:32.017 Atomic Write Unit (PFail): 1 00:07:32.017 Atomic Compare & Write Unit: 1 00:07:32.017 Fused Compare & Write: Not Supported 00:07:32.017 Scatter-Gather List 00:07:32.017 SGL Command Set: Supported 00:07:32.017 SGL Keyed: Not Supported 00:07:32.017 SGL Bit Bucket Descriptor: Not Supported 00:07:32.017 SGL Metadata Pointer: Not Supported 00:07:32.017 Oversized SGL: Not Supported 00:07:32.017 SGL Metadata Address: Not Supported 00:07:32.017 SGL Offset: Not Supported 00:07:32.017 Transport SGL Data Block: Not Supported 00:07:32.017 Replay Protected Memory Block: Not Supported 00:07:32.017 00:07:32.017 Firmware Slot Information 00:07:32.017 ========================= 00:07:32.017 Active slot: 1 00:07:32.017 Slot 1 Firmware Revision: 1.0 00:07:32.017 00:07:32.017 00:07:32.017 Commands Supported and Effects 00:07:32.017 ============================== 00:07:32.017 Admin Commands 00:07:32.017 -------------- 00:07:32.017 Delete I/O Submission Queue (00h): Supported 00:07:32.017 Create I/O Submission Queue (01h): Supported 00:07:32.017 Get Log Page (02h): Supported 00:07:32.017 Delete I/O Completion Queue (04h): Supported 00:07:32.017 Create I/O Completion Queue (05h): Supported 00:07:32.017 Identify (06h): Supported 00:07:32.017 Abort (08h): Supported 00:07:32.017 Set Features (09h): Supported 00:07:32.017 Get Features (0Ah): Supported 00:07:32.017 Asynchronous Event Request (0Ch): Supported 00:07:32.017 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:32.017 Directive Send (19h): Supported 00:07:32.017 Directive Receive (1Ah): Supported 00:07:32.017 Virtualization Management (1Ch): Supported 00:07:32.017 Doorbell Buffer Config (7Ch): Supported 00:07:32.017 Format NVM (80h): Supported LBA-Change 00:07:32.017 I/O Commands 00:07:32.017 ------------ 00:07:32.017 Flush (00h): Supported LBA-Change 00:07:32.017 Write (01h): Supported LBA-Change 00:07:32.017 Read (02h): Supported 00:07:32.017 Compare (05h): Supported 00:07:32.017 Write Zeroes (08h): Supported LBA-Change 00:07:32.017 Dataset Management (09h): Supported LBA-Change 00:07:32.017 Unknown (0Ch): Supported 00:07:32.017 Unknown (12h): Supported 00:07:32.017 Copy (19h): Supported LBA-Change 00:07:32.017 Unknown (1Dh): Supported LBA-Change 00:07:32.017 00:07:32.017 Error Log 00:07:32.017 ========= 00:07:32.017 00:07:32.017 Arbitration 00:07:32.017 =========== 00:07:32.017 Arbitration Burst: no limit 00:07:32.017 00:07:32.017 Power Management 00:07:32.017 ================ 00:07:32.017 Number of Power States: 1 00:07:32.017 Current Power State: Power State #0 00:07:32.017 Power State #0: 00:07:32.017 Max Power: 25.00 W 00:07:32.017 Non-Operational State: Operational 00:07:32.017 Entry Latency: 16 microseconds 00:07:32.017 Exit Latency: 4 microseconds 00:07:32.017 Relative Read Throughput: 0 00:07:32.017 Relative Read Latency: 0 00:07:32.017 Relative Write Throughput: 0 00:07:32.017 Relative Write Latency: 0 00:07:32.017 Idle Power: Not Reported 00:07:32.017 Active Power: Not Reported 00:07:32.017 Non-Operational Permissive Mode: Not Supported 00:07:32.017 00:07:32.017 Health Information 00:07:32.017 ================== 00:07:32.017 Critical Warnings: 00:07:32.017 Available Spare Space: OK 00:07:32.017 Temperature: OK 00:07:32.017 Device 
Reliability: OK 00:07:32.017 Read Only: No 00:07:32.017 Volatile Memory Backup: OK 00:07:32.017 Current Temperature: 323 Kelvin (50 Celsius) 00:07:32.017 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:32.017 Available Spare: 0% 00:07:32.017 Available Spare Threshold: 0% 00:07:32.017 Life Percentage Used: 0% 00:07:32.017 Data Units Read: 2355 00:07:32.017 Data Units Written: 2142 00:07:32.017 Host Read Commands: 119375 00:07:32.017 Host Write Commands: 117645 00:07:32.017 Controller Busy Time: 0 minutes 00:07:32.017 Power Cycles: 0 00:07:32.017 Power On Hours: 0 hours 00:07:32.017 Unsafe Shutdowns: 0 00:07:32.017 Unrecoverable Media Errors: 0 00:07:32.017 Lifetime Error Log Entries: 0 00:07:32.017 Warning Temperature Time: 0 minutes 00:07:32.017 Critical Temperature Time: 0 minutes 00:07:32.017 00:07:32.017 Number of Queues 00:07:32.017 ================ 00:07:32.017 Number of I/O Submission Queues: 64 00:07:32.017 Number of I/O Completion Queues: 64 00:07:32.017 00:07:32.017 ZNS Specific Controller Data 00:07:32.017 ============================ 00:07:32.017 Zone Append Size Limit: 0 00:07:32.017 00:07:32.017 00:07:32.017 Active Namespaces 00:07:32.017 ================= 00:07:32.017 Namespace ID:1 00:07:32.017 Error Recovery Timeout: Unlimited 00:07:32.017 Command Set Identifier: NVM (00h) 00:07:32.017 Deallocate: Supported 00:07:32.017 Deallocated/Unwritten Error: Supported 00:07:32.017 Deallocated Read Value: All 0x00 00:07:32.017 Deallocate in Write Zeroes: Not Supported 00:07:32.017 Deallocated Guard Field: 0xFFFF 00:07:32.017 Flush: Supported 00:07:32.017 Reservation: Not Supported 00:07:32.017 Namespace Sharing Capabilities: Private 00:07:32.017 Size (in LBAs): 1048576 (4GiB) 00:07:32.017 Capacity (in LBAs): 1048576 (4GiB) 00:07:32.017 Utilization (in LBAs): 1048576 (4GiB) 00:07:32.017 Thin Provisioning: Not Supported 00:07:32.017 Per-NS Atomic Units: No 00:07:32.017 Maximum Single Source Range Length: 128 00:07:32.017 Maximum Copy Length: 128 00:07:32.017 Maximum Source Range Count: 128 00:07:32.017 NGUID/EUI64 Never Reused: No 00:07:32.017 Namespace Write Protected: No 00:07:32.017 Number of LBA Formats: 8 00:07:32.017 Current LBA Format: LBA Format #04 00:07:32.017 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:32.017 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:32.017 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:32.017 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:32.017 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:32.017 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:32.017 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:32.017 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:32.017 00:07:32.017 NVM Specific Namespace Data 00:07:32.017 =========================== 00:07:32.017 Logical Block Storage Tag Mask: 0 00:07:32.017 Protection Information Capabilities: 00:07:32.017 16b Guard Protection Information Storage Tag Support: No 00:07:32.017 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:32.017 Storage Tag Check Read Support: No 00:07:32.017 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.017 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.017 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.017 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.017 Extended LBA Format #04: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.017 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.017 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.017 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.017 Namespace ID:2 00:07:32.017 Error Recovery Timeout: Unlimited 00:07:32.017 Command Set Identifier: NVM (00h) 00:07:32.017 Deallocate: Supported 00:07:32.017 Deallocated/Unwritten Error: Supported 00:07:32.017 Deallocated Read Value: All 0x00 00:07:32.017 Deallocate in Write Zeroes: Not Supported 00:07:32.017 Deallocated Guard Field: 0xFFFF 00:07:32.017 Flush: Supported 00:07:32.017 Reservation: Not Supported 00:07:32.017 Namespace Sharing Capabilities: Private 00:07:32.017 Size (in LBAs): 1048576 (4GiB) 00:07:32.017 Capacity (in LBAs): 1048576 (4GiB) 00:07:32.017 Utilization (in LBAs): 1048576 (4GiB) 00:07:32.017 Thin Provisioning: Not Supported 00:07:32.017 Per-NS Atomic Units: No 00:07:32.017 Maximum Single Source Range Length: 128 00:07:32.017 Maximum Copy Length: 128 00:07:32.017 Maximum Source Range Count: 128 00:07:32.017 NGUID/EUI64 Never Reused: No 00:07:32.017 Namespace Write Protected: No 00:07:32.017 Number of LBA Formats: 8 00:07:32.017 Current LBA Format: LBA Format #04 00:07:32.017 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:32.017 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:32.017 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:32.017 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:32.017 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:32.017 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:32.017 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:32.017 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:32.017 00:07:32.017 NVM Specific Namespace Data 00:07:32.017 =========================== 00:07:32.017 Logical Block Storage Tag Mask: 0 00:07:32.017 Protection Information Capabilities: 00:07:32.017 16b Guard Protection Information Storage Tag Support: No 00:07:32.017 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:32.017 Storage Tag Check Read Support: No 00:07:32.017 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.017 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.017 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.017 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.017 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.017 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.017 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.017 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.017 Namespace ID:3 00:07:32.017 Error Recovery Timeout: Unlimited 00:07:32.017 Command Set Identifier: NVM (00h) 00:07:32.017 Deallocate: Supported 00:07:32.017 Deallocated/Unwritten Error: Supported 00:07:32.017 Deallocated Read Value: All 0x00 00:07:32.017 Deallocate in Write Zeroes: Not Supported 00:07:32.017 Deallocated Guard Field: 0xFFFF 00:07:32.017 Flush: Supported 00:07:32.017 Reservation: Not Supported 00:07:32.017 
Namespace Sharing Capabilities: Private 00:07:32.017 Size (in LBAs): 1048576 (4GiB) 00:07:32.017 Capacity (in LBAs): 1048576 (4GiB) 00:07:32.017 Utilization (in LBAs): 1048576 (4GiB) 00:07:32.017 Thin Provisioning: Not Supported 00:07:32.017 Per-NS Atomic Units: No 00:07:32.017 Maximum Single Source Range Length: 128 00:07:32.017 Maximum Copy Length: 128 00:07:32.017 Maximum Source Range Count: 128 00:07:32.017 NGUID/EUI64 Never Reused: No 00:07:32.017 Namespace Write Protected: No 00:07:32.017 Number of LBA Formats: 8 00:07:32.017 Current LBA Format: LBA Format #04 00:07:32.017 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:32.017 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:32.017 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:32.017 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:32.017 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:32.017 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:32.018 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:32.018 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:32.018 00:07:32.018 NVM Specific Namespace Data 00:07:32.018 =========================== 00:07:32.018 Logical Block Storage Tag Mask: 0 00:07:32.018 Protection Information Capabilities: 00:07:32.018 16b Guard Protection Information Storage Tag Support: No 00:07:32.018 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:32.018 Storage Tag Check Read Support: No 00:07:32.018 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.018 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.018 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.018 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.018 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.018 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.018 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.018 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.018 23:28:20 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:32.018 23:28:20 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:07:32.292 ===================================================== 00:07:32.292 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:32.292 ===================================================== 00:07:32.292 Controller Capabilities/Features 00:07:32.292 ================================ 00:07:32.292 Vendor ID: 1b36 00:07:32.292 Subsystem Vendor ID: 1af4 00:07:32.292 Serial Number: 12343 00:07:32.292 Model Number: QEMU NVMe Ctrl 00:07:32.292 Firmware Version: 8.0.0 00:07:32.292 Recommended Arb Burst: 6 00:07:32.292 IEEE OUI Identifier: 00 54 52 00:07:32.292 Multi-path I/O 00:07:32.292 May have multiple subsystem ports: No 00:07:32.292 May have multiple controllers: Yes 00:07:32.292 Associated with SR-IOV VF: No 00:07:32.292 Max Data Transfer Size: 524288 00:07:32.292 Max Number of Namespaces: 256 00:07:32.292 Max Number of I/O Queues: 64 00:07:32.292 NVMe Specification Version (VS): 1.4 00:07:32.292 NVMe Specification Version (Identify): 1.4 00:07:32.292 Maximum Queue Entries: 2048 
00:07:32.292 Contiguous Queues Required: Yes 00:07:32.292 Arbitration Mechanisms Supported 00:07:32.292 Weighted Round Robin: Not Supported 00:07:32.292 Vendor Specific: Not Supported 00:07:32.292 Reset Timeout: 7500 ms 00:07:32.292 Doorbell Stride: 4 bytes 00:07:32.292 NVM Subsystem Reset: Not Supported 00:07:32.292 Command Sets Supported 00:07:32.292 NVM Command Set: Supported 00:07:32.292 Boot Partition: Not Supported 00:07:32.292 Memory Page Size Minimum: 4096 bytes 00:07:32.292 Memory Page Size Maximum: 65536 bytes 00:07:32.292 Persistent Memory Region: Not Supported 00:07:32.292 Optional Asynchronous Events Supported 00:07:32.292 Namespace Attribute Notices: Supported 00:07:32.292 Firmware Activation Notices: Not Supported 00:07:32.292 ANA Change Notices: Not Supported 00:07:32.292 PLE Aggregate Log Change Notices: Not Supported 00:07:32.292 LBA Status Info Alert Notices: Not Supported 00:07:32.292 EGE Aggregate Log Change Notices: Not Supported 00:07:32.292 Normal NVM Subsystem Shutdown event: Not Supported 00:07:32.292 Zone Descriptor Change Notices: Not Supported 00:07:32.292 Discovery Log Change Notices: Not Supported 00:07:32.292 Controller Attributes 00:07:32.292 128-bit Host Identifier: Not Supported 00:07:32.292 Non-Operational Permissive Mode: Not Supported 00:07:32.292 NVM Sets: Not Supported 00:07:32.292 Read Recovery Levels: Not Supported 00:07:32.292 Endurance Groups: Supported 00:07:32.292 Predictable Latency Mode: Not Supported 00:07:32.292 Traffic Based Keep Alive: Not Supported 00:07:32.292 Namespace Granularity: Not Supported 00:07:32.292 SQ Associations: Not Supported 00:07:32.292 UUID List: Not Supported 00:07:32.292 Multi-Domain Subsystem: Not Supported 00:07:32.292 Fixed Capacity Management: Not Supported 00:07:32.292 Variable Capacity Management: Not Supported 00:07:32.292 Delete Endurance Group: Not Supported 00:07:32.292 Delete NVM Set: Not Supported 00:07:32.292 Extended LBA Formats Supported: Supported 00:07:32.292 Flexible Data Placement Supported: Supported 00:07:32.292 00:07:32.292 Controller Memory Buffer Support 00:07:32.292 ================================ 00:07:32.292 Supported: No 00:07:32.292 00:07:32.292 Persistent Memory Region Support 00:07:32.292 ================================ 00:07:32.292 Supported: No 00:07:32.292 00:07:32.292 Admin Command Set Attributes 00:07:32.292 ============================ 00:07:32.292 Security Send/Receive: Not Supported 00:07:32.292 Format NVM: Supported 00:07:32.292 Firmware Activate/Download: Not Supported 00:07:32.292 Namespace Management: Supported 00:07:32.292 Device Self-Test: Not Supported 00:07:32.292 Directives: Supported 00:07:32.292 NVMe-MI: Not Supported 00:07:32.292 Virtualization Management: Not Supported 00:07:32.292 Doorbell Buffer Config: Supported 00:07:32.292 Get LBA Status Capability: Not Supported 00:07:32.292 Command & Feature Lockdown Capability: Not Supported 00:07:32.292 Abort Command Limit: 4 00:07:32.292 Async Event Request Limit: 4 00:07:32.292 Number of Firmware Slots: N/A 00:07:32.292 Firmware Slot 1 Read-Only: N/A 00:07:32.292 Firmware Activation Without Reset: N/A 00:07:32.292 Multiple Update Detection Support: N/A 00:07:32.292 Firmware Update Granularity: No Information Provided 00:07:32.292 Per-Namespace SMART Log: Yes 00:07:32.292 Asymmetric Namespace Access Log Page: Not Supported 00:07:32.292 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:07:32.292 Command Effects Log Page: Supported 00:07:32.292 Get Log Page Extended Data: Supported 00:07:32.292 Telemetry Log Pages: Not
Supported 00:07:32.292 Persistent Event Log Pages: Not Supported 00:07:32.292 Supported Log Pages Log Page: May Support 00:07:32.292 Commands Supported & Effects Log Page: Not Supported 00:07:32.292 Feature Identifiers & Effects Log Page: May Support 00:07:32.292 NVMe-MI Commands & Effects Log Page: May Support 00:07:32.292 Data Area 4 for Telemetry Log: Not Supported 00:07:32.292 Error Log Page Entries Supported: 1 00:07:32.292 Keep Alive: Not Supported 00:07:32.292 00:07:32.292 NVM Command Set Attributes 00:07:32.292 ========================== 00:07:32.292 Submission Queue Entry Size 00:07:32.292 Max: 64 00:07:32.292 Min: 64 00:07:32.292 Completion Queue Entry Size 00:07:32.292 Max: 16 00:07:32.292 Min: 16 00:07:32.292 Number of Namespaces: 256 00:07:32.292 Compare Command: Supported 00:07:32.292 Write Uncorrectable Command: Not Supported 00:07:32.292 Dataset Management Command: Supported 00:07:32.292 Write Zeroes Command: Supported 00:07:32.292 Set Features Save Field: Supported 00:07:32.292 Reservations: Not Supported 00:07:32.292 Timestamp: Supported 00:07:32.292 Copy: Supported 00:07:32.292 Volatile Write Cache: Present 00:07:32.292 Atomic Write Unit (Normal): 1 00:07:32.292 Atomic Write Unit (PFail): 1 00:07:32.292 Atomic Compare & Write Unit: 1 00:07:32.292 Fused Compare & Write: Not Supported 00:07:32.293 Scatter-Gather List 00:07:32.293 SGL Command Set: Supported 00:07:32.293 SGL Keyed: Not Supported 00:07:32.293 SGL Bit Bucket Descriptor: Not Supported 00:07:32.293 SGL Metadata Pointer: Not Supported 00:07:32.293 Oversized SGL: Not Supported 00:07:32.293 SGL Metadata Address: Not Supported 00:07:32.293 SGL Offset: Not Supported 00:07:32.293 Transport SGL Data Block: Not Supported 00:07:32.293 Replay Protected Memory Block: Not Supported 00:07:32.293 00:07:32.293 Firmware Slot Information 00:07:32.293 ========================= 00:07:32.293 Active slot: 1 00:07:32.293 Slot 1 Firmware Revision: 1.0 00:07:32.293 00:07:32.293 00:07:32.293 Commands Supported and Effects 00:07:32.293 ============================== 00:07:32.293 Admin Commands 00:07:32.293 -------------- 00:07:32.293 Delete I/O Submission Queue (00h): Supported 00:07:32.293 Create I/O Submission Queue (01h): Supported 00:07:32.293 Get Log Page (02h): Supported 00:07:32.293 Delete I/O Completion Queue (04h): Supported 00:07:32.293 Create I/O Completion Queue (05h): Supported 00:07:32.293 Identify (06h): Supported 00:07:32.293 Abort (08h): Supported 00:07:32.293 Set Features (09h): Supported 00:07:32.293 Get Features (0Ah): Supported 00:07:32.293 Asynchronous Event Request (0Ch): Supported 00:07:32.293 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:32.293 Directive Send (19h): Supported 00:07:32.293 Directive Receive (1Ah): Supported 00:07:32.293 Virtualization Management (1Ch): Supported 00:07:32.293 Doorbell Buffer Config (7Ch): Supported 00:07:32.293 Format NVM (80h): Supported LBA-Change 00:07:32.293 I/O Commands 00:07:32.293 ------------ 00:07:32.293 Flush (00h): Supported LBA-Change 00:07:32.293 Write (01h): Supported LBA-Change 00:07:32.293 Read (02h): Supported 00:07:32.293 Compare (05h): Supported 00:07:32.293 Write Zeroes (08h): Supported LBA-Change 00:07:32.293 Dataset Management (09h): Supported LBA-Change 00:07:32.293 Unknown (0Ch): Supported 00:07:32.293 Unknown (12h): Supported 00:07:32.293 Copy (19h): Supported LBA-Change 00:07:32.293 Unknown (1Dh): Supported LBA-Change 00:07:32.293 00:07:32.293 Error Log 00:07:32.293 ========= 00:07:32.293 00:07:32.293 Arbitration 00:07:32.293 ===========
00:07:32.293 Arbitration Burst: no limit 00:07:32.293 00:07:32.293 Power Management 00:07:32.293 ================ 00:07:32.293 Number of Power States: 1 00:07:32.293 Current Power State: Power State #0 00:07:32.293 Power State #0: 00:07:32.293 Max Power: 25.00 W 00:07:32.293 Non-Operational State: Operational 00:07:32.293 Entry Latency: 16 microseconds 00:07:32.293 Exit Latency: 4 microseconds 00:07:32.293 Relative Read Throughput: 0 00:07:32.293 Relative Read Latency: 0 00:07:32.293 Relative Write Throughput: 0 00:07:32.293 Relative Write Latency: 0 00:07:32.293 Idle Power: Not Reported 00:07:32.293 Active Power: Not Reported 00:07:32.293 Non-Operational Permissive Mode: Not Supported 00:07:32.293 00:07:32.293 Health Information 00:07:32.293 ================== 00:07:32.293 Critical Warnings: 00:07:32.293 Available Spare Space: OK 00:07:32.293 Temperature: OK 00:07:32.293 Device Reliability: OK 00:07:32.293 Read Only: No 00:07:32.293 Volatile Memory Backup: OK 00:07:32.293 Current Temperature: 323 Kelvin (50 Celsius) 00:07:32.293 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:32.293 Available Spare: 0% 00:07:32.293 Available Spare Threshold: 0% 00:07:32.293 Life Percentage Used: 0% 00:07:32.293 Data Units Read: 1125 00:07:32.293 Data Units Written: 1054 00:07:32.293 Host Read Commands: 42588 00:07:32.293 Host Write Commands: 42011 00:07:32.293 Controller Busy Time: 0 minutes 00:07:32.293 Power Cycles: 0 00:07:32.293 Power On Hours: 0 hours 00:07:32.293 Unsafe Shutdowns: 0 00:07:32.293 Unrecoverable Media Errors: 0 00:07:32.293 Lifetime Error Log Entries: 0 00:07:32.293 Warning Temperature Time: 0 minutes 00:07:32.293 Critical Temperature Time: 0 minutes 00:07:32.293 00:07:32.293 Number of Queues 00:07:32.293 ================ 00:07:32.293 Number of I/O Submission Queues: 64 00:07:32.293 Number of I/O Completion Queues: 64 00:07:32.293 00:07:32.293 ZNS Specific Controller Data 00:07:32.293 ============================ 00:07:32.293 Zone Append Size Limit: 0 00:07:32.293 00:07:32.293 00:07:32.293 Active Namespaces 00:07:32.293 ================= 00:07:32.293 Namespace ID:1 00:07:32.293 Error Recovery Timeout: Unlimited 00:07:32.293 Command Set Identifier: NVM (00h) 00:07:32.293 Deallocate: Supported 00:07:32.293 Deallocated/Unwritten Error: Supported 00:07:32.293 Deallocated Read Value: All 0x00 00:07:32.293 Deallocate in Write Zeroes: Not Supported 00:07:32.293 Deallocated Guard Field: 0xFFFF 00:07:32.293 Flush: Supported 00:07:32.293 Reservation: Not Supported 00:07:32.293 Namespace Sharing Capabilities: Multiple Controllers 00:07:32.293 Size (in LBAs): 262144 (1GiB) 00:07:32.293 Capacity (in LBAs): 262144 (1GiB) 00:07:32.293 Utilization (in LBAs): 262144 (1GiB) 00:07:32.293 Thin Provisioning: Not Supported 00:07:32.293 Per-NS Atomic Units: No 00:07:32.293 Maximum Single Source Range Length: 128 00:07:32.293 Maximum Copy Length: 128 00:07:32.293 Maximum Source Range Count: 128 00:07:32.293 NGUID/EUI64 Never Reused: No 00:07:32.293 Namespace Write Protected: No 00:07:32.293 Endurance group ID: 1 00:07:32.293 Number of LBA Formats: 8 00:07:32.293 Current LBA Format: LBA Format #04 00:07:32.293 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:32.293 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:32.293 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:32.293 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:32.293 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:32.293 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:32.293 LBA Format #06: Data Size: 4096 
Metadata Size: 16 00:07:32.293 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:32.293 00:07:32.293 Get Feature FDP: 00:07:32.293 ================ 00:07:32.293 Enabled: Yes 00:07:32.293 FDP configuration index: 0 00:07:32.293 00:07:32.293 FDP configurations log page 00:07:32.293 =========================== 00:07:32.293 Number of FDP configurations: 1 00:07:32.293 Version: 0 00:07:32.293 Size: 112 00:07:32.293 FDP Configuration Descriptor: 0 00:07:32.293 Descriptor Size: 96 00:07:32.293 Reclaim Group Identifier format: 2 00:07:32.293 FDP Volatile Write Cache: Not Present 00:07:32.293 FDP Configuration: Valid 00:07:32.293 Vendor Specific Size: 0 00:07:32.293 Number of Reclaim Groups: 2 00:07:32.293 Number of Reclaim Unit Handles: 8 00:07:32.293 Max Placement Identifiers: 128 00:07:32.293 Number of Namespaces Supported: 256 00:07:32.293 Reclaim Unit Nominal Size: 6000000 bytes 00:07:32.293 Estimated Reclaim Unit Time Limit: Not Reported 00:07:32.293 RUH Desc #000: RUH Type: Initially Isolated 00:07:32.293 RUH Desc #001: RUH Type: Initially Isolated 00:07:32.293 RUH Desc #002: RUH Type: Initially Isolated 00:07:32.293 RUH Desc #003: RUH Type: Initially Isolated 00:07:32.293 RUH Desc #004: RUH Type: Initially Isolated 00:07:32.293 RUH Desc #005: RUH Type: Initially Isolated 00:07:32.293 RUH Desc #006: RUH Type: Initially Isolated 00:07:32.293 RUH Desc #007: RUH Type: Initially Isolated 00:07:32.293 00:07:32.293 FDP reclaim unit handle usage log page 00:07:32.293 ====================================== 00:07:32.293 Number of Reclaim Unit Handles: 8 00:07:32.293 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:07:32.293 RUH Usage Desc #001: RUH Attributes: Unused 00:07:32.293 RUH Usage Desc #002: RUH Attributes: Unused 00:07:32.293 RUH Usage Desc #003: RUH Attributes: Unused 00:07:32.293 RUH Usage Desc #004: RUH Attributes: Unused 00:07:32.293 RUH Usage Desc #005: RUH Attributes: Unused 00:07:32.293 RUH Usage Desc #006: RUH Attributes: Unused 00:07:32.293 RUH Usage Desc #007: RUH Attributes: Unused 00:07:32.293 00:07:32.293 FDP statistics log page 00:07:32.293 ======================= 00:07:32.293 Host bytes with metadata written: 635215872 00:07:32.293 Media bytes with metadata written: 635277312 00:07:32.293 Media bytes erased: 0 00:07:32.293 00:07:32.293 FDP events log page 00:07:32.293 =================== 00:07:32.293 Number of FDP events: 0 00:07:32.293 00:07:32.293 NVM Specific Namespace Data 00:07:32.293 =========================== 00:07:32.293 Logical Block Storage Tag Mask: 0 00:07:32.293 Protection Information Capabilities: 00:07:32.293 16b Guard Protection Information Storage Tag Support: No 00:07:32.293 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:32.293 Storage Tag Check Read Support: No 00:07:32.293 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.293 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.293 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.293 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.293 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.293 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.293 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.293 Extended LBA
Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:32.293 00:07:32.293 real 0m1.229s 00:07:32.293 user 0m0.407s 00:07:32.293 sys 0m0.574s 00:07:32.293 23:28:20 nvme.nvme_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:32.293 ************************************ 00:07:32.293 END TEST nvme_identify 00:07:32.293 ************************************ 00:07:32.293 23:28:20 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:07:32.293 23:28:20 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:07:32.293 23:28:20 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:32.293 23:28:20 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:32.293 23:28:20 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:32.293 ************************************ 00:07:32.293 START TEST nvme_perf 00:07:32.293 ************************************ 00:07:32.293 23:28:20 nvme.nvme_perf -- common/autotest_common.sh@1125 -- # nvme_perf 00:07:32.293 23:28:20 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:07:33.676 Initializing NVMe Controllers 00:07:33.676 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:33.676 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:33.676 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:33.676 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:33.676 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:07:33.676 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:07:33.676 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:07:33.676 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:07:33.676 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:07:33.676 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:07:33.676 Initialization complete. Launching workers. 
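Note on the perf invocation above: spdk_nvme_perf is run with -q 128 (outstanding I/Os per queue pair), -w read (sequential-read workload), -o 12288 (12288-byte, i.e. 12 KiB, I/Os), -t 1 (one-second run), -LL (software latency tracking; the doubled L requests the detailed per-range histograms printed below in addition to the summary), and -i 0 (shared-memory group ID); -N is carried over from the logged command without further interpretation here. With no -r transport ID given, the tool probes and attaches to every local PCIe NVMe controller, as the attach lines above show. The flag readings follow spdk_nvme_perf's usage text as best understood; a minimal standalone sketch of the same run, assuming an SPDK tree built at the path this log uses and the devices already bound for userspace access, looks like:

#!/usr/bin/env bash
# Sketch only: replay the perf invocation recorded above.
# Assumes SPDK is built under $SPDK_DIR and the NVMe devices have been
# handed to a userspace driver (e.g. via SPDK's scripts/setup.sh).
SPDK_DIR=/home/vagrant/spdk_repo/spdk   # path as it appears in this log
args=(
  -q 128    # 128 outstanding I/Os per queue pair
  -w read   # sequential-read workload
  -o 12288  # 12288-byte (12 KiB) I/O size
  -t 1      # run for 1 second
  -LL       # latency tracking with detailed per-range histograms
  -i 0      # shared-memory group ID 0
  -N        # kept exactly as in the logged command
)
"$SPDK_DIR/build/bin/spdk_nvme_perf" "${args[@]}"

As a sanity check on the summary table that follows, aggregate throughput should equal IOPS times I/O size: 114453.77 IOPS * 12288 bytes is about 1406407926 B/s, i.e. about 1341 MiB/s, which matches the 1341.26 MiB/s reported in the Total row.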
00:07:33.676 ======================================================== 00:07:33.676 Latency(us) 00:07:33.676 Device Information : IOPS MiB/s Average min max 00:07:33.676 PCIE (0000:00:10.0) NSID 1 from core 0: 19064.97 223.42 6722.42 5468.01 39144.50 00:07:33.676 PCIE (0000:00:11.0) NSID 1 from core 0: 19064.97 223.42 6713.01 5571.32 37429.55 00:07:33.676 PCIE (0000:00:13.0) NSID 1 from core 0: 19064.97 223.42 6702.64 5550.13 36479.24 00:07:33.676 PCIE (0000:00:12.0) NSID 1 from core 0: 19064.97 223.42 6692.05 5535.16 34878.87 00:07:33.676 PCIE (0000:00:12.0) NSID 2 from core 0: 19064.97 223.42 6681.46 5559.36 33203.90 00:07:33.676 PCIE (0000:00:12.0) NSID 3 from core 0: 19128.94 224.17 6648.29 5556.96 26823.19 00:07:33.676 ======================================================== 00:07:33.676 Total : 114453.77 1341.26 6693.29 5468.01 39144.50 00:07:33.676 00:07:33.676 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:33.676 ================================================================================= 00:07:33.676 1.00000% : 5570.560us 00:07:33.676 10.00000% : 5721.797us 00:07:33.676 25.00000% : 5999.065us 00:07:33.676 50.00000% : 6301.538us 00:07:33.676 75.00000% : 6654.425us 00:07:33.676 90.00000% : 7208.960us 00:07:33.676 95.00000% : 9124.628us 00:07:33.676 98.00000% : 11241.945us 00:07:33.676 99.00000% : 14216.271us 00:07:33.676 99.50000% : 32667.175us 00:07:33.676 99.90000% : 38716.652us 00:07:33.676 99.99000% : 39119.951us 00:07:33.676 99.99900% : 39321.600us 00:07:33.676 99.99990% : 39321.600us 00:07:33.676 99.99999% : 39321.600us 00:07:33.676 00:07:33.676 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:33.677 ================================================================================= 00:07:33.677 1.00000% : 5646.178us 00:07:33.677 10.00000% : 5797.415us 00:07:33.677 25.00000% : 6024.271us 00:07:33.677 50.00000% : 6301.538us 00:07:33.677 75.00000% : 6654.425us 00:07:33.677 90.00000% : 7158.548us 00:07:33.677 95.00000% : 9074.215us 00:07:33.677 98.00000% : 10838.646us 00:07:33.677 99.00000% : 14317.095us 00:07:33.677 99.50000% : 31053.982us 00:07:33.677 99.90000% : 37103.458us 00:07:33.677 99.99000% : 37506.757us 00:07:33.677 99.99900% : 37506.757us 00:07:33.677 99.99990% : 37506.757us 00:07:33.677 99.99999% : 37506.757us 00:07:33.677 00:07:33.677 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:33.677 ================================================================================= 00:07:33.677 1.00000% : 5646.178us 00:07:33.677 10.00000% : 5772.209us 00:07:33.677 25.00000% : 6024.271us 00:07:33.677 50.00000% : 6276.332us 00:07:33.677 75.00000% : 6604.012us 00:07:33.677 90.00000% : 7158.548us 00:07:33.677 95.00000% : 9074.215us 00:07:33.677 98.00000% : 10989.883us 00:07:33.677 99.00000% : 14619.569us 00:07:33.677 99.50000% : 30045.735us 00:07:33.677 99.90000% : 36095.212us 00:07:33.677 99.99000% : 36498.511us 00:07:33.677 99.99900% : 36498.511us 00:07:33.677 99.99990% : 36498.511us 00:07:33.677 99.99999% : 36498.511us 00:07:33.677 00:07:33.677 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:33.677 ================================================================================= 00:07:33.677 1.00000% : 5646.178us 00:07:33.677 10.00000% : 5772.209us 00:07:33.677 25.00000% : 6024.271us 00:07:33.677 50.00000% : 6276.332us 00:07:33.677 75.00000% : 6604.012us 00:07:33.677 90.00000% : 7158.548us 00:07:33.677 95.00000% : 9124.628us 00:07:33.677 98.00000% : 11141.120us 00:07:33.677 99.00000% : 
14518.745us 00:07:33.677 99.50000% : 28432.542us 00:07:33.677 99.90000% : 34482.018us 00:07:33.677 99.99000% : 34885.317us 00:07:33.677 99.99900% : 34885.317us 00:07:33.677 99.99990% : 34885.317us 00:07:33.677 99.99999% : 34885.317us 00:07:33.677 00:07:33.677 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:33.677 ================================================================================= 00:07:33.677 1.00000% : 5671.385us 00:07:33.677 10.00000% : 5772.209us 00:07:33.677 25.00000% : 6024.271us 00:07:33.677 50.00000% : 6276.332us 00:07:33.677 75.00000% : 6654.425us 00:07:33.677 90.00000% : 7208.960us 00:07:33.677 95.00000% : 9124.628us 00:07:33.677 98.00000% : 11342.769us 00:07:33.677 99.00000% : 14518.745us 00:07:33.677 99.50000% : 26819.348us 00:07:33.677 99.90000% : 32868.825us 00:07:33.677 99.99000% : 33272.123us 00:07:33.677 99.99900% : 33272.123us 00:07:33.677 99.99990% : 33272.123us 00:07:33.677 99.99999% : 33272.123us 00:07:33.677 00:07:33.677 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:33.677 ================================================================================= 00:07:33.677 1.00000% : 5671.385us 00:07:33.677 10.00000% : 5797.415us 00:07:33.677 25.00000% : 6024.271us 00:07:33.677 50.00000% : 6301.538us 00:07:33.677 75.00000% : 6654.425us 00:07:33.677 90.00000% : 7208.960us 00:07:33.677 95.00000% : 9175.040us 00:07:33.677 98.00000% : 11292.357us 00:07:33.677 99.00000% : 14216.271us 00:07:33.677 99.50000% : 20568.222us 00:07:33.677 99.90000% : 26416.049us 00:07:33.677 99.99000% : 26819.348us 00:07:33.677 99.99900% : 27020.997us 00:07:33.677 99.99990% : 27020.997us 00:07:33.677 99.99999% : 27020.997us 00:07:33.677 00:07:33.677 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:33.677 ============================================================================== 00:07:33.677 Range in us Cumulative IO count 00:07:33.677 5444.529 - 5469.735: 0.0052% ( 1) 00:07:33.677 5469.735 - 5494.942: 0.0315% ( 5) 00:07:33.677 5494.942 - 5520.148: 0.1888% ( 30) 00:07:33.677 5520.148 - 5545.354: 0.7288% ( 103) 00:07:33.677 5545.354 - 5570.560: 1.8456% ( 213) 00:07:33.677 5570.560 - 5595.766: 3.5235% ( 320) 00:07:33.677 5595.766 - 5620.972: 5.0388% ( 289) 00:07:33.677 5620.972 - 5646.178: 6.3496% ( 250) 00:07:33.677 5646.178 - 5671.385: 7.6237% ( 243) 00:07:33.677 5671.385 - 5696.591: 8.9188% ( 247) 00:07:33.677 5696.591 - 5721.797: 10.0042% ( 207) 00:07:33.677 5721.797 - 5747.003: 11.1000% ( 209) 00:07:33.677 5747.003 - 5772.209: 12.3742% ( 243) 00:07:33.677 5772.209 - 5797.415: 13.5696% ( 228) 00:07:33.677 5797.415 - 5822.622: 14.7651% ( 228) 00:07:33.677 5822.622 - 5847.828: 15.9553% ( 227) 00:07:33.677 5847.828 - 5873.034: 17.3238% ( 261) 00:07:33.677 5873.034 - 5898.240: 18.7710% ( 276) 00:07:33.677 5898.240 - 5923.446: 20.5117% ( 332) 00:07:33.677 5923.446 - 5948.652: 22.1529% ( 313) 00:07:33.677 5948.652 - 5973.858: 24.1139% ( 374) 00:07:33.677 5973.858 - 5999.065: 26.2689% ( 411) 00:07:33.677 5999.065 - 6024.271: 28.1722% ( 363) 00:07:33.677 6024.271 - 6049.477: 30.2695% ( 400) 00:07:33.677 6049.477 - 6074.683: 32.1728% ( 363) 00:07:33.677 6074.683 - 6099.889: 34.3331% ( 412) 00:07:33.677 6099.889 - 6125.095: 36.3203% ( 379) 00:07:33.677 6125.095 - 6150.302: 38.2970% ( 377) 00:07:33.677 6150.302 - 6175.508: 40.2370% ( 370) 00:07:33.677 6175.508 - 6200.714: 42.2924% ( 392) 00:07:33.677 6200.714 - 6225.920: 44.4841% ( 418) 00:07:33.677 6225.920 - 6251.126: 46.5027% ( 385) 00:07:33.677 6251.126 - 6276.332: 48.7888% ( 
436) 00:07:33.677 6276.332 - 6301.538: 50.9490% ( 412) 00:07:33.677 6301.538 - 6326.745: 53.1722% ( 424) 00:07:33.677 6326.745 - 6351.951: 55.3272% ( 411) 00:07:33.677 6351.951 - 6377.157: 57.4245% ( 400) 00:07:33.677 6377.157 - 6402.363: 59.9308% ( 478) 00:07:33.677 6402.363 - 6427.569: 62.0229% ( 399) 00:07:33.677 6427.569 - 6452.775: 64.3876% ( 451) 00:07:33.677 6452.775 - 6503.188: 68.4144% ( 768) 00:07:33.677 6503.188 - 6553.600: 71.3874% ( 567) 00:07:33.677 6553.600 - 6604.012: 73.8098% ( 462) 00:07:33.677 6604.012 - 6654.425: 76.0487% ( 427) 00:07:33.677 6654.425 - 6704.837: 78.1879% ( 408) 00:07:33.677 6704.837 - 6755.249: 80.3377% ( 410) 00:07:33.677 6755.249 - 6805.662: 82.3563% ( 385) 00:07:33.677 6805.662 - 6856.074: 84.2754% ( 366) 00:07:33.677 6856.074 - 6906.486: 85.9532% ( 320) 00:07:33.677 6906.486 - 6956.898: 87.0805% ( 215) 00:07:33.677 6956.898 - 7007.311: 87.9299% ( 162) 00:07:33.677 7007.311 - 7057.723: 88.5958% ( 127) 00:07:33.677 7057.723 - 7108.135: 89.2146% ( 118) 00:07:33.677 7108.135 - 7158.548: 89.7074% ( 94) 00:07:33.677 7158.548 - 7208.960: 90.1374% ( 82) 00:07:33.677 7208.960 - 7259.372: 90.4258% ( 55) 00:07:33.677 7259.372 - 7309.785: 90.5621% ( 26) 00:07:33.677 7309.785 - 7360.197: 90.7561% ( 37) 00:07:33.677 7360.197 - 7410.609: 90.8819% ( 24) 00:07:33.677 7410.609 - 7461.022: 90.9868% ( 20) 00:07:33.677 7461.022 - 7511.434: 91.0969% ( 21) 00:07:33.677 7511.434 - 7561.846: 91.2699% ( 33) 00:07:33.677 7561.846 - 7612.258: 91.4167% ( 28) 00:07:33.677 7612.258 - 7662.671: 91.5478% ( 25) 00:07:33.677 7662.671 - 7713.083: 91.6684% ( 23) 00:07:33.677 7713.083 - 7763.495: 91.8310% ( 31) 00:07:33.677 7763.495 - 7813.908: 91.9201% ( 17) 00:07:33.677 7813.908 - 7864.320: 92.0564% ( 26) 00:07:33.677 7864.320 - 7914.732: 92.1823% ( 24) 00:07:33.677 7914.732 - 7965.145: 92.2766% ( 18) 00:07:33.677 7965.145 - 8015.557: 92.3920% ( 22) 00:07:33.677 8015.557 - 8065.969: 92.5073% ( 22) 00:07:33.677 8065.969 - 8116.382: 92.6332% ( 24) 00:07:33.677 8116.382 - 8166.794: 92.7433% ( 21) 00:07:33.677 8166.794 - 8217.206: 92.8377% ( 18) 00:07:33.677 8217.206 - 8267.618: 92.9478% ( 21) 00:07:33.677 8267.618 - 8318.031: 93.0317% ( 16) 00:07:33.677 8318.031 - 8368.443: 93.1313% ( 19) 00:07:33.677 8368.443 - 8418.855: 93.2624% ( 25) 00:07:33.677 8418.855 - 8469.268: 93.4092% ( 28) 00:07:33.677 8469.268 - 8519.680: 93.5141% ( 20) 00:07:33.677 8519.680 - 8570.092: 93.6451% ( 25) 00:07:33.677 8570.092 - 8620.505: 93.7815% ( 26) 00:07:33.677 8620.505 - 8670.917: 93.9125% ( 25) 00:07:33.677 8670.917 - 8721.329: 94.0227% ( 21) 00:07:33.677 8721.329 - 8771.742: 94.1328% ( 21) 00:07:33.677 8771.742 - 8822.154: 94.2481% ( 22) 00:07:33.677 8822.154 - 8872.566: 94.3582% ( 21) 00:07:33.677 8872.566 - 8922.978: 94.4736% ( 22) 00:07:33.677 8922.978 - 8973.391: 94.6204% ( 28) 00:07:33.677 8973.391 - 9023.803: 94.7253% ( 20) 00:07:33.677 9023.803 - 9074.215: 94.8511% ( 24) 00:07:33.677 9074.215 - 9124.628: 95.0084% ( 30) 00:07:33.677 9124.628 - 9175.040: 95.1500% ( 27) 00:07:33.677 9175.040 - 9225.452: 95.3020% ( 29) 00:07:33.677 9225.452 - 9275.865: 95.4069% ( 20) 00:07:33.677 9275.865 - 9326.277: 95.5222% ( 22) 00:07:33.677 9326.277 - 9376.689: 95.6586% ( 26) 00:07:33.677 9376.689 - 9427.102: 95.8001% ( 27) 00:07:33.677 9427.102 - 9477.514: 95.8735% ( 14) 00:07:33.677 9477.514 - 9527.926: 96.0256% ( 29) 00:07:33.677 9527.926 - 9578.338: 96.2143% ( 36) 00:07:33.677 9578.338 - 9628.751: 96.3664% ( 29) 00:07:33.677 9628.751 - 9679.163: 96.4870% ( 23) 00:07:33.677 9679.163 - 9729.575: 96.6391% ( 29) 
00:07:33.677 9729.575 - 9779.988: 96.7596% ( 23) 00:07:33.677 9779.988 - 9830.400: 96.8698% ( 21) 00:07:33.677 9830.400 - 9880.812: 97.0008% ( 25) 00:07:33.677 9880.812 - 9931.225: 97.0742% ( 14) 00:07:33.677 9931.225 - 9981.637: 97.1477% ( 14) 00:07:33.677 9981.637 - 10032.049: 97.2053% ( 11) 00:07:33.677 10032.049 - 10082.462: 97.2525% ( 9) 00:07:33.677 10082.462 - 10132.874: 97.2997% ( 9) 00:07:33.677 10132.874 - 10183.286: 97.3312% ( 6) 00:07:33.677 10183.286 - 10233.698: 97.3679% ( 7) 00:07:33.677 10233.698 - 10284.111: 97.4046% ( 7) 00:07:33.677 10284.111 - 10334.523: 97.4885% ( 16) 00:07:33.677 10334.523 - 10384.935: 97.5357% ( 9) 00:07:33.677 10384.935 - 10435.348: 97.5986% ( 12) 00:07:33.677 10435.348 - 10485.760: 97.6353% ( 7) 00:07:33.678 10485.760 - 10536.172: 97.6667% ( 6) 00:07:33.678 10536.172 - 10586.585: 97.6825% ( 3) 00:07:33.678 10586.585 - 10636.997: 97.7192% ( 7) 00:07:33.678 10636.997 - 10687.409: 97.7401% ( 4) 00:07:33.678 10687.409 - 10737.822: 97.7821% ( 8) 00:07:33.678 10737.822 - 10788.234: 97.7978% ( 3) 00:07:33.678 10788.234 - 10838.646: 97.8293% ( 6) 00:07:33.678 10838.646 - 10889.058: 97.8660% ( 7) 00:07:33.678 10889.058 - 10939.471: 97.8922% ( 5) 00:07:33.678 10939.471 - 10989.883: 97.9132% ( 4) 00:07:33.678 10989.883 - 11040.295: 97.9394% ( 5) 00:07:33.678 11040.295 - 11090.708: 97.9499% ( 2) 00:07:33.678 11090.708 - 11141.120: 97.9761% ( 5) 00:07:33.678 11141.120 - 11191.532: 97.9866% ( 2) 00:07:33.678 11191.532 - 11241.945: 98.0338% ( 9) 00:07:33.678 11241.945 - 11292.357: 98.0495% ( 3) 00:07:33.678 11292.357 - 11342.769: 98.0705% ( 4) 00:07:33.678 11342.769 - 11393.182: 98.0967% ( 5) 00:07:33.678 11393.182 - 11443.594: 98.1019% ( 1) 00:07:33.678 11443.594 - 11494.006: 98.1281% ( 5) 00:07:33.678 11494.006 - 11544.418: 98.1439% ( 3) 00:07:33.678 11544.418 - 11594.831: 98.1701% ( 5) 00:07:33.678 11594.831 - 11645.243: 98.1911% ( 4) 00:07:33.678 11645.243 - 11695.655: 98.2173% ( 5) 00:07:33.678 11695.655 - 11746.068: 98.2278% ( 2) 00:07:33.678 11746.068 - 11796.480: 98.2383% ( 2) 00:07:33.678 11796.480 - 11846.892: 98.2435% ( 1) 00:07:33.678 11846.892 - 11897.305: 98.2487% ( 1) 00:07:33.678 11897.305 - 11947.717: 98.2645% ( 3) 00:07:33.678 11947.717 - 11998.129: 98.2697% ( 1) 00:07:33.678 11998.129 - 12048.542: 98.2750% ( 1) 00:07:33.678 12048.542 - 12098.954: 98.2854% ( 2) 00:07:33.678 12149.366 - 12199.778: 98.3012% ( 3) 00:07:33.678 12199.778 - 12250.191: 98.3064% ( 1) 00:07:33.678 12250.191 - 12300.603: 98.3169% ( 2) 00:07:33.678 12300.603 - 12351.015: 98.3221% ( 1) 00:07:33.678 12401.428 - 12451.840: 98.3274% ( 1) 00:07:33.678 12451.840 - 12502.252: 98.3851% ( 11) 00:07:33.678 12502.252 - 12552.665: 98.4323% ( 9) 00:07:33.678 12552.665 - 12603.077: 98.4742% ( 8) 00:07:33.678 12603.077 - 12653.489: 98.4794% ( 1) 00:07:33.678 12653.489 - 12703.902: 98.4952% ( 3) 00:07:33.678 12703.902 - 12754.314: 98.5266% ( 6) 00:07:33.678 12754.314 - 12804.726: 98.5633% ( 7) 00:07:33.678 12804.726 - 12855.138: 98.5948% ( 6) 00:07:33.678 12855.138 - 12905.551: 98.6158% ( 4) 00:07:33.678 12905.551 - 13006.375: 98.6682% ( 10) 00:07:33.678 13006.375 - 13107.200: 98.7259% ( 11) 00:07:33.678 13107.200 - 13208.025: 98.7783% ( 10) 00:07:33.678 13208.025 - 13308.849: 98.8465% ( 13) 00:07:33.678 13308.849 - 13409.674: 98.8989% ( 10) 00:07:33.678 13409.674 - 13510.498: 98.9356% ( 7) 00:07:33.678 13510.498 - 13611.323: 98.9461% ( 2) 00:07:33.678 13611.323 - 13712.148: 98.9776% ( 6) 00:07:33.678 13712.148 - 13812.972: 98.9933% ( 3) 00:07:33.678 14115.446 - 14216.271: 99.0038% ( 2) 
00:07:33.678 14216.271 - 14317.095: 99.0352% ( 6) 00:07:33.678 14317.095 - 14417.920: 99.0562% ( 4) 00:07:33.678 14417.920 - 14518.745: 99.0877% ( 6) 00:07:33.678 14518.745 - 14619.569: 99.1139% ( 5) 00:07:33.678 14619.569 - 14720.394: 99.1401% ( 5) 00:07:33.678 14720.394 - 14821.218: 99.1663% ( 5) 00:07:33.678 14821.218 - 14922.043: 99.1873% ( 4) 00:07:33.678 14922.043 - 15022.868: 99.2188% ( 6) 00:07:33.678 15022.868 - 15123.692: 99.2502% ( 6) 00:07:33.678 15123.692 - 15224.517: 99.2764% ( 5) 00:07:33.678 15224.517 - 15325.342: 99.3026% ( 5) 00:07:33.678 15325.342 - 15426.166: 99.3289% ( 5) 00:07:33.678 31658.929 - 31860.578: 99.3708% ( 8) 00:07:33.678 31860.578 - 32062.228: 99.4180% ( 9) 00:07:33.678 32062.228 - 32263.877: 99.4495% ( 6) 00:07:33.678 32263.877 - 32465.526: 99.4757% ( 5) 00:07:33.678 32465.526 - 32667.175: 99.5333% ( 11) 00:07:33.678 32667.175 - 32868.825: 99.5701% ( 7) 00:07:33.678 32868.825 - 33070.474: 99.6120% ( 8) 00:07:33.678 33070.474 - 33272.123: 99.6539% ( 8) 00:07:33.678 33272.123 - 33473.772: 99.6644% ( 2) 00:07:33.678 37305.108 - 37506.757: 99.6697% ( 1) 00:07:33.678 37506.757 - 37708.406: 99.7116% ( 8) 00:07:33.678 37708.406 - 37910.055: 99.7483% ( 7) 00:07:33.678 37910.055 - 38111.705: 99.7955% ( 9) 00:07:33.678 38111.705 - 38313.354: 99.8375% ( 8) 00:07:33.678 38313.354 - 38515.003: 99.8794% ( 8) 00:07:33.678 38515.003 - 38716.652: 99.9109% ( 6) 00:07:33.678 38716.652 - 38918.302: 99.9528% ( 8) 00:07:33.678 38918.302 - 39119.951: 99.9948% ( 8) 00:07:33.678 39119.951 - 39321.600: 100.0000% ( 1) 00:07:33.678 00:07:33.678 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:33.678 ============================================================================== 00:07:33.678 Range in us Cumulative IO count 00:07:33.678 5570.560 - 5595.766: 0.0944% ( 18) 00:07:33.678 5595.766 - 5620.972: 0.3775% ( 54) 00:07:33.678 5620.972 - 5646.178: 1.1902% ( 155) 00:07:33.678 5646.178 - 5671.385: 2.3123% ( 214) 00:07:33.678 5671.385 - 5696.591: 3.8800% ( 299) 00:07:33.678 5696.591 - 5721.797: 5.7729% ( 361) 00:07:33.678 5721.797 - 5747.003: 7.8020% ( 387) 00:07:33.678 5747.003 - 5772.209: 9.7682% ( 375) 00:07:33.678 5772.209 - 5797.415: 11.6873% ( 366) 00:07:33.678 5797.415 - 5822.622: 13.3914% ( 325) 00:07:33.678 5822.622 - 5847.828: 15.0640% ( 319) 00:07:33.678 5847.828 - 5873.034: 16.5268% ( 279) 00:07:33.678 5873.034 - 5898.240: 17.9216% ( 266) 00:07:33.678 5898.240 - 5923.446: 19.3582% ( 274) 00:07:33.678 5923.446 - 5948.652: 20.7057% ( 257) 00:07:33.678 5948.652 - 5973.858: 22.1844% ( 282) 00:07:33.678 5973.858 - 5999.065: 23.9618% ( 339) 00:07:33.678 5999.065 - 6024.271: 25.9333% ( 376) 00:07:33.678 6024.271 - 6049.477: 28.3557% ( 462) 00:07:33.678 6049.477 - 6074.683: 30.8096% ( 468) 00:07:33.678 6074.683 - 6099.889: 33.1166% ( 440) 00:07:33.678 6099.889 - 6125.095: 35.4656% ( 448) 00:07:33.678 6125.095 - 6150.302: 37.7779% ( 441) 00:07:33.678 6150.302 - 6175.508: 40.1951% ( 461) 00:07:33.678 6175.508 - 6200.714: 42.4182% ( 424) 00:07:33.678 6200.714 - 6225.920: 44.7620% ( 447) 00:07:33.678 6225.920 - 6251.126: 47.2211% ( 469) 00:07:33.678 6251.126 - 6276.332: 49.7378% ( 480) 00:07:33.678 6276.332 - 6301.538: 52.1130% ( 453) 00:07:33.678 6301.538 - 6326.745: 54.5564% ( 466) 00:07:33.678 6326.745 - 6351.951: 57.0522% ( 476) 00:07:33.678 6351.951 - 6377.157: 59.6109% ( 488) 00:07:33.678 6377.157 - 6402.363: 62.1539% ( 485) 00:07:33.678 6402.363 - 6427.569: 64.4505% ( 438) 00:07:33.678 6427.569 - 6452.775: 66.4167% ( 375) 00:07:33.678 6452.775 - 6503.188: 
69.6466% ( 616) 00:07:33.678 6503.188 - 6553.600: 72.3469% ( 515) 00:07:33.678 6553.600 - 6604.012: 74.8899% ( 485) 00:07:33.678 6604.012 - 6654.425: 77.4276% ( 484) 00:07:33.678 6654.425 - 6704.837: 79.9339% ( 478) 00:07:33.678 6704.837 - 6755.249: 82.3144% ( 454) 00:07:33.678 6755.249 - 6805.662: 84.4012% ( 398) 00:07:33.678 6805.662 - 6856.074: 85.8903% ( 284) 00:07:33.678 6856.074 - 6906.486: 86.9232% ( 197) 00:07:33.678 6906.486 - 6956.898: 87.7831% ( 164) 00:07:33.678 6956.898 - 7007.311: 88.5015% ( 137) 00:07:33.678 7007.311 - 7057.723: 89.1516% ( 124) 00:07:33.678 7057.723 - 7108.135: 89.6497% ( 95) 00:07:33.678 7108.135 - 7158.548: 90.0325% ( 73) 00:07:33.678 7158.548 - 7208.960: 90.2580% ( 43) 00:07:33.678 7208.960 - 7259.372: 90.3995% ( 27) 00:07:33.678 7259.372 - 7309.785: 90.5149% ( 22) 00:07:33.678 7309.785 - 7360.197: 90.5988% ( 16) 00:07:33.678 7360.197 - 7410.609: 90.6512% ( 10) 00:07:33.678 7410.609 - 7461.022: 90.7194% ( 13) 00:07:33.678 7461.022 - 7511.434: 90.7771% ( 11) 00:07:33.678 7511.434 - 7561.846: 90.9291% ( 29) 00:07:33.678 7561.846 - 7612.258: 91.0445% ( 22) 00:07:33.678 7612.258 - 7662.671: 91.3224% ( 53) 00:07:33.678 7662.671 - 7713.083: 91.6265% ( 58) 00:07:33.678 7713.083 - 7763.495: 91.7418% ( 22) 00:07:33.678 7763.495 - 7813.908: 91.8467% ( 20) 00:07:33.678 7813.908 - 7864.320: 91.9358% ( 17) 00:07:33.678 7864.320 - 7914.732: 92.0407% ( 20) 00:07:33.678 7914.732 - 7965.145: 92.1298% ( 17) 00:07:33.678 7965.145 - 8015.557: 92.2347% ( 20) 00:07:33.678 8015.557 - 8065.969: 92.3920% ( 30) 00:07:33.678 8065.969 - 8116.382: 92.5493% ( 30) 00:07:33.678 8116.382 - 8166.794: 92.7223% ( 33) 00:07:33.678 8166.794 - 8217.206: 92.8691% ( 28) 00:07:33.678 8217.206 - 8267.618: 93.0055% ( 26) 00:07:33.678 8267.618 - 8318.031: 93.1628% ( 30) 00:07:33.678 8318.031 - 8368.443: 93.3515% ( 36) 00:07:33.678 8368.443 - 8418.855: 93.5141% ( 31) 00:07:33.678 8418.855 - 8469.268: 93.6346% ( 23) 00:07:33.678 8469.268 - 8519.680: 93.7815% ( 28) 00:07:33.678 8519.680 - 8570.092: 93.9335% ( 29) 00:07:33.678 8570.092 - 8620.505: 94.0594% ( 24) 00:07:33.678 8620.505 - 8670.917: 94.1799% ( 23) 00:07:33.678 8670.917 - 8721.329: 94.3372% ( 30) 00:07:33.678 8721.329 - 8771.742: 94.4736% ( 26) 00:07:33.678 8771.742 - 8822.154: 94.5784% ( 20) 00:07:33.678 8822.154 - 8872.566: 94.6571% ( 15) 00:07:33.678 8872.566 - 8922.978: 94.7620% ( 20) 00:07:33.678 8922.978 - 8973.391: 94.8773% ( 22) 00:07:33.678 8973.391 - 9023.803: 94.9717% ( 18) 00:07:33.678 9023.803 - 9074.215: 95.0923% ( 23) 00:07:33.678 9074.215 - 9124.628: 95.2129% ( 23) 00:07:33.678 9124.628 - 9175.040: 95.3230% ( 21) 00:07:33.678 9175.040 - 9225.452: 95.4593% ( 26) 00:07:33.678 9225.452 - 9275.865: 95.5956% ( 26) 00:07:33.678 9275.865 - 9326.277: 95.7529% ( 30) 00:07:33.678 9326.277 - 9376.689: 95.8630% ( 21) 00:07:33.678 9376.689 - 9427.102: 95.9836% ( 23) 00:07:33.678 9427.102 - 9477.514: 96.0885% ( 20) 00:07:33.678 9477.514 - 9527.926: 96.1776% ( 17) 00:07:33.678 9527.926 - 9578.338: 96.2720% ( 18) 00:07:33.678 9578.338 - 9628.751: 96.3769% ( 20) 00:07:33.678 9628.751 - 9679.163: 96.4713% ( 18) 00:07:33.679 9679.163 - 9729.575: 96.5814% ( 21) 00:07:33.679 9729.575 - 9779.988: 96.6705% ( 17) 00:07:33.679 9779.988 - 9830.400: 96.7387% ( 13) 00:07:33.679 9830.400 - 9880.812: 96.8068% ( 13) 00:07:33.679 9880.812 - 9931.225: 96.8802% ( 14) 00:07:33.679 9931.225 - 9981.637: 96.9694% ( 17) 00:07:33.679 9981.637 - 10032.049: 97.0375% ( 13) 00:07:33.679 10032.049 - 10082.462: 97.1424% ( 20) 00:07:33.679 10082.462 - 10132.874: 97.2263% 
( 16) 00:07:33.679 10132.874 - 10183.286: 97.2997% ( 14) 00:07:33.679 10183.286 - 10233.698: 97.3626% ( 12) 00:07:33.679 10233.698 - 10284.111: 97.4203% ( 11) 00:07:33.679 10284.111 - 10334.523: 97.4937% ( 14) 00:07:33.679 10334.523 - 10384.935: 97.5461% ( 10) 00:07:33.679 10384.935 - 10435.348: 97.5933% ( 9) 00:07:33.679 10435.348 - 10485.760: 97.6458% ( 10) 00:07:33.679 10485.760 - 10536.172: 97.6982% ( 10) 00:07:33.679 10536.172 - 10586.585: 97.7559% ( 11) 00:07:33.679 10586.585 - 10636.997: 97.8135% ( 11) 00:07:33.679 10636.997 - 10687.409: 97.8607% ( 9) 00:07:33.679 10687.409 - 10737.822: 97.9237% ( 12) 00:07:33.679 10737.822 - 10788.234: 97.9708% ( 9) 00:07:33.679 10788.234 - 10838.646: 98.0128% ( 8) 00:07:33.679 10838.646 - 10889.058: 98.0338% ( 4) 00:07:33.679 10889.058 - 10939.471: 98.0600% ( 5) 00:07:33.679 10939.471 - 10989.883: 98.0810% ( 4) 00:07:33.679 10989.883 - 11040.295: 98.1072% ( 5) 00:07:33.679 11040.295 - 11090.708: 98.1281% ( 4) 00:07:33.679 11090.708 - 11141.120: 98.1544% ( 5) 00:07:33.679 11141.120 - 11191.532: 98.1701% ( 3) 00:07:33.679 11191.532 - 11241.945: 98.1858% ( 3) 00:07:33.679 11241.945 - 11292.357: 98.1963% ( 2) 00:07:33.679 11292.357 - 11342.769: 98.2120% ( 3) 00:07:33.679 11342.769 - 11393.182: 98.2225% ( 2) 00:07:33.679 11393.182 - 11443.594: 98.2383% ( 3) 00:07:33.679 11443.594 - 11494.006: 98.2487% ( 2) 00:07:33.679 11494.006 - 11544.418: 98.2645% ( 3) 00:07:33.679 11544.418 - 11594.831: 98.2750% ( 2) 00:07:33.679 11594.831 - 11645.243: 98.2907% ( 3) 00:07:33.679 11645.243 - 11695.655: 98.3012% ( 2) 00:07:33.679 11695.655 - 11746.068: 98.3117% ( 2) 00:07:33.679 11746.068 - 11796.480: 98.3221% ( 2) 00:07:33.679 11998.129 - 12048.542: 98.3274% ( 1) 00:07:33.679 12048.542 - 12098.954: 98.3431% ( 3) 00:07:33.679 12098.954 - 12149.366: 98.3589% ( 3) 00:07:33.679 12149.366 - 12199.778: 98.3693% ( 2) 00:07:33.679 12199.778 - 12250.191: 98.3851% ( 3) 00:07:33.679 12250.191 - 12300.603: 98.4008% ( 3) 00:07:33.679 12300.603 - 12351.015: 98.4165% ( 3) 00:07:33.679 12351.015 - 12401.428: 98.4323% ( 3) 00:07:33.679 12401.428 - 12451.840: 98.4427% ( 2) 00:07:33.679 12451.840 - 12502.252: 98.4585% ( 3) 00:07:33.679 12502.252 - 12552.665: 98.4742% ( 3) 00:07:33.679 12552.665 - 12603.077: 98.4899% ( 3) 00:07:33.679 12603.077 - 12653.489: 98.5057% ( 3) 00:07:33.679 12653.489 - 12703.902: 98.5214% ( 3) 00:07:33.679 12703.902 - 12754.314: 98.5319% ( 2) 00:07:33.679 12754.314 - 12804.726: 98.5476% ( 3) 00:07:33.679 12804.726 - 12855.138: 98.5633% ( 3) 00:07:33.679 12855.138 - 12905.551: 98.5791% ( 3) 00:07:33.679 12905.551 - 13006.375: 98.6105% ( 6) 00:07:33.679 13006.375 - 13107.200: 98.6367% ( 5) 00:07:33.679 13107.200 - 13208.025: 98.6577% ( 4) 00:07:33.679 13208.025 - 13308.849: 98.6787% ( 4) 00:07:33.679 13308.849 - 13409.674: 98.7154% ( 7) 00:07:33.679 13409.674 - 13510.498: 98.7573% ( 8) 00:07:33.679 13510.498 - 13611.323: 98.7940% ( 7) 00:07:33.679 13611.323 - 13712.148: 98.8360% ( 8) 00:07:33.679 13712.148 - 13812.972: 98.8727% ( 7) 00:07:33.679 13812.972 - 13913.797: 98.9146% ( 8) 00:07:33.679 13913.797 - 14014.622: 98.9513% ( 7) 00:07:33.679 14014.622 - 14115.446: 98.9880% ( 7) 00:07:33.679 14115.446 - 14216.271: 98.9933% ( 1) 00:07:33.679 14216.271 - 14317.095: 99.0090% ( 3) 00:07:33.679 14317.095 - 14417.920: 99.0247% ( 3) 00:07:33.679 14417.920 - 14518.745: 99.0457% ( 4) 00:07:33.679 14518.745 - 14619.569: 99.0772% ( 6) 00:07:33.679 14619.569 - 14720.394: 99.0929% ( 3) 00:07:33.679 14720.394 - 14821.218: 99.1139% ( 4) 00:07:33.679 14821.218 - 14922.043: 
99.1349% ( 4) 00:07:33.679 14922.043 - 15022.868: 99.1611% ( 5) 00:07:33.679 15022.868 - 15123.692: 99.1925% ( 6) 00:07:33.679 15123.692 - 15224.517: 99.2240% ( 6) 00:07:33.679 15224.517 - 15325.342: 99.2607% ( 7) 00:07:33.679 15325.342 - 15426.166: 99.2922% ( 6) 00:07:33.679 15426.166 - 15526.991: 99.3236% ( 6) 00:07:33.679 15526.991 - 15627.815: 99.3289% ( 1) 00:07:33.679 30045.735 - 30247.385: 99.3393% ( 2) 00:07:33.679 30247.385 - 30449.034: 99.3813% ( 8) 00:07:33.679 30449.034 - 30650.683: 99.4232% ( 8) 00:07:33.679 30650.683 - 30852.332: 99.4704% ( 9) 00:07:33.679 30852.332 - 31053.982: 99.5124% ( 8) 00:07:33.679 31053.982 - 31255.631: 99.5543% ( 8) 00:07:33.679 31255.631 - 31457.280: 99.6015% ( 9) 00:07:33.679 31457.280 - 31658.929: 99.6435% ( 8) 00:07:33.679 31658.929 - 31860.578: 99.6644% ( 4) 00:07:33.679 35893.563 - 36095.212: 99.7116% ( 9) 00:07:33.679 36095.212 - 36296.862: 99.7483% ( 7) 00:07:33.679 36296.862 - 36498.511: 99.7955% ( 9) 00:07:33.679 36498.511 - 36700.160: 99.8375% ( 8) 00:07:33.679 36700.160 - 36901.809: 99.8846% ( 9) 00:07:33.679 36901.809 - 37103.458: 99.9266% ( 8) 00:07:33.679 37103.458 - 37305.108: 99.9685% ( 8) 00:07:33.679 37305.108 - 37506.757: 100.0000% ( 6) 00:07:33.679 00:07:33.679 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:33.679 ============================================================================== 00:07:33.679 Range in us Cumulative IO count 00:07:33.679 5545.354 - 5570.560: 0.0472% ( 9) 00:07:33.679 5570.560 - 5595.766: 0.1940% ( 28) 00:07:33.679 5595.766 - 5620.972: 0.4824% ( 55) 00:07:33.679 5620.972 - 5646.178: 1.1273% ( 123) 00:07:33.679 5646.178 - 5671.385: 2.1602% ( 197) 00:07:33.679 5671.385 - 5696.591: 3.8800% ( 328) 00:07:33.679 5696.591 - 5721.797: 6.4807% ( 496) 00:07:33.679 5721.797 - 5747.003: 8.7196% ( 427) 00:07:33.679 5747.003 - 5772.209: 10.6491% ( 368) 00:07:33.679 5772.209 - 5797.415: 12.3532% ( 325) 00:07:33.679 5797.415 - 5822.622: 13.6430% ( 246) 00:07:33.679 5822.622 - 5847.828: 14.9172% ( 243) 00:07:33.679 5847.828 - 5873.034: 16.1965% ( 244) 00:07:33.679 5873.034 - 5898.240: 17.4811% ( 245) 00:07:33.679 5898.240 - 5923.446: 18.8129% ( 254) 00:07:33.679 5923.446 - 5948.652: 20.4226% ( 307) 00:07:33.679 5948.652 - 5973.858: 22.0952% ( 319) 00:07:33.679 5973.858 - 5999.065: 23.9880% ( 361) 00:07:33.679 5999.065 - 6024.271: 26.0329% ( 390) 00:07:33.679 6024.271 - 6049.477: 28.1407% ( 402) 00:07:33.679 6049.477 - 6074.683: 30.5631% ( 462) 00:07:33.679 6074.683 - 6099.889: 33.1848% ( 500) 00:07:33.679 6099.889 - 6125.095: 35.6281% ( 466) 00:07:33.679 6125.095 - 6150.302: 38.0558% ( 463) 00:07:33.679 6150.302 - 6175.508: 40.4415% ( 455) 00:07:33.679 6175.508 - 6200.714: 42.7328% ( 437) 00:07:33.679 6200.714 - 6225.920: 45.1552% ( 462) 00:07:33.679 6225.920 - 6251.126: 47.6195% ( 470) 00:07:33.679 6251.126 - 6276.332: 50.0419% ( 462) 00:07:33.679 6276.332 - 6301.538: 52.5692% ( 482) 00:07:33.679 6301.538 - 6326.745: 55.0545% ( 474) 00:07:33.679 6326.745 - 6351.951: 57.5975% ( 485) 00:07:33.679 6351.951 - 6377.157: 60.1405% ( 485) 00:07:33.679 6377.157 - 6402.363: 62.6521% ( 479) 00:07:33.679 6402.363 - 6427.569: 64.9748% ( 443) 00:07:33.679 6427.569 - 6452.775: 66.8205% ( 352) 00:07:33.679 6452.775 - 6503.188: 69.8458% ( 577) 00:07:33.679 6503.188 - 6553.600: 72.5881% ( 523) 00:07:33.679 6553.600 - 6604.012: 75.1625% ( 491) 00:07:33.679 6604.012 - 6654.425: 77.7055% ( 485) 00:07:33.679 6654.425 - 6704.837: 80.1961% ( 475) 00:07:33.679 6704.837 - 6755.249: 82.5713% ( 453) 00:07:33.679 6755.249 - 
6805.662: 84.6214% ( 391) 00:07:33.679 6805.662 - 6856.074: 86.0686% ( 276) 00:07:33.679 6856.074 - 6906.486: 87.1749% ( 211) 00:07:33.679 6906.486 - 6956.898: 88.0243% ( 162) 00:07:33.679 6956.898 - 7007.311: 88.7269% ( 134) 00:07:33.679 7007.311 - 7057.723: 89.3089% ( 111) 00:07:33.679 7057.723 - 7108.135: 89.7703% ( 88) 00:07:33.679 7108.135 - 7158.548: 90.1112% ( 65) 00:07:33.679 7158.548 - 7208.960: 90.3261% ( 41) 00:07:33.679 7208.960 - 7259.372: 90.4677% ( 27) 00:07:33.679 7259.372 - 7309.785: 90.5726% ( 20) 00:07:33.679 7309.785 - 7360.197: 90.6774% ( 20) 00:07:33.679 7360.197 - 7410.609: 90.7875% ( 21) 00:07:33.679 7410.609 - 7461.022: 90.8609% ( 14) 00:07:33.679 7461.022 - 7511.434: 90.9920% ( 25) 00:07:33.679 7511.434 - 7561.846: 91.1231% ( 25) 00:07:33.679 7561.846 - 7612.258: 91.3014% ( 34) 00:07:33.679 7612.258 - 7662.671: 91.4272% ( 24) 00:07:33.679 7662.671 - 7713.083: 91.6003% ( 33) 00:07:33.679 7713.083 - 7763.495: 91.7890% ( 36) 00:07:33.679 7763.495 - 7813.908: 91.9725% ( 35) 00:07:33.679 7813.908 - 7864.320: 92.1718% ( 38) 00:07:33.679 7864.320 - 7914.732: 92.3343% ( 31) 00:07:33.679 7914.732 - 7965.145: 92.4811% ( 28) 00:07:33.679 7965.145 - 8015.557: 92.6279% ( 28) 00:07:33.679 8015.557 - 8065.969: 92.7643% ( 26) 00:07:33.679 8065.969 - 8116.382: 92.9111% ( 28) 00:07:33.679 8116.382 - 8166.794: 93.0841% ( 33) 00:07:33.679 8166.794 - 8217.206: 93.2466% ( 31) 00:07:33.679 8217.206 - 8267.618: 93.3935% ( 28) 00:07:33.679 8267.618 - 8318.031: 93.5455% ( 29) 00:07:33.679 8318.031 - 8368.443: 93.6976% ( 29) 00:07:33.679 8368.443 - 8418.855: 93.8182% ( 23) 00:07:33.679 8418.855 - 8469.268: 93.9230% ( 20) 00:07:33.679 8469.268 - 8519.680: 94.0279% ( 20) 00:07:33.679 8519.680 - 8570.092: 94.1223% ( 18) 00:07:33.679 8570.092 - 8620.505: 94.2062% ( 16) 00:07:33.679 8620.505 - 8670.917: 94.3005% ( 18) 00:07:33.679 8670.917 - 8721.329: 94.3897% ( 17) 00:07:33.679 8721.329 - 8771.742: 94.4683% ( 15) 00:07:33.679 8771.742 - 8822.154: 94.5522% ( 16) 00:07:33.679 8822.154 - 8872.566: 94.6256% ( 14) 00:07:33.679 8872.566 - 8922.978: 94.6938% ( 13) 00:07:33.679 8922.978 - 8973.391: 94.7724% ( 15) 00:07:33.680 8973.391 - 9023.803: 94.8930% ( 23) 00:07:33.680 9023.803 - 9074.215: 95.0346% ( 27) 00:07:33.680 9074.215 - 9124.628: 95.1395% ( 20) 00:07:33.680 9124.628 - 9175.040: 95.2443% ( 20) 00:07:33.680 9175.040 - 9225.452: 95.3597% ( 22) 00:07:33.680 9225.452 - 9275.865: 95.5380% ( 34) 00:07:33.680 9275.865 - 9326.277: 95.6690% ( 25) 00:07:33.680 9326.277 - 9376.689: 95.7739% ( 20) 00:07:33.680 9376.689 - 9427.102: 95.8945% ( 23) 00:07:33.680 9427.102 - 9477.514: 96.0256% ( 25) 00:07:33.680 9477.514 - 9527.926: 96.1409% ( 22) 00:07:33.680 9527.926 - 9578.338: 96.2720% ( 25) 00:07:33.680 9578.338 - 9628.751: 96.4188% ( 28) 00:07:33.680 9628.751 - 9679.163: 96.5447% ( 24) 00:07:33.680 9679.163 - 9729.575: 96.6653% ( 23) 00:07:33.680 9729.575 - 9779.988: 96.7806% ( 22) 00:07:33.680 9779.988 - 9830.400: 96.8540% ( 14) 00:07:33.680 9830.400 - 9880.812: 96.9327% ( 15) 00:07:33.680 9880.812 - 9931.225: 96.9956% ( 12) 00:07:33.680 9931.225 - 9981.637: 97.0585% ( 12) 00:07:33.680 9981.637 - 10032.049: 97.1005% ( 8) 00:07:33.680 10032.049 - 10082.462: 97.1319% ( 6) 00:07:33.680 10082.462 - 10132.874: 97.1791% ( 9) 00:07:33.680 10132.874 - 10183.286: 97.2106% ( 6) 00:07:33.680 10183.286 - 10233.698: 97.2525% ( 8) 00:07:33.680 10233.698 - 10284.111: 97.2840% ( 6) 00:07:33.680 10284.111 - 10334.523: 97.3259% ( 8) 00:07:33.680 10334.523 - 10384.935: 97.3626% ( 7) 00:07:33.680 10384.935 - 10435.348: 
97.3993% ( 7) 00:07:33.680 10435.348 - 10485.760: 97.4727% ( 14) 00:07:33.680 10485.760 - 10536.172: 97.5147% ( 8) 00:07:33.680 10536.172 - 10586.585: 97.5933% ( 15) 00:07:33.680 10586.585 - 10636.997: 97.6667% ( 14) 00:07:33.680 10636.997 - 10687.409: 97.7401% ( 14) 00:07:33.680 10687.409 - 10737.822: 97.7926% ( 10) 00:07:33.680 10737.822 - 10788.234: 97.8607% ( 13) 00:07:33.680 10788.234 - 10838.646: 97.9132% ( 10) 00:07:33.680 10838.646 - 10889.058: 97.9394% ( 5) 00:07:33.680 10889.058 - 10939.471: 97.9813% ( 8) 00:07:33.680 10939.471 - 10989.883: 98.0233% ( 8) 00:07:33.680 10989.883 - 11040.295: 98.0495% ( 5) 00:07:33.680 11040.295 - 11090.708: 98.0757% ( 5) 00:07:33.680 11090.708 - 11141.120: 98.1124% ( 7) 00:07:33.680 11141.120 - 11191.532: 98.1439% ( 6) 00:07:33.680 11191.532 - 11241.945: 98.1648% ( 4) 00:07:33.680 11241.945 - 11292.357: 98.1753% ( 2) 00:07:33.680 11292.357 - 11342.769: 98.1911% ( 3) 00:07:33.680 11342.769 - 11393.182: 98.2016% ( 2) 00:07:33.680 11393.182 - 11443.594: 98.2173% ( 3) 00:07:33.680 11443.594 - 11494.006: 98.2278% ( 2) 00:07:33.680 11494.006 - 11544.418: 98.2435% ( 3) 00:07:33.680 11544.418 - 11594.831: 98.2540% ( 2) 00:07:33.680 11594.831 - 11645.243: 98.2645% ( 2) 00:07:33.680 11645.243 - 11695.655: 98.2802% ( 3) 00:07:33.680 11695.655 - 11746.068: 98.2907% ( 2) 00:07:33.680 11746.068 - 11796.480: 98.3064% ( 3) 00:07:33.680 11796.480 - 11846.892: 98.3169% ( 2) 00:07:33.680 11846.892 - 11897.305: 98.3221% ( 1) 00:07:33.680 11897.305 - 11947.717: 98.3326% ( 2) 00:07:33.680 11947.717 - 11998.129: 98.3431% ( 2) 00:07:33.680 11998.129 - 12048.542: 98.3536% ( 2) 00:07:33.680 12048.542 - 12098.954: 98.3589% ( 1) 00:07:33.680 12098.954 - 12149.366: 98.3693% ( 2) 00:07:33.680 12149.366 - 12199.778: 98.3798% ( 2) 00:07:33.680 12199.778 - 12250.191: 98.3903% ( 2) 00:07:33.680 12250.191 - 12300.603: 98.4008% ( 2) 00:07:33.680 12300.603 - 12351.015: 98.4113% ( 2) 00:07:33.680 12351.015 - 12401.428: 98.4218% ( 2) 00:07:33.680 12401.428 - 12451.840: 98.4270% ( 1) 00:07:33.680 12451.840 - 12502.252: 98.4427% ( 3) 00:07:33.680 12502.252 - 12552.665: 98.4585% ( 3) 00:07:33.680 12552.665 - 12603.077: 98.4794% ( 4) 00:07:33.680 12603.077 - 12653.489: 98.5004% ( 4) 00:07:33.680 12653.489 - 12703.902: 98.5214% ( 4) 00:07:33.680 12703.902 - 12754.314: 98.5371% ( 3) 00:07:33.680 12754.314 - 12804.726: 98.5581% ( 4) 00:07:33.680 12804.726 - 12855.138: 98.5791% ( 4) 00:07:33.680 12855.138 - 12905.551: 98.5948% ( 3) 00:07:33.680 12905.551 - 13006.375: 98.6367% ( 8) 00:07:33.680 13006.375 - 13107.200: 98.6577% ( 4) 00:07:33.680 13812.972 - 13913.797: 98.6892% ( 6) 00:07:33.680 13913.797 - 14014.622: 98.7311% ( 8) 00:07:33.680 14014.622 - 14115.446: 98.7678% ( 7) 00:07:33.680 14115.446 - 14216.271: 98.8150% ( 9) 00:07:33.680 14216.271 - 14317.095: 98.8727% ( 11) 00:07:33.680 14317.095 - 14417.920: 98.9356% ( 12) 00:07:33.680 14417.920 - 14518.745: 98.9985% ( 12) 00:07:33.680 14518.745 - 14619.569: 99.0615% ( 12) 00:07:33.680 14619.569 - 14720.394: 99.1086% ( 9) 00:07:33.680 14720.394 - 14821.218: 99.1296% ( 4) 00:07:33.680 14821.218 - 14922.043: 99.1611% ( 6) 00:07:33.680 14922.043 - 15022.868: 99.1925% ( 6) 00:07:33.680 15022.868 - 15123.692: 99.2240% ( 6) 00:07:33.680 15123.692 - 15224.517: 99.2607% ( 7) 00:07:33.680 15224.517 - 15325.342: 99.2922% ( 6) 00:07:33.680 15325.342 - 15426.166: 99.3236% ( 6) 00:07:33.680 15426.166 - 15526.991: 99.3289% ( 1) 00:07:33.680 29037.489 - 29239.138: 99.3498% ( 4) 00:07:33.680 29239.138 - 29440.788: 99.3918% ( 8) 00:07:33.680 29440.788 - 
00:07:33.680 [end of preceding latency histogram: buckets 29642.437us - 36498.511us, 99.4337% rising to 100.0000% cumulative IO]
00:07:33.680 
00:07:33.680 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0:
00:07:33.680 ==============================================================================
00:07:33.680        Range in us     Cumulative    IO count
00:07:33.681 [per-bucket data: 5520.148us - 34885.317us, 0.0105% rising to 100.0000% cumulative IO]
00:07:33.681 
00:07:33.681 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:07:33.681 ==============================================================================
00:07:33.681        Range in us     Cumulative    IO count
00:07:33.682 [per-bucket data: 5545.354us - 33272.123us, 0.0052% rising to 100.0000% cumulative IO]
00:07:33.682 
00:07:33.682 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0:
00:07:33.682 ==============================================================================
00:07:33.682        Range in us     Cumulative    IO count
00:07:33.683 [per-bucket data: 5545.354us - 27020.997us, 0.0366% rising to 100.0000% cumulative IO]
00:07:33.683 
00:07:33.683 23:28:21 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0
00:07:35.064 Initializing NVMe Controllers
00:07:35.064 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:07:35.064 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:07:35.064 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:07:35.064 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:07:35.064 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:07:35.064 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:07:35.064 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:07:35.065 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:07:35.065 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:07:35.065 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:07:35.065 Initialization complete. Launching workers.
00:07:35.065 ========================================================
00:07:35.065                                                                              Latency(us)
00:07:35.065 Device Information                       :       IOPS      MiB/s    Average        min        max
00:07:35.065 PCIE (0000:00:10.0) NSID 1 from core 0:    17386.37     203.75    7367.95    5267.57   33116.01
00:07:35.065 PCIE (0000:00:11.0) NSID 1 from core 0:    17386.37     203.75    7348.74    5366.33   30127.73
00:07:35.065 PCIE (0000:00:13.0) NSID 1 from core 0:    17386.37     203.75    7328.85    5334.37   27281.27
00:07:35.065 PCIE (0000:00:12.0) NSID 1 from core 0:    17386.37     203.75    7312.57    5360.78   24283.73
00:07:35.065 PCIE (0000:00:12.0) NSID 2 from core 0:    17386.37     203.75    7301.32    5357.19   22367.17
00:07:35.065 PCIE (0000:00:12.0) NSID 3 from core 0:    17450.29     204.50    7263.48    5316.04   17684.03
00:07:35.065 ========================================================
00:07:35.065 Total                                    :  104382.15    1223.23    7320.45    5267.57   33116.01
00:07:35.065 
00:07:35.065 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0:
00:07:35.065 =================================================================================
00:07:35.065   1.00000% :  5520.148us
00:07:35.065  10.00000% :  5898.240us
00:07:35.065  25.00000% :  6150.302us
00:07:35.065  50.00000% :  6503.188us
00:07:35.065  75.00000% :  7309.785us
00:07:35.065  90.00000% :  9931.225us
00:07:35.065  95.00000% : 12451.840us
00:07:35.065  98.00000% : 13712.148us
00:07:35.065  99.00000% : 15526.991us
00:07:35.065  99.50000% : 27020.997us
00:07:35.065  99.90000% : 32667.175us
00:07:35.065  99.99000% : 33070.474us
00:07:35.065  99.99900% : 33272.123us
00:07:35.065  99.99990% : 33272.123us
00:07:35.065  99.99999% : 33272.123us
00:07:35.065 
00:07:35.065 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0:
00:07:35.065 =================================================================================
00:07:35.065   1.00000% :  5696.591us
00:07:35.065  10.00000% :  5923.446us
00:07:35.065  25.00000% :  6175.508us
00:07:35.065  50.00000% :  6503.188us
00:07:35.065  75.00000% :  7360.197us
00:07:35.065  90.00000% :  9880.812us
00:07:35.065  95.00000% : 11947.717us
00:07:35.065  98.00000% : 13712.148us
00:07:35.065  99.00000% : 15526.991us
00:07:35.065  99.50000% : 24399.557us
00:07:35.065  99.90000% : 29642.437us
00:07:35.065  99.99000% : 30247.385us
00:07:35.065  99.99900% : 30247.385us
00:07:35.065  99.99990% : 30247.385us
00:07:35.065  99.99999% : 30247.385us
00:07:35.065 
00:07:35.065 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0:
00:07:35.065 =================================================================================
00:07:35.065   1.00000% :  5671.385us
00:07:35.065  10.00000% :  5898.240us
00:07:35.065  25.00000% :  6175.508us
00:07:35.065  50.00000% :  6503.188us
00:07:35.065  75.00000% :  7259.372us
00:07:35.065  90.00000% :  9729.575us
00:07:35.065  95.00000% : 12048.542us
00:07:35.065  98.00000% : 14014.622us
00:07:35.065  99.00000% : 15224.517us
00:07:35.065  99.50000% : 21878.942us
00:07:35.065  99.90000% : 26819.348us
00:07:35.065  99.99000% : 27424.295us
00:07:35.065  99.99900% : 27424.295us
00:07:35.065  99.99990% : 27424.295us
00:07:35.065  99.99999% : 27424.295us
00:07:35.065 
00:07:35.065 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0:
00:07:35.065 =================================================================================
00:07:35.065   1.00000% :  5671.385us
00:07:35.065  10.00000% :  5923.446us
00:07:35.065  25.00000% :  6175.508us
00:07:35.065  50.00000% :  6503.188us
00:07:35.065  75.00000% :  7360.197us
00:07:35.065  90.00000% :  9779.988us
00:07:35.065  95.00000% : 12048.542us
00:07:35.065  98.00000% : 13913.797us
00:07:35.065  99.00000% : 14821.218us
00:07:35.065  99.50000% : 19660.800us
00:07:35.065  99.90000% : 23794.609us
00:07:35.065  99.99000% : 24298.732us
00:07:35.065  99.99900% : 24298.732us
00:07:35.065  99.99990% : 24298.732us
00:07:35.065  99.99999% : 24298.732us
00:07:35.065 
00:07:35.065 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0:
00:07:35.065 =================================================================================
00:07:35.065   1.00000% :  5671.385us
00:07:35.065  10.00000% :  5898.240us
00:07:35.065  25.00000% :  6175.508us
00:07:35.065  50.00000% :  6503.188us
00:07:35.065  75.00000% :  7360.197us
00:07:35.065  90.00000% :  9729.575us
00:07:35.065  95.00000% : 12199.778us
00:07:35.065  98.00000% : 14014.622us
00:07:35.065  99.00000% : 15022.868us
00:07:35.065  99.50000% : 17845.957us
00:07:35.065  99.90000% : 21979.766us
00:07:35.065  99.99000% : 22383.065us
00:07:35.065  99.99900% : 22383.065us
00:07:35.065  99.99990% : 22383.065us
00:07:35.065  99.99999% : 22383.065us
00:07:35.065 
00:07:35.065 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0:
00:07:35.065 =================================================================================
00:07:35.065   1.00000% :  5696.591us
00:07:35.065  10.00000% :  5898.240us
00:07:35.065  25.00000% :  6175.508us
00:07:35.065  50.00000% :  6503.188us
00:07:35.065  75.00000% :  7309.785us
00:07:35.065  90.00000% :  9779.988us
00:07:35.065  95.00000% : 12351.015us
00:07:35.065  98.00000% : 13510.498us
00:07:35.065  99.00000% : 14518.745us
00:07:35.065  99.50000% : 15224.517us
00:07:35.065  99.90000% : 17341.834us
00:07:35.065  99.99000% : 17745.132us
00:07:35.065  99.99900% : 17745.132us
00:07:35.065  99.99990% : 17745.132us
00:07:35.065  99.99999% : 17745.132us
00:07:35.065 
00:07:35.065 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0:
00:07:35.065 ==============================================================================
00:07:35.065        Range in us     Cumulative    IO count
00:07:35.066 [per-bucket data: 5242.880us - 33272.123us, 0.0057% rising to 100.0000% cumulative IO]
00:07:35.067 
00:07:35.067 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0:
00:07:35.067 ==============================================================================
00:07:35.067        Range in us     Cumulative    IO count
00:07:35.068 [per-bucket data: 5343.705us - 30247.385us, 0.0057% rising to 100.0000% cumulative IO]
00:07:35.068 
00:07:35.068 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0:
00:07:35.068 ==============================================================================
00:07:35.068        Range in us     Cumulative    IO count
00:07:35.069 [per-bucket data: 5318.498us - 11494.006us, 0.0057% rising to 94.4278% cumulative IO]
00:07:35.069 11494.006 - 11544.418: 94.5887% ( 28) 00:07:35.069 11544.418 - 11594.831: 94.6576% ( 12) 00:07:35.069 11594.831 - 11645.243: 94.6978% ( 7) 00:07:35.069 11645.243 - 11695.655: 94.7266% ( 5) 00:07:35.069 11695.655 - 11746.068: 94.7495% ( 4) 00:07:35.069 11746.068 - 11796.480: 94.7840% ( 6) 00:07:35.069 11796.480 - 11846.892: 94.8127% ( 5) 00:07:35.069 11846.892 - 11897.305: 94.8529% ( 7) 00:07:35.069 11897.305 - 11947.717: 94.8759% ( 4) 00:07:35.069 11947.717 - 11998.129: 94.9391% ( 11) 00:07:35.069 11998.129 - 12048.542: 95.0138% ( 13) 00:07:35.069 12048.542 - 12098.954: 95.0712% ( 10) 00:07:35.069 12098.954 - 12149.366: 95.0885% ( 3) 00:07:35.069 12149.366 - 12199.778: 95.1344% ( 8) 00:07:35.069 12199.778 - 12250.191: 95.1919% ( 10) 00:07:35.069 12250.191 - 12300.603: 95.2723% ( 14) 00:07:35.069 12300.603 - 12351.015: 95.3240% ( 9) 00:07:35.069 12351.015 - 12401.428: 95.3699% ( 8) 00:07:35.069 12401.428 - 12451.840: 95.4216% ( 9) 00:07:35.069 12451.840 - 12502.252: 95.4676% ( 8) 00:07:35.069 12502.252 - 12552.665: 95.5193% ( 9) 00:07:35.069 12552.665 - 12603.077: 95.5710% ( 9) 00:07:35.069 12603.077 - 12653.489: 95.6227% ( 9) 00:07:35.069 12653.489 - 12703.902: 95.7204% ( 17) 00:07:35.069 12703.902 - 12754.314: 95.7893% ( 12) 00:07:35.069 12754.314 - 12804.726: 95.8697% ( 14) 00:07:35.069 12804.726 - 12855.138: 95.9559% ( 15) 00:07:35.069 12855.138 - 12905.551: 96.0535% ( 17) 00:07:35.069 12905.551 - 13006.375: 96.3293% ( 48) 00:07:35.069 13006.375 - 13107.200: 96.6165% ( 50) 00:07:35.069 13107.200 - 13208.025: 96.7486% ( 23) 00:07:35.069 13208.025 - 13308.849: 96.8635% ( 20) 00:07:35.069 13308.849 - 13409.674: 96.9554% ( 16) 00:07:35.069 13409.674 - 13510.498: 97.1335% ( 31) 00:07:35.069 13510.498 - 13611.323: 97.3748% ( 42) 00:07:35.069 13611.323 - 13712.148: 97.5873% ( 37) 00:07:35.069 13712.148 - 13812.972: 97.7309% ( 25) 00:07:35.069 13812.972 - 13913.797: 97.8401% ( 19) 00:07:35.069 13913.797 - 14014.622: 98.0928% ( 44) 00:07:35.069 14014.622 - 14115.446: 98.1905% ( 17) 00:07:35.069 14115.446 - 14216.271: 98.2307% ( 7) 00:07:35.069 14216.271 - 14317.095: 98.2709% ( 7) 00:07:35.069 14317.095 - 14417.920: 98.3169% ( 8) 00:07:35.069 14417.920 - 14518.745: 98.4375% ( 21) 00:07:35.069 14518.745 - 14619.569: 98.7132% ( 48) 00:07:35.069 14619.569 - 14720.394: 98.8109% ( 17) 00:07:35.069 14720.394 - 14821.218: 98.8339% ( 4) 00:07:35.069 14821.218 - 14922.043: 98.8626% ( 5) 00:07:35.069 14922.043 - 15022.868: 98.9430% ( 14) 00:07:35.069 15022.868 - 15123.692: 98.9890% ( 8) 00:07:35.069 15123.692 - 15224.517: 99.0292% ( 7) 00:07:35.069 15224.517 - 15325.342: 99.0809% ( 9) 00:07:35.069 15325.342 - 15426.166: 99.1556% ( 13) 00:07:35.069 15426.166 - 15526.991: 99.1843% ( 5) 00:07:35.069 15526.991 - 15627.815: 99.1958% ( 2) 00:07:35.069 15627.815 - 15728.640: 99.2188% ( 4) 00:07:35.069 15728.640 - 15829.465: 99.2360% ( 3) 00:07:35.069 15829.465 - 15930.289: 99.2590% ( 4) 00:07:35.069 15930.289 - 16031.114: 99.2647% ( 1) 00:07:35.069 20467.397 - 20568.222: 99.2762% ( 2) 00:07:35.069 20568.222 - 20669.046: 99.2934% ( 3) 00:07:35.069 20669.046 - 20769.871: 99.3107% ( 3) 00:07:35.069 20769.871 - 20870.695: 99.3279% ( 3) 00:07:35.069 20870.695 - 20971.520: 99.3451% ( 3) 00:07:35.069 20971.520 - 21072.345: 99.3624% ( 3) 00:07:35.070 21072.345 - 21173.169: 99.3796% ( 3) 00:07:35.070 21173.169 - 21273.994: 99.3968% ( 3) 00:07:35.070 21273.994 - 21374.818: 99.4141% ( 3) 00:07:35.070 21374.818 - 21475.643: 99.4313% ( 3) 00:07:35.070 21475.643 - 21576.468: 99.4485% ( 3) 00:07:35.070 21576.468 - 
21677.292: 99.4658% ( 3) 00:07:35.070 21677.292 - 21778.117: 99.4830% ( 3) 00:07:35.070 21778.117 - 21878.942: 99.5002% ( 3) 00:07:35.070 21878.942 - 21979.766: 99.5175% ( 3) 00:07:35.070 21979.766 - 22080.591: 99.5347% ( 3) 00:07:35.070 22080.591 - 22181.415: 99.5519% ( 3) 00:07:35.070 22181.415 - 22282.240: 99.5749% ( 4) 00:07:35.070 22282.240 - 22383.065: 99.5921% ( 3) 00:07:35.070 22383.065 - 22483.889: 99.6094% ( 3) 00:07:35.070 22483.889 - 22584.714: 99.6209% ( 2) 00:07:35.070 22584.714 - 22685.538: 99.6324% ( 2) 00:07:35.070 25105.329 - 25206.154: 99.6496% ( 3) 00:07:35.070 25206.154 - 25306.978: 99.6726% ( 4) 00:07:35.070 25306.978 - 25407.803: 99.6841% ( 2) 00:07:35.070 25407.803 - 25508.628: 99.7070% ( 4) 00:07:35.070 25508.628 - 25609.452: 99.7185% ( 2) 00:07:35.070 25609.452 - 25710.277: 99.7300% ( 2) 00:07:35.070 25710.277 - 25811.102: 99.7472% ( 3) 00:07:35.070 25811.102 - 26012.751: 99.7817% ( 6) 00:07:35.070 26012.751 - 26214.400: 99.8162% ( 6) 00:07:35.070 26214.400 - 26416.049: 99.8506% ( 6) 00:07:35.070 26416.049 - 26617.698: 99.8909% ( 7) 00:07:35.070 26617.698 - 26819.348: 99.9253% ( 6) 00:07:35.070 26819.348 - 27020.997: 99.9598% ( 6) 00:07:35.070 27020.997 - 27222.646: 99.9885% ( 5) 00:07:35.070 27222.646 - 27424.295: 100.0000% ( 2) 00:07:35.070 00:07:35.070 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:35.070 ============================================================================== 00:07:35.070 Range in us Cumulative IO count 00:07:35.070 5343.705 - 5368.911: 0.0115% ( 2) 00:07:35.070 5368.911 - 5394.117: 0.0287% ( 3) 00:07:35.070 5394.117 - 5419.323: 0.0345% ( 1) 00:07:35.070 5419.323 - 5444.529: 0.0517% ( 3) 00:07:35.070 5444.529 - 5469.735: 0.0632% ( 2) 00:07:35.070 5469.735 - 5494.942: 0.0747% ( 2) 00:07:35.070 5494.942 - 5520.148: 0.0977% ( 4) 00:07:35.070 5520.148 - 5545.354: 0.1321% ( 6) 00:07:35.070 5545.354 - 5570.560: 0.1551% ( 4) 00:07:35.070 5570.560 - 5595.766: 0.2298% ( 13) 00:07:35.070 5595.766 - 5620.972: 0.3619% ( 23) 00:07:35.070 5620.972 - 5646.178: 0.6319% ( 47) 00:07:35.070 5646.178 - 5671.385: 1.0340% ( 70) 00:07:35.070 5671.385 - 5696.591: 1.5682% ( 93) 00:07:35.070 5696.591 - 5721.797: 2.1369% ( 99) 00:07:35.070 5721.797 - 5747.003: 2.9354% ( 139) 00:07:35.070 5747.003 - 5772.209: 4.4233% ( 259) 00:07:35.070 5772.209 - 5797.415: 5.7100% ( 224) 00:07:35.070 5797.415 - 5822.622: 6.7210% ( 176) 00:07:35.070 5822.622 - 5847.828: 7.6746% ( 166) 00:07:35.070 5847.828 - 5873.034: 8.6340% ( 167) 00:07:35.070 5873.034 - 5898.240: 9.8977% ( 220) 00:07:35.070 5898.240 - 5923.446: 10.8915% ( 173) 00:07:35.070 5923.446 - 5948.652: 12.0749% ( 206) 00:07:35.070 5948.652 - 5973.858: 13.3215% ( 217) 00:07:35.070 5973.858 - 5999.065: 15.0103% ( 294) 00:07:35.070 5999.065 - 6024.271: 16.7222% ( 298) 00:07:35.070 6024.271 - 6049.477: 18.1928% ( 256) 00:07:35.070 6049.477 - 6074.683: 19.7495% ( 271) 00:07:35.070 6074.683 - 6099.889: 21.3982% ( 287) 00:07:35.070 6099.889 - 6125.095: 22.9894% ( 277) 00:07:35.070 6125.095 - 6150.302: 24.7415% ( 305) 00:07:35.070 6150.302 - 6175.508: 26.6716% ( 336) 00:07:35.070 6175.508 - 6200.714: 28.5041% ( 319) 00:07:35.070 6200.714 - 6225.920: 30.2390% ( 302) 00:07:35.070 6225.920 - 6251.126: 31.8359% ( 278) 00:07:35.070 6251.126 - 6276.332: 33.9901% ( 375) 00:07:35.070 6276.332 - 6301.538: 35.8571% ( 325) 00:07:35.070 6301.538 - 6326.745: 37.4943% ( 285) 00:07:35.070 6326.745 - 6351.951: 39.3727% ( 327) 00:07:35.070 6351.951 - 6377.157: 41.0788% ( 297) 00:07:35.070 6377.157 - 6402.363: 43.1526% ( 361) 
00:07:35.070 6402.363 - 6427.569: 45.1689% ( 351) 00:07:35.070 6427.569 - 6452.775: 47.1852% ( 351) 00:07:35.070 6452.775 - 6503.188: 51.6085% ( 770) 00:07:35.070 6503.188 - 6553.600: 55.2677% ( 637) 00:07:35.070 6553.600 - 6604.012: 58.0308% ( 481) 00:07:35.070 6604.012 - 6654.425: 60.5641% ( 441) 00:07:35.070 6654.425 - 6704.837: 62.7068% ( 373) 00:07:35.070 6704.837 - 6755.249: 64.4991% ( 312) 00:07:35.070 6755.249 - 6805.662: 66.3258% ( 318) 00:07:35.070 6805.662 - 6856.074: 67.6356% ( 228) 00:07:35.070 6856.074 - 6906.486: 69.0889% ( 253) 00:07:35.070 6906.486 - 6956.898: 70.0310% ( 164) 00:07:35.070 6956.898 - 7007.311: 70.8697% ( 146) 00:07:35.070 7007.311 - 7057.723: 71.6854% ( 142) 00:07:35.070 7057.723 - 7108.135: 72.4207% ( 128) 00:07:35.070 7108.135 - 7158.548: 73.1273% ( 123) 00:07:35.070 7158.548 - 7208.960: 73.9602% ( 145) 00:07:35.070 7208.960 - 7259.372: 74.4485% ( 85) 00:07:35.070 7259.372 - 7309.785: 74.9828% ( 93) 00:07:35.070 7309.785 - 7360.197: 75.4481% ( 81) 00:07:35.070 7360.197 - 7410.609: 75.9708% ( 91) 00:07:35.070 7410.609 - 7461.022: 76.3500% ( 66) 00:07:35.070 7461.022 - 7511.434: 76.7348% ( 67) 00:07:35.070 7511.434 - 7561.846: 77.0048% ( 47) 00:07:35.070 7561.846 - 7612.258: 77.4242% ( 73) 00:07:35.070 7612.258 - 7662.671: 77.7057% ( 49) 00:07:35.070 7662.671 - 7713.083: 77.9239% ( 38) 00:07:35.070 7713.083 - 7763.495: 78.2227% ( 52) 00:07:35.070 7763.495 - 7813.908: 78.3720% ( 26) 00:07:35.070 7813.908 - 7864.320: 78.5558% ( 32) 00:07:35.070 7864.320 - 7914.732: 78.7799% ( 39) 00:07:35.070 7914.732 - 7965.145: 79.0499% ( 47) 00:07:35.070 7965.145 - 8015.557: 79.3141% ( 46) 00:07:35.070 8015.557 - 8065.969: 79.7277% ( 72) 00:07:35.070 8065.969 - 8116.382: 80.0666% ( 59) 00:07:35.070 8116.382 - 8166.794: 80.2677% ( 35) 00:07:35.070 8166.794 - 8217.206: 80.4975% ( 40) 00:07:35.070 8217.206 - 8267.618: 80.7215% ( 39) 00:07:35.070 8267.618 - 8318.031: 80.9915% ( 47) 00:07:35.070 8318.031 - 8368.443: 81.2385% ( 43) 00:07:35.070 8368.443 - 8418.855: 81.4511% ( 37) 00:07:35.070 8418.855 - 8469.268: 81.6004% ( 26) 00:07:35.070 8469.268 - 8519.680: 81.7383% ( 24) 00:07:35.070 8519.680 - 8570.092: 81.8417% ( 18) 00:07:35.070 8570.092 - 8620.505: 81.9566% ( 20) 00:07:35.070 8620.505 - 8670.917: 82.0427% ( 15) 00:07:35.070 8670.917 - 8721.329: 82.1634% ( 21) 00:07:35.070 8721.329 - 8771.742: 82.2955% ( 23) 00:07:35.070 8771.742 - 8822.154: 82.4276% ( 23) 00:07:35.070 8822.154 - 8872.566: 82.5827% ( 27) 00:07:35.070 8872.566 - 8922.978: 82.8125% ( 40) 00:07:35.070 8922.978 - 8973.391: 83.0193% ( 36) 00:07:35.070 8973.391 - 9023.803: 83.3984% ( 66) 00:07:35.070 9023.803 - 9074.215: 83.7776% ( 66) 00:07:35.070 9074.215 - 9124.628: 84.2773% ( 87) 00:07:35.070 9124.628 - 9175.040: 84.7024% ( 74) 00:07:35.070 9175.040 - 9225.452: 85.2080% ( 88) 00:07:35.070 9225.452 - 9275.865: 85.7250% ( 90) 00:07:35.070 9275.865 - 9326.277: 86.1960% ( 82) 00:07:35.070 9326.277 - 9376.689: 86.7015% ( 88) 00:07:35.070 9376.689 - 9427.102: 87.1898% ( 85) 00:07:35.070 9427.102 - 9477.514: 87.7125% ( 91) 00:07:35.070 9477.514 - 9527.926: 88.2353% ( 91) 00:07:35.070 9527.926 - 9578.338: 88.8212% ( 102) 00:07:35.070 9578.338 - 9628.751: 89.2463% ( 74) 00:07:35.070 9628.751 - 9679.163: 89.5565% ( 54) 00:07:35.070 9679.163 - 9729.575: 89.8782% ( 56) 00:07:35.070 9729.575 - 9779.988: 90.1769% ( 52) 00:07:35.070 9779.988 - 9830.400: 90.5216% ( 60) 00:07:35.070 9830.400 - 9880.812: 90.7341% ( 37) 00:07:35.070 9880.812 - 9931.225: 90.9007% ( 29) 00:07:35.070 9931.225 - 9981.637: 91.0443% ( 25) 
00:07:35.071 9981.637 - 10032.049: 91.2167% ( 30) 00:07:35.071 10032.049 - 10082.462: 91.4177% ( 35) 00:07:35.071 10082.462 - 10132.874: 91.5901% ( 30) 00:07:35.071 10132.874 - 10183.286: 91.6762% ( 15) 00:07:35.071 10183.286 - 10233.698: 91.7624% ( 15) 00:07:35.071 10233.698 - 10284.111: 91.8773% ( 20) 00:07:35.071 10284.111 - 10334.523: 92.0209% ( 25) 00:07:35.071 10334.523 - 10384.935: 92.1818% ( 28) 00:07:35.071 10384.935 - 10435.348: 92.2737% ( 16) 00:07:35.071 10435.348 - 10485.760: 92.3771% ( 18) 00:07:35.071 10485.760 - 10536.172: 92.4977% ( 21) 00:07:35.071 10536.172 - 10586.585: 92.6356% ( 24) 00:07:35.071 10586.585 - 10636.997: 92.7447% ( 19) 00:07:35.071 10636.997 - 10687.409: 92.8079% ( 11) 00:07:35.071 10687.409 - 10737.822: 92.8883% ( 14) 00:07:35.071 10737.822 - 10788.234: 92.9515% ( 11) 00:07:35.071 10788.234 - 10838.646: 93.0205% ( 12) 00:07:35.071 10838.646 - 10889.058: 93.1181% ( 17) 00:07:35.071 10889.058 - 10939.471: 93.1928% ( 13) 00:07:35.071 10939.471 - 10989.883: 93.3651% ( 30) 00:07:35.071 10989.883 - 11040.295: 93.4455% ( 14) 00:07:35.071 11040.295 - 11090.708: 93.5317% ( 15) 00:07:35.071 11090.708 - 11141.120: 93.6121% ( 14) 00:07:35.071 11141.120 - 11191.532: 93.6983% ( 15) 00:07:35.071 11191.532 - 11241.945: 93.7902% ( 16) 00:07:35.071 11241.945 - 11292.357: 93.9625% ( 30) 00:07:35.071 11292.357 - 11342.769: 94.0602% ( 17) 00:07:35.071 11342.769 - 11393.182: 94.1062% ( 8) 00:07:35.071 11393.182 - 11443.594: 94.1464% ( 7) 00:07:35.071 11443.594 - 11494.006: 94.1693% ( 4) 00:07:35.071 11494.006 - 11544.418: 94.1981% ( 5) 00:07:35.071 11544.418 - 11594.831: 94.2498% ( 9) 00:07:35.071 11594.831 - 11645.243: 94.3072% ( 10) 00:07:35.071 11645.243 - 11695.655: 94.4164% ( 19) 00:07:35.071 11695.655 - 11746.068: 94.4795% ( 11) 00:07:35.071 11746.068 - 11796.480: 94.5255% ( 8) 00:07:35.071 11796.480 - 11846.892: 94.5772% ( 9) 00:07:35.071 11846.892 - 11897.305: 94.7093% ( 23) 00:07:35.071 11897.305 - 11947.717: 94.8127% ( 18) 00:07:35.071 11947.717 - 11998.129: 94.8989% ( 15) 00:07:35.071 11998.129 - 12048.542: 95.0310% ( 23) 00:07:35.071 12048.542 - 12098.954: 95.2034% ( 30) 00:07:35.071 12098.954 - 12149.366: 95.2953% ( 16) 00:07:35.071 12149.366 - 12199.778: 95.3814% ( 15) 00:07:35.071 12199.778 - 12250.191: 95.4159% ( 6) 00:07:35.071 12250.191 - 12300.603: 95.4561% ( 7) 00:07:35.071 12300.603 - 12351.015: 95.4963% ( 7) 00:07:35.071 12351.015 - 12401.428: 95.5250% ( 5) 00:07:35.071 12401.428 - 12451.840: 95.5423% ( 3) 00:07:35.071 12451.840 - 12502.252: 95.5710% ( 5) 00:07:35.071 12502.252 - 12552.665: 95.5882% ( 3) 00:07:35.071 12552.665 - 12603.077: 95.6112% ( 4) 00:07:35.071 12603.077 - 12653.489: 95.6284% ( 3) 00:07:35.071 12653.489 - 12703.902: 95.6572% ( 5) 00:07:35.071 12703.902 - 12754.314: 95.7031% ( 8) 00:07:35.071 12754.314 - 12804.726: 95.7548% ( 9) 00:07:35.071 12804.726 - 12855.138: 95.7950% ( 7) 00:07:35.071 12855.138 - 12905.551: 95.8525% ( 10) 00:07:35.071 12905.551 - 13006.375: 96.0823% ( 40) 00:07:35.071 13006.375 - 13107.200: 96.4327% ( 61) 00:07:35.071 13107.200 - 13208.025: 96.6969% ( 46) 00:07:35.071 13208.025 - 13308.849: 96.8807% ( 32) 00:07:35.071 13308.849 - 13409.674: 97.0933% ( 37) 00:07:35.071 13409.674 - 13510.498: 97.2771% ( 32) 00:07:35.071 13510.498 - 13611.323: 97.4782% ( 35) 00:07:35.071 13611.323 - 13712.148: 97.6562% ( 31) 00:07:35.071 13712.148 - 13812.972: 97.9665% ( 54) 00:07:35.071 13812.972 - 13913.797: 98.0756% ( 19) 00:07:35.071 13913.797 - 14014.622: 98.1158% ( 7) 00:07:35.071 14014.622 - 14115.446: 98.1733% ( 10) 
00:07:35.071 14115.446 - 14216.271: 98.2364% ( 11) 00:07:35.071 14216.271 - 14317.095: 98.3341% ( 17) 00:07:35.071 14317.095 - 14417.920: 98.5696% ( 41) 00:07:35.071 14417.920 - 14518.745: 98.6788% ( 19) 00:07:35.071 14518.745 - 14619.569: 98.7994% ( 21) 00:07:35.071 14619.569 - 14720.394: 98.9890% ( 33) 00:07:35.071 14720.394 - 14821.218: 99.1613% ( 30) 00:07:35.071 14821.218 - 14922.043: 99.2188% ( 10) 00:07:35.071 14922.043 - 15022.868: 99.2475% ( 5) 00:07:35.071 15022.868 - 15123.692: 99.2647% ( 3) 00:07:35.071 18551.729 - 18652.554: 99.2819% ( 3) 00:07:35.071 18652.554 - 18753.378: 99.3049% ( 4) 00:07:35.071 18753.378 - 18854.203: 99.3279% ( 4) 00:07:35.071 18854.203 - 18955.028: 99.3566% ( 5) 00:07:35.071 18955.028 - 19055.852: 99.3796% ( 4) 00:07:35.071 19055.852 - 19156.677: 99.4026% ( 4) 00:07:35.071 19156.677 - 19257.502: 99.4256% ( 4) 00:07:35.071 19257.502 - 19358.326: 99.4485% ( 4) 00:07:35.071 19358.326 - 19459.151: 99.4715% ( 4) 00:07:35.071 19459.151 - 19559.975: 99.4945% ( 4) 00:07:35.071 19559.975 - 19660.800: 99.5117% ( 3) 00:07:35.071 19660.800 - 19761.625: 99.5404% ( 5) 00:07:35.071 19761.625 - 19862.449: 99.5634% ( 4) 00:07:35.071 19862.449 - 19963.274: 99.5864% ( 4) 00:07:35.071 19963.274 - 20064.098: 99.6094% ( 4) 00:07:35.071 20064.098 - 20164.923: 99.6324% ( 4) 00:07:35.071 22584.714 - 22685.538: 99.6438% ( 2) 00:07:35.071 22685.538 - 22786.363: 99.6668% ( 4) 00:07:35.071 22786.363 - 22887.188: 99.6898% ( 4) 00:07:35.071 22887.188 - 22988.012: 99.7128% ( 4) 00:07:35.071 22988.012 - 23088.837: 99.7358% ( 4) 00:07:35.071 23088.837 - 23189.662: 99.7587% ( 4) 00:07:35.071 23189.662 - 23290.486: 99.7817% ( 4) 00:07:35.071 23290.486 - 23391.311: 99.8047% ( 4) 00:07:35.071 23391.311 - 23492.135: 99.8334% ( 5) 00:07:35.071 23492.135 - 23592.960: 99.8564% ( 4) 00:07:35.071 23592.960 - 23693.785: 99.8794% ( 4) 00:07:35.071 23693.785 - 23794.609: 99.9023% ( 4) 00:07:35.071 23794.609 - 23895.434: 99.9253% ( 4) 00:07:35.071 23895.434 - 23996.258: 99.9426% ( 3) 00:07:35.071 23996.258 - 24097.083: 99.9655% ( 4) 00:07:35.071 24097.083 - 24197.908: 99.9828% ( 3) 00:07:35.071 24197.908 - 24298.732: 100.0000% ( 3) 00:07:35.071 00:07:35.071 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:35.071 ============================================================================== 00:07:35.071 Range in us Cumulative IO count 00:07:35.071 5343.705 - 5368.911: 0.0057% ( 1) 00:07:35.071 5368.911 - 5394.117: 0.0230% ( 3) 00:07:35.071 5394.117 - 5419.323: 0.0345% ( 2) 00:07:35.071 5419.323 - 5444.529: 0.0460% ( 2) 00:07:35.071 5444.529 - 5469.735: 0.0517% ( 1) 00:07:35.071 5469.735 - 5494.942: 0.0574% ( 1) 00:07:35.071 5494.942 - 5520.148: 0.0747% ( 3) 00:07:35.071 5520.148 - 5545.354: 0.0862% ( 2) 00:07:35.071 5545.354 - 5570.560: 0.1264% ( 7) 00:07:35.071 5570.560 - 5595.766: 0.2068% ( 14) 00:07:35.071 5595.766 - 5620.972: 0.3274% ( 21) 00:07:35.071 5620.972 - 5646.178: 0.6664% ( 59) 00:07:35.071 5646.178 - 5671.385: 1.0225% ( 62) 00:07:35.071 5671.385 - 5696.591: 1.6314% ( 106) 00:07:35.071 5696.591 - 5721.797: 2.2920% ( 115) 00:07:35.071 5721.797 - 5747.003: 3.1250% ( 145) 00:07:35.071 5747.003 - 5772.209: 4.6186% ( 260) 00:07:35.071 5772.209 - 5797.415: 5.5262% ( 158) 00:07:35.071 5797.415 - 5822.622: 6.6579% ( 197) 00:07:35.071 5822.622 - 5847.828: 7.6114% ( 166) 00:07:35.071 5847.828 - 5873.034: 8.8580% ( 217) 00:07:35.071 5873.034 - 5898.240: 10.0758% ( 212) 00:07:35.071 5898.240 - 5923.446: 11.0983% ( 178) 00:07:35.071 5923.446 - 5948.652: 11.9543% ( 149) 00:07:35.071 
5948.652 - 5973.858: 13.6317% ( 292) 00:07:35.071 5973.858 - 5999.065: 14.9644% ( 232) 00:07:35.071 5999.065 - 6024.271: 16.5211% ( 271) 00:07:35.071 6024.271 - 6049.477: 18.0032% ( 258) 00:07:35.071 6049.477 - 6074.683: 19.6634% ( 289) 00:07:35.071 6074.683 - 6099.889: 21.5016% ( 320) 00:07:35.071 6099.889 - 6125.095: 22.9837% ( 258) 00:07:35.071 6125.095 - 6150.302: 24.9540% ( 343) 00:07:35.071 6150.302 - 6175.508: 27.0623% ( 367) 00:07:35.071 6175.508 - 6200.714: 29.1360% ( 361) 00:07:35.071 6200.714 - 6225.920: 30.9628% ( 318) 00:07:35.071 6225.920 - 6251.126: 32.8412% ( 327) 00:07:35.071 6251.126 - 6276.332: 34.4095% ( 273) 00:07:35.071 6276.332 - 6301.538: 36.1098% ( 296) 00:07:35.071 6301.538 - 6326.745: 37.9021% ( 312) 00:07:35.071 6326.745 - 6351.951: 39.6829% ( 310) 00:07:35.071 6351.951 - 6377.157: 41.4062% ( 300) 00:07:35.071 6377.157 - 6402.363: 43.3709% ( 342) 00:07:35.071 6402.363 - 6427.569: 45.5365% ( 377) 00:07:35.071 6427.569 - 6452.775: 47.5643% ( 353) 00:07:35.071 6452.775 - 6503.188: 51.7693% ( 732) 00:07:35.071 6503.188 - 6553.600: 55.4688% ( 644) 00:07:35.071 6553.600 - 6604.012: 58.3352% ( 499) 00:07:35.071 6604.012 - 6654.425: 61.0524% ( 473) 00:07:35.071 6654.425 - 6704.837: 63.1836% ( 371) 00:07:35.071 6704.837 - 6755.249: 64.9874% ( 314) 00:07:35.071 6755.249 - 6805.662: 66.7222% ( 302) 00:07:35.071 6805.662 - 6856.074: 67.8998% ( 205) 00:07:35.071 6856.074 - 6906.486: 68.7155% ( 142) 00:07:35.071 6906.486 - 6956.898: 69.7208% ( 175) 00:07:35.071 6956.898 - 7007.311: 70.6284% ( 158) 00:07:35.072 7007.311 - 7057.723: 71.5935% ( 168) 00:07:35.072 7057.723 - 7108.135: 72.3805% ( 137) 00:07:35.072 7108.135 - 7158.548: 72.9952% ( 107) 00:07:35.072 7158.548 - 7208.960: 73.4720% ( 83) 00:07:35.072 7208.960 - 7259.372: 74.0177% ( 95) 00:07:35.072 7259.372 - 7309.785: 74.4945% ( 83) 00:07:35.072 7309.785 - 7360.197: 75.0057% ( 89) 00:07:35.072 7360.197 - 7410.609: 75.7870% ( 136) 00:07:35.072 7410.609 - 7461.022: 76.2810% ( 86) 00:07:35.072 7461.022 - 7511.434: 77.1140% ( 145) 00:07:35.072 7511.434 - 7561.846: 77.5046% ( 68) 00:07:35.072 7561.846 - 7612.258: 77.7976% ( 51) 00:07:35.072 7612.258 - 7662.671: 78.1078% ( 54) 00:07:35.072 7662.671 - 7713.083: 78.3433% ( 41) 00:07:35.072 7713.083 - 7763.495: 78.5386% ( 34) 00:07:35.072 7763.495 - 7813.908: 78.8431% ( 53) 00:07:35.072 7813.908 - 7864.320: 79.0958% ( 44) 00:07:35.072 7864.320 - 7914.732: 79.2165% ( 21) 00:07:35.072 7914.732 - 7965.145: 79.3313% ( 20) 00:07:35.072 7965.145 - 8015.557: 79.4520% ( 21) 00:07:35.072 8015.557 - 8065.969: 79.6186% ( 29) 00:07:35.072 8065.969 - 8116.382: 79.7679% ( 26) 00:07:35.072 8116.382 - 8166.794: 79.9747% ( 36) 00:07:35.072 8166.794 - 8217.206: 80.2849% ( 54) 00:07:35.072 8217.206 - 8267.618: 80.6066% ( 56) 00:07:35.072 8267.618 - 8318.031: 80.9858% ( 66) 00:07:35.072 8318.031 - 8368.443: 81.1753% ( 33) 00:07:35.072 8368.443 - 8418.855: 81.3189% ( 25) 00:07:35.072 8418.855 - 8469.268: 81.4798% ( 28) 00:07:35.072 8469.268 - 8519.680: 81.7325% ( 44) 00:07:35.072 8519.680 - 8570.092: 81.9106% ( 31) 00:07:35.072 8570.092 - 8620.505: 82.1174% ( 36) 00:07:35.072 8620.505 - 8670.917: 82.3759% ( 45) 00:07:35.072 8670.917 - 8721.329: 82.5195% ( 25) 00:07:35.072 8721.329 - 8771.742: 82.7263% ( 36) 00:07:35.072 8771.742 - 8822.154: 82.9619% ( 41) 00:07:35.072 8822.154 - 8872.566: 83.2663% ( 53) 00:07:35.072 8872.566 - 8922.978: 83.5765% ( 54) 00:07:35.072 8922.978 - 8973.391: 83.8523% ( 48) 00:07:35.072 8973.391 - 9023.803: 84.1395% ( 50) 00:07:35.072 9023.803 - 9074.215: 84.4497% ( 54) 
00:07:35.072 9074.215 - 9124.628: 84.8231% ( 65) 00:07:35.072 9124.628 - 9175.040: 85.2080% ( 67) 00:07:35.072 9175.040 - 9225.452: 85.6101% ( 70) 00:07:35.072 9225.452 - 9275.865: 86.0466% ( 76) 00:07:35.072 9275.865 - 9326.277: 86.4947% ( 78) 00:07:35.072 9326.277 - 9376.689: 86.9370% ( 77) 00:07:35.072 9376.689 - 9427.102: 87.4253% ( 85) 00:07:35.072 9427.102 - 9477.514: 87.9825% ( 97) 00:07:35.072 9477.514 - 9527.926: 88.5455% ( 98) 00:07:35.072 9527.926 - 9578.338: 89.0395% ( 86) 00:07:35.072 9578.338 - 9628.751: 89.4072% ( 64) 00:07:35.072 9628.751 - 9679.163: 89.7748% ( 64) 00:07:35.072 9679.163 - 9729.575: 90.0563% ( 49) 00:07:35.072 9729.575 - 9779.988: 90.3091% ( 44) 00:07:35.072 9779.988 - 9830.400: 90.4814% ( 30) 00:07:35.072 9830.400 - 9880.812: 90.6020% ( 21) 00:07:35.072 9880.812 - 9931.225: 90.7227% ( 21) 00:07:35.072 9931.225 - 9981.637: 90.9007% ( 31) 00:07:35.072 9981.637 - 10032.049: 90.9984% ( 17) 00:07:35.072 10032.049 - 10082.462: 91.1075% ( 19) 00:07:35.072 10082.462 - 10132.874: 91.2224% ( 20) 00:07:35.072 10132.874 - 10183.286: 91.3028% ( 14) 00:07:35.072 10183.286 - 10233.698: 91.4120% ( 19) 00:07:35.072 10233.698 - 10284.111: 91.5556% ( 25) 00:07:35.072 10284.111 - 10334.523: 91.6475% ( 16) 00:07:35.072 10334.523 - 10384.935: 91.7279% ( 14) 00:07:35.072 10384.935 - 10435.348: 91.7969% ( 12) 00:07:35.072 10435.348 - 10485.760: 91.8716% ( 13) 00:07:35.072 10485.760 - 10536.172: 91.9462% ( 13) 00:07:35.072 10536.172 - 10586.585: 92.0209% ( 13) 00:07:35.072 10586.585 - 10636.997: 92.1128% ( 16) 00:07:35.072 10636.997 - 10687.409: 92.1932% ( 14) 00:07:35.072 10687.409 - 10737.822: 92.2507% ( 10) 00:07:35.072 10737.822 - 10788.234: 92.3139% ( 11) 00:07:35.072 10788.234 - 10838.646: 92.3656% ( 9) 00:07:35.072 10838.646 - 10889.058: 92.4460% ( 14) 00:07:35.072 10889.058 - 10939.471: 92.5149% ( 12) 00:07:35.072 10939.471 - 10989.883: 92.6356% ( 21) 00:07:35.072 10989.883 - 11040.295: 92.8653% ( 40) 00:07:35.072 11040.295 - 11090.708: 93.0377% ( 30) 00:07:35.072 11090.708 - 11141.120: 93.1756% ( 24) 00:07:35.072 11141.120 - 11191.532: 93.2790% ( 18) 00:07:35.072 11191.532 - 11241.945: 93.3938% ( 20) 00:07:35.072 11241.945 - 11292.357: 93.5489% ( 27) 00:07:35.072 11292.357 - 11342.769: 93.7902% ( 42) 00:07:35.072 11342.769 - 11393.182: 93.9453% ( 27) 00:07:35.072 11393.182 - 11443.594: 94.0602% ( 20) 00:07:35.072 11443.594 - 11494.006: 94.1579% ( 17) 00:07:35.072 11494.006 - 11544.418: 94.2325% ( 13) 00:07:35.072 11544.418 - 11594.831: 94.3130% ( 14) 00:07:35.072 11594.831 - 11645.243: 94.3704% ( 10) 00:07:35.072 11645.243 - 11695.655: 94.4393% ( 12) 00:07:35.072 11695.655 - 11746.068: 94.5083% ( 12) 00:07:35.072 11746.068 - 11796.480: 94.5427% ( 6) 00:07:35.072 11796.480 - 11846.892: 94.5887% ( 8) 00:07:35.072 11846.892 - 11897.305: 94.6347% ( 8) 00:07:35.072 11897.305 - 11947.717: 94.6634% ( 5) 00:07:35.072 11947.717 - 11998.129: 94.7323% ( 12) 00:07:35.072 11998.129 - 12048.542: 94.8185% ( 15) 00:07:35.072 12048.542 - 12098.954: 94.8817% ( 11) 00:07:35.072 12098.954 - 12149.366: 94.9563% ( 13) 00:07:35.072 12149.366 - 12199.778: 95.1689% ( 37) 00:07:35.072 12199.778 - 12250.191: 95.3010% ( 23) 00:07:35.072 12250.191 - 12300.603: 95.3699% ( 12) 00:07:35.072 12300.603 - 12351.015: 95.4044% ( 6) 00:07:35.072 12351.015 - 12401.428: 95.4331% ( 5) 00:07:35.072 12401.428 - 12451.840: 95.4676% ( 6) 00:07:35.072 12451.840 - 12502.252: 95.4906% ( 4) 00:07:35.072 12502.252 - 12552.665: 95.5595% ( 12) 00:07:35.072 12552.665 - 12603.077: 95.5882% ( 5) 00:07:35.072 12603.077 - 
12653.489: 95.6112% ( 4) 00:07:35.072 12653.489 - 12703.902: 95.6514% ( 7) 00:07:35.072 12703.902 - 12754.314: 95.6916% ( 7) 00:07:35.072 12754.314 - 12804.726: 95.7261% ( 6) 00:07:35.072 12804.726 - 12855.138: 95.7778% ( 9) 00:07:35.072 12855.138 - 12905.551: 95.8180% ( 7) 00:07:35.072 12905.551 - 13006.375: 95.9214% ( 18) 00:07:35.072 13006.375 - 13107.200: 96.0938% ( 30) 00:07:35.072 13107.200 - 13208.025: 96.3752% ( 49) 00:07:35.072 13208.025 - 13308.849: 96.5648% ( 33) 00:07:35.072 13308.849 - 13409.674: 96.8061% ( 42) 00:07:35.072 13409.674 - 13510.498: 97.0071% ( 35) 00:07:35.072 13510.498 - 13611.323: 97.2197% ( 37) 00:07:35.072 13611.323 - 13712.148: 97.4609% ( 42) 00:07:35.072 13712.148 - 13812.972: 97.6562% ( 34) 00:07:35.072 13812.972 - 13913.797: 97.9779% ( 56) 00:07:35.072 13913.797 - 14014.622: 98.1503% ( 30) 00:07:35.072 14014.622 - 14115.446: 98.3571% ( 36) 00:07:35.072 14115.446 - 14216.271: 98.5237% ( 29) 00:07:35.072 14216.271 - 14317.095: 98.7075% ( 32) 00:07:35.072 14317.095 - 14417.920: 98.7592% ( 9) 00:07:35.072 14417.920 - 14518.745: 98.8109% ( 9) 00:07:35.072 14518.745 - 14619.569: 98.8683% ( 10) 00:07:35.072 14619.569 - 14720.394: 98.8971% ( 5) 00:07:35.072 14720.394 - 14821.218: 98.9085% ( 2) 00:07:35.072 14821.218 - 14922.043: 98.9660% ( 10) 00:07:35.072 14922.043 - 15022.868: 99.1958% ( 40) 00:07:35.072 15022.868 - 15123.692: 99.2360% ( 7) 00:07:35.072 15123.692 - 15224.517: 99.2647% ( 5) 00:07:35.072 16736.886 - 16837.711: 99.2762% ( 2) 00:07:35.072 16837.711 - 16938.535: 99.2992% ( 4) 00:07:35.072 16938.535 - 17039.360: 99.3222% ( 4) 00:07:35.072 17039.360 - 17140.185: 99.3451% ( 4) 00:07:35.072 17140.185 - 17241.009: 99.3681% ( 4) 00:07:35.072 17241.009 - 17341.834: 99.3968% ( 5) 00:07:35.072 17341.834 - 17442.658: 99.4198% ( 4) 00:07:35.072 17442.658 - 17543.483: 99.4428% ( 4) 00:07:35.072 17543.483 - 17644.308: 99.4658% ( 4) 00:07:35.072 17644.308 - 17745.132: 99.4887% ( 4) 00:07:35.072 17745.132 - 17845.957: 99.5117% ( 4) 00:07:35.072 17845.957 - 17946.782: 99.5347% ( 4) 00:07:35.072 17946.782 - 18047.606: 99.5577% ( 4) 00:07:35.072 18047.606 - 18148.431: 99.5864% ( 5) 00:07:35.072 18148.431 - 18249.255: 99.6094% ( 4) 00:07:35.072 18249.255 - 18350.080: 99.6324% ( 4) 00:07:35.072 20769.871 - 20870.695: 99.6438% ( 2) 00:07:35.072 20870.695 - 20971.520: 99.6668% ( 4) 00:07:35.072 20971.520 - 21072.345: 99.6898% ( 4) 00:07:35.072 21072.345 - 21173.169: 99.7128% ( 4) 00:07:35.072 21173.169 - 21273.994: 99.7358% ( 4) 00:07:35.072 21273.994 - 21374.818: 99.7587% ( 4) 00:07:35.072 21374.818 - 21475.643: 99.7875% ( 5) 00:07:35.072 21475.643 - 21576.468: 99.8104% ( 4) 00:07:35.072 21576.468 - 21677.292: 99.8277% ( 3) 00:07:35.072 21677.292 - 21778.117: 99.8564% ( 5) 00:07:35.072 21778.117 - 21878.942: 99.8794% ( 4) 00:07:35.072 21878.942 - 21979.766: 99.9023% ( 4) 00:07:35.072 21979.766 - 22080.591: 99.9311% ( 5) 00:07:35.072 22080.591 - 22181.415: 99.9540% ( 4) 00:07:35.072 22181.415 - 22282.240: 99.9770% ( 4) 00:07:35.072 22282.240 - 22383.065: 100.0000% ( 4) 00:07:35.072 00:07:35.072 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:35.072 ============================================================================== 00:07:35.072 Range in us Cumulative IO count 00:07:35.072 5293.292 - 5318.498: 0.0057% ( 1) 00:07:35.072 5318.498 - 5343.705: 0.0114% ( 1) 00:07:35.072 5343.705 - 5368.911: 0.0172% ( 1) 00:07:35.072 5394.117 - 5419.323: 0.0343% ( 3) 00:07:35.072 5419.323 - 5444.529: 0.0401% ( 1) 00:07:35.072 5469.735 - 5494.942: 0.0515% ( 2) 
00:07:35.072 5494.942 - 5520.148: 0.0687% ( 3) 00:07:35.072 5520.148 - 5545.354: 0.0859% ( 3) 00:07:35.073 5545.354 - 5570.560: 0.1145% ( 5) 00:07:35.073 5570.560 - 5595.766: 0.1660% ( 9) 00:07:35.073 5595.766 - 5620.972: 0.2862% ( 21) 00:07:35.073 5620.972 - 5646.178: 0.4750% ( 33) 00:07:35.073 5646.178 - 5671.385: 0.6925% ( 38) 00:07:35.073 5671.385 - 5696.591: 1.3278% ( 111) 00:07:35.073 5696.591 - 5721.797: 2.0948% ( 134) 00:07:35.073 5721.797 - 5747.003: 3.2795% ( 207) 00:07:35.073 5747.003 - 5772.209: 4.8420% ( 273) 00:07:35.073 5772.209 - 5797.415: 5.8207% ( 171) 00:07:35.073 5797.415 - 5822.622: 6.8853% ( 186) 00:07:35.073 5822.622 - 5847.828: 7.9270% ( 182) 00:07:35.073 5847.828 - 5873.034: 8.8255% ( 157) 00:07:35.073 5873.034 - 5898.240: 10.1877% ( 238) 00:07:35.073 5898.240 - 5923.446: 11.2924% ( 193) 00:07:35.073 5923.446 - 5948.652: 12.0021% ( 124) 00:07:35.073 5948.652 - 5973.858: 13.3757% ( 240) 00:07:35.073 5973.858 - 5999.065: 15.0927% ( 300) 00:07:35.073 5999.065 - 6024.271: 16.1115% ( 178) 00:07:35.073 6024.271 - 6049.477: 17.7541% ( 287) 00:07:35.073 6049.477 - 6074.683: 19.2537% ( 262) 00:07:35.073 6074.683 - 6099.889: 21.0394% ( 312) 00:07:35.073 6099.889 - 6125.095: 22.5217% ( 259) 00:07:35.073 6125.095 - 6150.302: 24.4734% ( 341) 00:07:35.073 6150.302 - 6175.508: 26.5625% ( 365) 00:07:35.073 6175.508 - 6200.714: 28.6115% ( 358) 00:07:35.073 6200.714 - 6225.920: 30.7234% ( 369) 00:07:35.073 6225.920 - 6251.126: 33.3562% ( 460) 00:07:35.073 6251.126 - 6276.332: 34.5696% ( 212) 00:07:35.073 6276.332 - 6301.538: 36.2580% ( 295) 00:07:35.073 6301.538 - 6326.745: 37.7232% ( 256) 00:07:35.073 6326.745 - 6351.951: 39.3544% ( 285) 00:07:35.073 6351.951 - 6377.157: 40.7795% ( 249) 00:07:35.073 6377.157 - 6402.363: 43.1261% ( 410) 00:07:35.073 6402.363 - 6427.569: 45.3583% ( 390) 00:07:35.073 6427.569 - 6452.775: 47.4588% ( 367) 00:07:35.073 6452.775 - 6503.188: 52.3523% ( 855) 00:07:35.073 6503.188 - 6553.600: 55.9753% ( 633) 00:07:35.073 6553.600 - 6604.012: 58.9457% ( 519) 00:07:35.073 6604.012 - 6654.425: 61.1550% ( 386) 00:07:35.073 6654.425 - 6704.837: 63.2040% ( 358) 00:07:35.073 6704.837 - 6755.249: 65.2186% ( 352) 00:07:35.073 6755.249 - 6805.662: 66.5465% ( 232) 00:07:35.073 6805.662 - 6856.074: 67.8400% ( 226) 00:07:35.073 6856.074 - 6906.486: 69.0419% ( 210) 00:07:35.073 6906.486 - 6956.898: 69.9748% ( 163) 00:07:35.073 6956.898 - 7007.311: 70.8620% ( 155) 00:07:35.073 7007.311 - 7057.723: 71.4915% ( 110) 00:07:35.073 7057.723 - 7108.135: 71.9780% ( 85) 00:07:35.073 7108.135 - 7158.548: 72.8423% ( 151) 00:07:35.073 7158.548 - 7208.960: 73.5119% ( 117) 00:07:35.073 7208.960 - 7259.372: 74.5936% ( 189) 00:07:35.073 7259.372 - 7309.785: 75.2576% ( 116) 00:07:35.073 7309.785 - 7360.197: 75.9043% ( 113) 00:07:35.073 7360.197 - 7410.609: 76.4652% ( 98) 00:07:35.073 7410.609 - 7461.022: 76.9746% ( 89) 00:07:35.073 7461.022 - 7511.434: 77.4668% ( 86) 00:07:35.073 7511.434 - 7561.846: 77.9361% ( 82) 00:07:35.073 7561.846 - 7612.258: 78.3997% ( 81) 00:07:35.073 7612.258 - 7662.671: 78.7717% ( 65) 00:07:35.073 7662.671 - 7713.083: 79.2983% ( 92) 00:07:35.073 7713.083 - 7763.495: 79.5272% ( 40) 00:07:35.073 7763.495 - 7813.908: 79.6646% ( 24) 00:07:35.073 7813.908 - 7864.320: 79.8535% ( 33) 00:07:35.073 7864.320 - 7914.732: 80.0481% ( 34) 00:07:35.073 7914.732 - 7965.145: 80.2026% ( 27) 00:07:35.073 7965.145 - 8015.557: 80.3858% ( 32) 00:07:35.073 8015.557 - 8065.969: 80.4888% ( 18) 00:07:35.073 8065.969 - 8116.382: 80.6090% ( 21) 00:07:35.073 8116.382 - 8166.794: 80.7406% ( 
23) 00:07:35.073 8166.794 - 8217.206: 80.8837% ( 25) 00:07:35.073 8217.206 - 8267.618: 81.1699% ( 50) 00:07:35.073 8267.618 - 8318.031: 81.3874% ( 38) 00:07:35.073 8318.031 - 8368.443: 81.4618% ( 13) 00:07:35.073 8368.443 - 8418.855: 81.5762% ( 20) 00:07:35.073 8418.855 - 8469.268: 81.7079% ( 23) 00:07:35.073 8469.268 - 8519.680: 81.9311% ( 39) 00:07:35.073 8519.680 - 8570.092: 82.0685% ( 24) 00:07:35.073 8570.092 - 8620.505: 82.2001% ( 23) 00:07:35.073 8620.505 - 8670.917: 82.3432% ( 25) 00:07:35.073 8670.917 - 8721.329: 82.6293% ( 50) 00:07:35.073 8721.329 - 8771.742: 82.8068% ( 31) 00:07:35.073 8771.742 - 8822.154: 83.0357% ( 40) 00:07:35.073 8822.154 - 8872.566: 83.3276% ( 51) 00:07:35.073 8872.566 - 8922.978: 83.5337% ( 36) 00:07:35.073 8922.978 - 8973.391: 83.7626% ( 40) 00:07:35.073 8973.391 - 9023.803: 84.1690% ( 71) 00:07:35.073 9023.803 - 9074.215: 84.5353% ( 64) 00:07:35.073 9074.215 - 9124.628: 84.9473% ( 72) 00:07:35.073 9124.628 - 9175.040: 85.4968% ( 96) 00:07:35.073 9175.040 - 9225.452: 85.9318% ( 76) 00:07:35.073 9225.452 - 9275.865: 86.3381% ( 71) 00:07:35.073 9275.865 - 9326.277: 86.8990% ( 98) 00:07:35.073 9326.277 - 9376.689: 87.2997% ( 70) 00:07:35.073 9376.689 - 9427.102: 87.6889% ( 68) 00:07:35.073 9427.102 - 9477.514: 88.2326% ( 95) 00:07:35.073 9477.514 - 9527.926: 88.6332% ( 70) 00:07:35.073 9527.926 - 9578.338: 89.1312% ( 87) 00:07:35.073 9578.338 - 9628.751: 89.4174% ( 50) 00:07:35.073 9628.751 - 9679.163: 89.6348% ( 38) 00:07:35.073 9679.163 - 9729.575: 89.8294% ( 34) 00:07:35.073 9729.575 - 9779.988: 90.0126% ( 32) 00:07:35.073 9779.988 - 9830.400: 90.1614% ( 26) 00:07:35.073 9830.400 - 9880.812: 90.2930% ( 23) 00:07:35.073 9880.812 - 9931.225: 90.4476% ( 27) 00:07:35.073 9931.225 - 9981.637: 90.7051% ( 45) 00:07:35.073 9981.637 - 10032.049: 90.8082% ( 18) 00:07:35.073 10032.049 - 10082.462: 90.8883% ( 14) 00:07:35.073 10082.462 - 10132.874: 90.9512% ( 11) 00:07:35.073 10132.874 - 10183.286: 91.0256% ( 13) 00:07:35.073 10183.286 - 10233.698: 91.1000% ( 13) 00:07:35.073 10233.698 - 10284.111: 91.2031% ( 18) 00:07:35.073 10284.111 - 10334.523: 91.3519% ( 26) 00:07:35.073 10334.523 - 10384.935: 91.4091% ( 10) 00:07:35.073 10384.935 - 10435.348: 91.5751% ( 29) 00:07:35.073 10435.348 - 10485.760: 91.6438% ( 12) 00:07:35.073 10485.760 - 10536.172: 91.7067% ( 11) 00:07:35.073 10536.172 - 10586.585: 91.7640% ( 10) 00:07:35.073 10586.585 - 10636.997: 91.8555% ( 16) 00:07:35.073 10636.997 - 10687.409: 91.9242% ( 12) 00:07:35.073 10687.409 - 10737.822: 91.9872% ( 11) 00:07:35.073 10737.822 - 10788.234: 92.0330% ( 8) 00:07:35.073 10788.234 - 10838.646: 92.0788% ( 8) 00:07:35.073 10838.646 - 10889.058: 92.1245% ( 8) 00:07:35.073 10889.058 - 10939.471: 92.1989% ( 13) 00:07:35.073 10939.471 - 10989.883: 92.2676% ( 12) 00:07:35.073 10989.883 - 11040.295: 92.3363% ( 12) 00:07:35.073 11040.295 - 11090.708: 92.4336% ( 17) 00:07:35.073 11090.708 - 11141.120: 92.6625% ( 40) 00:07:35.073 11141.120 - 11191.532: 92.7427% ( 14) 00:07:35.073 11191.532 - 11241.945: 92.9258% ( 32) 00:07:35.073 11241.945 - 11292.357: 93.1147% ( 33) 00:07:35.073 11292.357 - 11342.769: 93.2521% ( 24) 00:07:35.073 11342.769 - 11393.182: 93.5497% ( 52) 00:07:35.073 11393.182 - 11443.594: 93.6813% ( 23) 00:07:35.073 11443.594 - 11494.006: 93.7672% ( 15) 00:07:35.073 11494.006 - 11544.418: 93.8473% ( 14) 00:07:35.073 11544.418 - 11594.831: 93.9618% ( 20) 00:07:35.073 11594.831 - 11645.243: 94.0591% ( 17) 00:07:35.073 11645.243 - 11695.655: 94.1106% ( 9) 00:07:35.073 11695.655 - 11746.068: 94.1850% ( 13) 
00:07:35.073 11746.068 - 11796.480: 94.2422% ( 10) 00:07:35.073 11796.480 - 11846.892: 94.2823% ( 7) 00:07:35.073 11846.892 - 11897.305: 94.3338% ( 9) 00:07:35.073 11897.305 - 11947.717: 94.3910% ( 10) 00:07:35.073 11947.717 - 11998.129: 94.4425% ( 9) 00:07:35.073 11998.129 - 12048.542: 94.4940% ( 9) 00:07:35.073 12048.542 - 12098.954: 94.5627% ( 12) 00:07:35.073 12098.954 - 12149.366: 94.6658% ( 18) 00:07:35.073 12149.366 - 12199.778: 94.7688% ( 18) 00:07:35.073 12199.778 - 12250.191: 94.8661% ( 17) 00:07:35.073 12250.191 - 12300.603: 94.9462% ( 14) 00:07:35.073 12300.603 - 12351.015: 95.0607% ( 20) 00:07:35.073 12351.015 - 12401.428: 95.1465% ( 15) 00:07:35.073 12401.428 - 12451.840: 95.2324% ( 15) 00:07:35.073 12451.840 - 12502.252: 95.3239% ( 16) 00:07:35.073 12502.252 - 12552.665: 95.4155% ( 16) 00:07:35.073 12552.665 - 12603.077: 95.4785% ( 11) 00:07:35.073 12603.077 - 12653.489: 95.5414% ( 11) 00:07:35.073 12653.489 - 12703.902: 95.6387% ( 17) 00:07:35.073 12703.902 - 12754.314: 95.7246% ( 15) 00:07:35.073 12754.314 - 12804.726: 95.8505% ( 22) 00:07:35.073 12804.726 - 12855.138: 95.9764% ( 22) 00:07:35.073 12855.138 - 12905.551: 96.0737% ( 17) 00:07:35.073 12905.551 - 13006.375: 96.2168% ( 25) 00:07:35.073 13006.375 - 13107.200: 96.4228% ( 36) 00:07:35.073 13107.200 - 13208.025: 96.8006% ( 66) 00:07:35.073 13208.025 - 13308.849: 97.3100% ( 89) 00:07:35.073 13308.849 - 13409.674: 97.6305% ( 56) 00:07:35.073 13409.674 - 13510.498: 98.0769% ( 78) 00:07:35.073 13510.498 - 13611.323: 98.2944% ( 38) 00:07:35.073 13611.323 - 13712.148: 98.4489% ( 27) 00:07:35.074 13712.148 - 13812.972: 98.5920% ( 25) 00:07:35.074 13812.972 - 13913.797: 98.7122% ( 21) 00:07:35.074 13913.797 - 14014.622: 98.8267% ( 20) 00:07:35.074 14014.622 - 14115.446: 98.8610% ( 6) 00:07:35.074 14115.446 - 14216.271: 98.8897% ( 5) 00:07:35.074 14216.271 - 14317.095: 98.9412% ( 9) 00:07:35.074 14317.095 - 14417.920: 98.9927% ( 9) 00:07:35.074 14417.920 - 14518.745: 99.0556% ( 11) 00:07:35.074 14518.745 - 14619.569: 99.1529% ( 17) 00:07:35.074 14619.569 - 14720.394: 99.2159% ( 11) 00:07:35.074 14720.394 - 14821.218: 99.2674% ( 9) 00:07:35.074 14821.218 - 14922.043: 99.3361% ( 12) 00:07:35.074 14922.043 - 15022.868: 99.4048% ( 12) 00:07:35.074 15022.868 - 15123.692: 99.4734% ( 12) 00:07:35.074 15123.692 - 15224.517: 99.5021% ( 5) 00:07:35.074 15224.517 - 15325.342: 99.5707% ( 12) 00:07:35.074 15325.342 - 15426.166: 99.6108% ( 7) 00:07:35.074 15426.166 - 15526.991: 99.6337% ( 4) 00:07:35.074 16131.938 - 16232.763: 99.6566% ( 4) 00:07:35.074 16232.763 - 16333.588: 99.6795% ( 4) 00:07:35.074 16333.588 - 16434.412: 99.7024% ( 4) 00:07:35.074 16434.412 - 16535.237: 99.7253% ( 4) 00:07:35.074 16535.237 - 16636.062: 99.7482% ( 4) 00:07:35.074 16636.062 - 16736.886: 99.7768% ( 5) 00:07:35.074 16736.886 - 16837.711: 99.7997% ( 4) 00:07:35.074 16837.711 - 16938.535: 99.8226% ( 4) 00:07:35.074 16938.535 - 17039.360: 99.8455% ( 4) 00:07:35.074 17039.360 - 17140.185: 99.8741% ( 5) 00:07:35.074 17140.185 - 17241.009: 99.8970% ( 4) 00:07:35.074 17241.009 - 17341.834: 99.9141% ( 3) 00:07:35.074 17341.834 - 17442.658: 99.9370% ( 4) 00:07:35.074 17442.658 - 17543.483: 99.9657% ( 5) 00:07:35.074 17543.483 - 17644.308: 99.9886% ( 4) 00:07:35.074 17644.308 - 17745.132: 100.0000% ( 2) 00:07:35.074 00:07:35.074 23:28:22 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:07:35.074 00:07:35.074 real 0m2.509s 00:07:35.074 user 0m2.205s 00:07:35.074 sys 0m0.205s 00:07:35.074 23:28:22 nvme.nvme_perf -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:07:35.074 23:28:22 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:07:35.074 ************************************ 00:07:35.074 END TEST nvme_perf 00:07:35.074 ************************************ 00:07:35.074 23:28:22 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:07:35.074 23:28:22 nvme -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:07:35.074 23:28:22 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:35.074 23:28:22 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:35.074 ************************************ 00:07:35.074 START TEST nvme_hello_world 00:07:35.074 ************************************ 00:07:35.074 23:28:22 nvme.nvme_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:07:35.074 Initializing NVMe Controllers 00:07:35.074 Attached to 0000:00:10.0 00:07:35.074 Namespace ID: 1 size: 6GB 00:07:35.074 Attached to 0000:00:11.0 00:07:35.074 Namespace ID: 1 size: 5GB 00:07:35.074 Attached to 0000:00:13.0 00:07:35.074 Namespace ID: 1 size: 1GB 00:07:35.074 Attached to 0000:00:12.0 00:07:35.074 Namespace ID: 1 size: 4GB 00:07:35.074 Namespace ID: 2 size: 4GB 00:07:35.074 Namespace ID: 3 size: 4GB 00:07:35.074 Initialization complete. 00:07:35.074 INFO: using host memory buffer for IO 00:07:35.074 Hello world! 00:07:35.074 INFO: using host memory buffer for IO 00:07:35.074 Hello world! 00:07:35.074 INFO: using host memory buffer for IO 00:07:35.074 Hello world! 00:07:35.074 INFO: using host memory buffer for IO 00:07:35.074 Hello world! 00:07:35.074 INFO: using host memory buffer for IO 00:07:35.074 Hello world! 00:07:35.074 INFO: using host memory buffer for IO 00:07:35.074 Hello world! 
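Each latency histogram printed by nvme_perf above uses one bucket per line in the form "<low us> - <high us>: <cumulative %> ( <IO count> )", where the percentage is the cumulative share of IOs completed at or below the bucket's upper bound. A minimal sketch of pulling a percentile out of a saved copy of this output; the helper and file name are illustrative, not part of the SPDK tooling:

import re

# One bucket line, e.g. "12451.840 - 12502.252: 95.3470% ( 3)"
BUCKET = re.compile(r"([\d.]+)\s*-\s*([\d.]+):\s*([\d.]+)%\s*\(\s*(\d+)\)")

def percentile_us(lines, pct):
    """Return the upper bound (in us) of the first bucket whose
    cumulative percentage reaches pct, or None if never reached."""
    for line in lines:
        m = BUCKET.search(line)
        if m and float(m.group(3)) >= pct:
            return float(m.group(2))
    return None

# Illustrative usage against a saved log (hypothetical file name):
# with open("nvme_perf.log") as f:
#     print("p99 <=", percentile_us(f, 99.0), "us")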
00:07:35.074
00:07:35.074 real    0m0.213s
00:07:35.074 user    0m0.072s
00:07:35.074 sys     0m0.100s
00:07:35.074 23:28:23 nvme.nvme_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:35.074 23:28:23 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x
00:07:35.074 ************************************
00:07:35.074 END TEST nvme_hello_world
00:07:35.074 ************************************
00:07:35.074 23:28:23 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:07:35.074 23:28:23 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:07:35.074 23:28:23 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:35.074 23:28:23 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:35.074 ************************************
00:07:35.074 START TEST nvme_sgl
00:07:35.074 ************************************
00:07:35.074 23:28:23 nvme.nvme_sgl -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:07:35.333 0000:00:10.0: build_io_request_0 Invalid IO length parameter
00:07:35.333 0000:00:10.0: build_io_request_1 Invalid IO length parameter
00:07:35.334 0000:00:10.0: build_io_request_3 Invalid IO length parameter
00:07:35.334 0000:00:10.0: build_io_request_8 Invalid IO length parameter
00:07:35.334 0000:00:10.0: build_io_request_9 Invalid IO length parameter
00:07:35.334 0000:00:10.0: build_io_request_11 Invalid IO length parameter
00:07:35.334 0000:00:11.0: build_io_request_0 Invalid IO length parameter
00:07:35.334 0000:00:11.0: build_io_request_1 Invalid IO length parameter
00:07:35.334 0000:00:11.0: build_io_request_3 Invalid IO length parameter
00:07:35.334 0000:00:11.0: build_io_request_8 Invalid IO length parameter
00:07:35.334 0000:00:11.0: build_io_request_9 Invalid IO length parameter
00:07:35.334 0000:00:11.0: build_io_request_11 Invalid IO length parameter
00:07:35.334 0000:00:13.0: build_io_request_0 Invalid IO length parameter
00:07:35.334 0000:00:13.0: build_io_request_1 Invalid IO length parameter
00:07:35.334 0000:00:13.0: build_io_request_2 Invalid IO length parameter
00:07:35.334 0000:00:13.0: build_io_request_3 Invalid IO length parameter
00:07:35.334 0000:00:13.0: build_io_request_4 Invalid IO length parameter
00:07:35.334 0000:00:13.0: build_io_request_5 Invalid IO length parameter
00:07:35.334 0000:00:13.0: build_io_request_6 Invalid IO length parameter
00:07:35.334 0000:00:13.0: build_io_request_7 Invalid IO length parameter
00:07:35.334 0000:00:13.0: build_io_request_8 Invalid IO length parameter
00:07:35.334 0000:00:13.0: build_io_request_9 Invalid IO length parameter
00:07:35.334 0000:00:13.0: build_io_request_10 Invalid IO length parameter
00:07:35.334 0000:00:13.0: build_io_request_11 Invalid IO length parameter
00:07:35.334 0000:00:12.0: build_io_request_0 Invalid IO length parameter
00:07:35.334 0000:00:12.0: build_io_request_1 Invalid IO length parameter
00:07:35.334 0000:00:12.0: build_io_request_2 Invalid IO length parameter
00:07:35.334 0000:00:12.0: build_io_request_3 Invalid IO length parameter
00:07:35.334 0000:00:12.0: build_io_request_4 Invalid IO length parameter
00:07:35.334 0000:00:12.0: build_io_request_5 Invalid IO length parameter
00:07:35.334 0000:00:12.0: build_io_request_6 Invalid IO length parameter
00:07:35.334 0000:00:12.0: build_io_request_7 Invalid IO length parameter
00:07:35.334 0000:00:12.0: build_io_request_8 Invalid IO length parameter
00:07:35.334 0000:00:12.0: build_io_request_9 Invalid IO length parameter
00:07:35.334 0000:00:12.0: build_io_request_10 Invalid IO length parameter
00:07:35.334 0000:00:12.0: build_io_request_11 Invalid IO length parameter
00:07:35.334 NVMe Readv/Writev Request test
00:07:35.334 Attached to 0000:00:10.0
00:07:35.334 Attached to 0000:00:11.0
00:07:35.334 Attached to 0000:00:13.0
00:07:35.334 Attached to 0000:00:12.0
00:07:35.334 0000:00:10.0: build_io_request_2 test passed
00:07:35.334 0000:00:10.0: build_io_request_4 test passed
00:07:35.334 0000:00:10.0: build_io_request_5 test passed
00:07:35.334 0000:00:10.0: build_io_request_6 test passed
00:07:35.334 0000:00:10.0: build_io_request_7 test passed
00:07:35.334 0000:00:10.0: build_io_request_10 test passed
00:07:35.334 0000:00:11.0: build_io_request_2 test passed
00:07:35.334 0000:00:11.0: build_io_request_4 test passed
00:07:35.334 0000:00:11.0: build_io_request_5 test passed
00:07:35.334 0000:00:11.0: build_io_request_6 test passed
00:07:35.334 0000:00:11.0: build_io_request_7 test passed
00:07:35.334 0000:00:11.0: build_io_request_10 test passed
00:07:35.334 Cleaning up...
00:07:35.334
00:07:35.334 real    0m0.266s
00:07:35.334 user    0m0.140s
00:07:35.334 sys     0m0.085s
00:07:35.334 23:28:23 nvme.nvme_sgl -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:35.334 23:28:23 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x
00:07:35.334 ************************************
00:07:35.334 END TEST nvme_sgl
00:07:35.334 ************************************
00:07:35.334 23:28:23 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:07:35.334 23:28:23 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:07:35.334 23:28:23 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:35.334 23:28:23 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:35.334 ************************************
00:07:35.334 START TEST nvme_e2edp
00:07:35.334 ************************************
00:07:35.334 23:28:23 nvme.nvme_e2edp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:07:35.594 NVMe Write/Read with End-to-End data protection test
00:07:35.594 Attached to 0000:00:10.0
00:07:35.594 Attached to 0000:00:11.0
00:07:35.594 Attached to 0000:00:13.0
00:07:35.594 Attached to 0000:00:12.0
00:07:35.594 Cleaning up...
00:07:35.594
00:07:35.594 real    0m0.197s
00:07:35.594 user    0m0.058s
00:07:35.594 sys     0m0.097s
00:07:35.594 23:28:23 nvme.nvme_e2edp -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:35.594 23:28:23 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x
00:07:35.594 ************************************
00:07:35.594 END TEST nvme_e2edp
00:07:35.594 ************************************
00:07:35.594 23:28:23 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:07:35.594 23:28:23 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:07:35.594 23:28:23 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:35.594 23:28:23 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:35.594 ************************************
00:07:35.594 START TEST nvme_reserve
00:07:35.594 ************************************
00:07:35.594 23:28:23 nvme.nvme_reserve -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:07:35.854 =====================================================
00:07:35.854 NVMe Controller at PCI bus 0, device 16, function 0
00:07:35.854 =====================================================
00:07:35.854 Reservations: Not Supported
00:07:35.854 =====================================================
00:07:35.854 NVMe Controller at PCI bus 0, device 17, function 0
00:07:35.854 =====================================================
00:07:35.854 Reservations: Not Supported
00:07:35.854 =====================================================
00:07:35.854 NVMe Controller at PCI bus 0, device 19, function 0
00:07:35.854 =====================================================
00:07:35.854 Reservations: Not Supported
00:07:35.854 =====================================================
00:07:35.854 NVMe Controller at PCI bus 0, device 18, function 0
00:07:35.854 =====================================================
00:07:35.854 Reservations: Not Supported
00:07:35.854 Reservation test passed
00:07:35.854
00:07:35.854 real    0m0.203s
00:07:35.854 user    0m0.062s
00:07:35.854 sys     0m0.099s
00:07:35.854 23:28:23 nvme.nvme_reserve -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:35.854 23:28:23 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x
00:07:35.854 ************************************
00:07:35.854 END TEST nvme_reserve
00:07:35.854 ************************************
00:07:35.854 23:28:23 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:07:35.854 23:28:23 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:07:35.854 23:28:23 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:35.854 23:28:23 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:35.854 ************************************
00:07:35.854 START TEST nvme_err_injection
00:07:35.854 ************************************
00:07:35.854 23:28:23 nvme.nvme_err_injection -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:07:36.115 NVMe Error Injection test
00:07:36.115 Attached to 0000:00:10.0
00:07:36.115 Attached to 0000:00:11.0
00:07:36.115 Attached to 0000:00:13.0
00:07:36.115 Attached to 0000:00:12.0
00:07:36.115 0000:00:12.0: get features failed as expected
00:07:36.115 0000:00:10.0: get features failed as expected
00:07:36.115 0000:00:11.0: get features failed as expected
00:07:36.115 0000:00:13.0: get features failed as expected
00:07:36.115 0000:00:10.0: get features successfully as expected
00:07:36.115 0000:00:11.0: get features successfully as expected
00:07:36.115 0000:00:13.0: get features successfully as expected
00:07:36.115 0000:00:12.0: get features successfully as expected
00:07:36.115 0000:00:10.0: read failed as expected
00:07:36.115 0000:00:11.0: read failed as expected
00:07:36.115 0000:00:13.0: read failed as expected
00:07:36.115 0000:00:12.0: read failed as expected
00:07:36.115 0000:00:10.0: read successfully as expected
00:07:36.115 0000:00:11.0: read successfully as expected
00:07:36.115 0000:00:13.0: read successfully as expected
00:07:36.115 0000:00:12.0: read successfully as expected
00:07:36.115 Cleaning up...
00:07:36.115
00:07:36.115 real    0m0.227s
00:07:36.115 user    0m0.074s
00:07:36.115 sys     0m0.107s
00:07:36.115 23:28:24 nvme.nvme_err_injection -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:36.115 23:28:24 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x
00:07:36.115 ************************************
00:07:36.115 END TEST nvme_err_injection
00:07:36.115 ************************************
00:07:36.115 23:28:24 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:07:36.115 23:28:24 nvme -- common/autotest_common.sh@1101 -- # '[' 9 -le 1 ']'
00:07:36.115 23:28:24 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:36.115 23:28:24 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:36.115 ************************************
00:07:36.115 START TEST nvme_overhead
00:07:36.115 ************************************
00:07:36.115 23:28:24 nvme.nvme_overhead -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:07:37.502 Initializing NVMe Controllers
00:07:37.502 Attached to 0000:00:10.0
00:07:37.502 Attached to 0000:00:11.0
00:07:37.502 Attached to 0000:00:13.0
00:07:37.502 Attached to 0000:00:12.0
00:07:37.502 Initialization complete. Launching workers.
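The overhead run that follows reports per-IO submit and complete latency as avg/min/max in nanoseconds, alongside per-microsecond histograms in the same cumulative bucket format as the nvme_perf output above. As a rough cross-check, a mean can be approximated from histogram buckets by weighting each bucket midpoint by its count; a small sketch with made-up bucket tuples (not values taken from this run):

def approx_mean_us(buckets):
    """Approximate the mean latency from (low_us, high_us, count) buckets
    by weighting each bucket's midpoint by its IO count."""
    total = sum(c for _, _, c in buckets)
    return sum((lo + hi) / 2 * c for lo, hi, c in buckets) / total if total else 0.0

# Illustrative buckets only:
print(approx_mean_us([(9.945, 9.994, 1), (11.028, 11.077, 8), (11.471, 11.520, 20)]))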
00:07:37.503 submit (in ns)   avg, min, max = 11832.4, 9950.0, 332943.1
00:07:37.503 complete (in ns) avg, min, max = 7809.5, 7293.1, 83498.5
00:07:37.503
00:07:37.503 Submit histogram
00:07:37.503 ================
00:07:37.503        Range in us     Cumulative     Count
00:07:37.503 [per-range cumulative counts, 9.945 us - 333.982 us, reaching 100.0000%]
00:07:37.503
00:07:37.503 Complete histogram
00:07:37.503 ==================
00:07:37.504        Range in us     Cumulative     Count
00:07:37.504 [per-range cumulative counts, 7.286 us - 16.246 us, through 99.7665%, truncated]
1) 00:07:37.504 16.542 - 16.640: 99.7925% ( 2) 00:07:37.504 16.640 - 16.738: 99.8054% ( 1) 00:07:37.504 17.329 - 17.428: 99.8184% ( 1) 00:07:37.504 17.526 - 17.625: 99.8314% ( 1) 00:07:37.504 18.511 - 18.609: 99.8444% ( 1) 00:07:37.504 18.905 - 19.003: 99.8573% ( 1) 00:07:37.504 19.003 - 19.102: 99.8703% ( 1) 00:07:37.504 19.692 - 19.791: 99.8833% ( 1) 00:07:37.504 19.889 - 19.988: 99.8962% ( 1) 00:07:37.504 20.086 - 20.185: 99.9092% ( 1) 00:07:37.504 20.283 - 20.382: 99.9222% ( 1) 00:07:37.504 25.206 - 25.403: 99.9351% ( 1) 00:07:37.504 29.538 - 29.735: 99.9481% ( 1) 00:07:37.504 31.705 - 31.902: 99.9611% ( 1) 00:07:37.504 36.234 - 36.431: 99.9741% ( 1) 00:07:37.504 46.868 - 47.065: 99.9870% ( 1) 00:07:37.504 83.495 - 83.889: 100.0000% ( 1) 00:07:37.504 00:07:37.504 00:07:37.504 real 0m1.199s 00:07:37.504 user 0m1.057s 00:07:37.504 sys 0m0.090s 00:07:37.504 23:28:25 nvme.nvme_overhead -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:37.504 ************************************ 00:07:37.504 END TEST nvme_overhead 00:07:37.504 23:28:25 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:07:37.504 ************************************ 00:07:37.504 23:28:25 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:07:37.504 23:28:25 nvme -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:07:37.504 23:28:25 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:37.504 23:28:25 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:37.504 ************************************ 00:07:37.504 START TEST nvme_arbitration 00:07:37.504 ************************************ 00:07:37.504 23:28:25 nvme.nvme_arbitration -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:07:40.803 Initializing NVMe Controllers 00:07:40.803 Attached to 0000:00:10.0 00:07:40.803 Attached to 0000:00:11.0 00:07:40.803 Attached to 0000:00:13.0 00:07:40.803 Attached to 0000:00:12.0 00:07:40.803 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:07:40.803 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:07:40.803 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:07:40.803 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:07:40.803 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:07:40.803 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:07:40.803 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:07:40.803 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:07:40.803 Initialization complete. Launching workers. 
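The per-queue worker lines that follow report throughput twice, as IO/s and as secs/100000 ios; the second figure is simply the fixed I/O budget from the -n 100000 option above divided by the measured rate. A quick check from any shell (the bc call is illustrative, not part of the harness):

    $ echo 'scale=4; 100000 / 896.00' | bc
    111.6071

which rounds to the 111.61 secs/100000 ios printed below for the 896.00 IO/s queues.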
00:07:40.803 Starting thread on core 1 with urgent priority queue 00:07:40.803 Starting thread on core 2 with urgent priority queue 00:07:40.803 Starting thread on core 3 with urgent priority queue 00:07:40.804 Starting thread on core 0 with urgent priority queue 00:07:40.804 QEMU NVMe Ctrl (12340 ) core 0: 896.00 IO/s 111.61 secs/100000 ios 00:07:40.804 QEMU NVMe Ctrl (12342 ) core 0: 896.00 IO/s 111.61 secs/100000 ios 00:07:40.804 QEMU NVMe Ctrl (12341 ) core 1: 853.33 IO/s 117.19 secs/100000 ios 00:07:40.804 QEMU NVMe Ctrl (12342 ) core 1: 853.33 IO/s 117.19 secs/100000 ios 00:07:40.804 QEMU NVMe Ctrl (12343 ) core 2: 938.67 IO/s 106.53 secs/100000 ios 00:07:40.804 QEMU NVMe Ctrl (12342 ) core 3: 917.33 IO/s 109.01 secs/100000 ios 00:07:40.804 ======================================================== 00:07:40.804 00:07:40.804 00:07:40.804 real 0m3.314s 00:07:40.804 user 0m9.279s 00:07:40.804 sys 0m0.104s 00:07:40.804 23:28:28 nvme.nvme_arbitration -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:40.804 ************************************ 00:07:40.804 23:28:28 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:07:40.804 END TEST nvme_arbitration 00:07:40.804 ************************************ 00:07:40.804 23:28:28 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:07:40.804 23:28:28 nvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:40.804 23:28:28 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:40.804 23:28:28 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:40.804 ************************************ 00:07:40.804 START TEST nvme_single_aen 00:07:40.804 ************************************ 00:07:40.804 23:28:28 nvme.nvme_single_aen -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:07:41.079 Asynchronous Event Request test 00:07:41.079 Attached to 0000:00:10.0 00:07:41.079 Attached to 0000:00:11.0 00:07:41.079 Attached to 0000:00:13.0 00:07:41.079 Attached to 0000:00:12.0 00:07:41.079 Reset controller to setup AER completions for this process 00:07:41.079 Registering asynchronous event callbacks... 
00:07:41.079 Getting orig temperature thresholds of all controllers 00:07:41.079 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:41.079 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:41.079 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:41.079 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:41.079 Setting all controllers temperature threshold low to trigger AER 00:07:41.079 Waiting for all controllers temperature threshold to be set lower 00:07:41.079 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:41.079 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:07:41.079 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:41.079 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:07:41.079 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:41.080 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:07:41.080 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:41.080 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:07:41.080 Waiting for all controllers to trigger AER and reset threshold 00:07:41.080 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:41.080 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:41.080 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:41.080 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:41.080 Cleaning up... 00:07:41.080 00:07:41.080 real 0m0.223s 00:07:41.080 user 0m0.065s 00:07:41.080 sys 0m0.108s 00:07:41.080 ************************************ 00:07:41.080 END TEST nvme_single_aen 00:07:41.080 ************************************ 00:07:41.080 23:28:29 nvme.nvme_single_aen -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:41.080 23:28:29 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:07:41.080 23:28:29 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:07:41.080 23:28:29 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:41.080 23:28:29 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:41.080 23:28:29 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:41.080 ************************************ 00:07:41.080 START TEST nvme_doorbell_aers 00:07:41.080 ************************************ 00:07:41.080 23:28:29 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1125 -- # nvme_doorbell_aers 00:07:41.080 23:28:29 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:07:41.080 23:28:29 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:07:41.080 23:28:29 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:07:41.080 23:28:29 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:07:41.080 23:28:29 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1496 -- # bdfs=() 00:07:41.080 23:28:29 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1496 -- # local bdfs 00:07:41.080 23:28:29 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:41.080 23:28:29 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:41.080 23:28:29 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 
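The @70-@73 and @1496-@1497 trace above is the whole controller-discovery mechanism: gen_nvme.sh emits a JSON bdev config for every NVMe device it finds, and jq pulls each PCIe address (traddr) out of it. A standalone approximation, using this workspace's rootdir and the empty-list guard that the (( 4 == 0 )) check just below corresponds to:

    #!/usr/bin/env bash
    # Enumerate NVMe PCIe addresses the way get_nvme_bdfs does in this log.
    rootdir=/home/vagrant/spdk_repo/spdk
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} == 0 )) && { echo 'no NVMe controllers found' >&2; exit 1; }
    printf '%s\n' "${bdfs[@]}"   # here: 0000:00:10.0 through 0000:00:13.0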
00:07:41.080 23:28:29 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:07:41.080 23:28:29 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:07:41.080 23:28:29 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:07:41.081 23:28:29 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:07:41.345 [2024-09-28 23:28:29.357344] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63712) is not found. Dropping the request. 00:07:51.328 Executing: test_write_invalid_db 00:07:51.328 Waiting for AER completion... 00:07:51.328 Failure: test_write_invalid_db 00:07:51.328 00:07:51.328 Executing: test_invalid_db_write_overflow_sq 00:07:51.328 Waiting for AER completion... 00:07:51.328 Failure: test_invalid_db_write_overflow_sq 00:07:51.328 00:07:51.328 Executing: test_invalid_db_write_overflow_cq 00:07:51.328 Waiting for AER completion... 00:07:51.328 Failure: test_invalid_db_write_overflow_cq 00:07:51.328 00:07:51.328 23:28:39 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:07:51.328 23:28:39 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:07:51.328 [2024-09-28 23:28:39.428712] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63712) is not found. Dropping the request. 00:08:01.312 Executing: test_write_invalid_db 00:08:01.312 Waiting for AER completion... 00:08:01.312 Failure: test_write_invalid_db 00:08:01.312 00:08:01.312 Executing: test_invalid_db_write_overflow_sq 00:08:01.312 Waiting for AER completion... 00:08:01.312 Failure: test_invalid_db_write_overflow_sq 00:08:01.312 00:08:01.312 Executing: test_invalid_db_write_overflow_cq 00:08:01.312 Waiting for AER completion... 00:08:01.312 Failure: test_invalid_db_write_overflow_cq 00:08:01.312 00:08:01.312 23:28:49 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:01.312 23:28:49 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:08:01.312 [2024-09-28 23:28:49.450895] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63712) is not found. Dropping the request. 00:08:11.290 Executing: test_write_invalid_db 00:08:11.290 Waiting for AER completion... 00:08:11.290 Failure: test_write_invalid_db 00:08:11.290 00:08:11.290 Executing: test_invalid_db_write_overflow_sq 00:08:11.290 Waiting for AER completion... 00:08:11.290 Failure: test_invalid_db_write_overflow_sq 00:08:11.290 00:08:11.290 Executing: test_invalid_db_write_overflow_cq 00:08:11.290 Waiting for AER completion... 
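Each Executing/Waiting/Failure block above is one controller's pass through doorbell_aers, driven by the loop in the nvme.sh@72/@73 trace: every discovered BDF gets its own ten-second run, and --preserve-status makes timeout report the test's own exit status instead of its usual 124. Reconstructed from the trace (rootdir and bdfs as in the enumeration sketch earlier):

    for bdf in "${bdfs[@]}"; do
      # one bounded run per controller, so a hung test cannot stall the suite
      timeout --preserve-status 10 \
        "$rootdir/test/nvme/doorbell_aers/doorbell_aers" -r "trtype:PCIe traddr:$bdf"
    done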
00:08:11.290 Failure: test_invalid_db_write_overflow_cq
00:08:11.290
00:08:11.290 23:28:59 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}"
23:28:59 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0'
00:08:11.597 [2024-09-28 23:28:59.506008] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63712) is not found. Dropping the request.
00:08:21.606 Executing: test_write_invalid_db
00:08:21.606 Waiting for AER completion...
00:08:21.606 Failure: test_write_invalid_db
00:08:21.606
00:08:21.606 Executing: test_invalid_db_write_overflow_sq
00:08:21.606 Waiting for AER completion...
00:08:21.606 Failure: test_invalid_db_write_overflow_sq
00:08:21.606
00:08:21.606 Executing: test_invalid_db_write_overflow_cq
00:08:21.606 Waiting for AER completion...
00:08:21.606 Failure: test_invalid_db_write_overflow_cq
00:08:21.606
00:08:21.606
00:08:21.606 real 0m40.208s
00:08:21.606 user 0m34.106s
00:08:21.606 sys 0m5.713s
00:08:21.606 23:29:09 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:21.606 23:29:09 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x
00:08:21.606 ************************************
00:08:21.606 END TEST nvme_doorbell_aers
00:08:21.606 ************************************
00:08:21.606 23:29:09 nvme -- nvme/nvme.sh@97 -- # uname
00:08:21.606 23:29:09 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']'
00:08:21.606 23:29:09 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0
00:08:21.606 23:29:09 nvme -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']'
00:08:21.606 23:29:09 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:21.606 23:29:09 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:21.606 ************************************
00:08:21.606 START TEST nvme_multi_aen
00:08:21.606 ************************************
00:08:21.606 23:29:09 nvme.nvme_multi_aen -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0
00:08:21.606 [2024-09-28 23:29:09.547385] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63712) is not found. Dropping the request.
00:08:21.606 [2024-09-28 23:29:09.547442] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63712) is not found. Dropping the request.
00:08:21.606 [2024-09-28 23:29:09.547451] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63712) is not found. Dropping the request.
00:08:21.606 [2024-09-28 23:29:09.549266] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63712) is not found. Dropping the request.
00:08:21.606 [2024-09-28 23:29:09.549305] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63712) is not found. Dropping the request.
00:08:21.606 [2024-09-28 23:29:09.549314] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63712) is not found. Dropping the request.
00:08:21.606 [2024-09-28 23:29:09.550476] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63712) is not found. Dropping the request.
00:08:21.606 [2024-09-28 23:29:09.550504] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63712) is not found. Dropping the request.
00:08:21.606 [2024-09-28 23:29:09.550521] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63712) is not found. Dropping the request.
00:08:21.606 [2024-09-28 23:29:09.551674] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63712) is not found. Dropping the request.
00:08:21.606 [2024-09-28 23:29:09.551701] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63712) is not found. Dropping the request.
00:08:21.606 [2024-09-28 23:29:09.551708] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63712) is not found. Dropping the request.
00:08:21.606 Child process pid: 64238
00:08:21.606 [Child] Asynchronous Event Request test
00:08:21.606 [Child] Attached to 0000:00:10.0
00:08:21.607 [Child] Attached to 0000:00:11.0
00:08:21.607 [Child] Attached to 0000:00:13.0
00:08:21.607 [Child] Attached to 0000:00:12.0
00:08:21.607 [Child] Registering asynchronous event callbacks...
00:08:21.607 [Child] Getting orig temperature thresholds of all controllers
00:08:21.607 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:08:21.607 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:08:21.607 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:08:21.607 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:08:21.607 [Child] Waiting for all controllers to trigger AER and reset threshold
00:08:21.607 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:08:21.607 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:08:21.607 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:08:21.607 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:08:21.607 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius)
00:08:21.607 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius)
00:08:21.607 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius)
00:08:21.607 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius)
00:08:21.607 [Child] Cleaning up...
00:08:21.864 Asynchronous Event Request test
00:08:21.864 Attached to 0000:00:10.0
00:08:21.864 Attached to 0000:00:11.0
00:08:21.864 Attached to 0000:00:13.0
00:08:21.864 Attached to 0000:00:12.0
00:08:21.864 Reset controller to setup AER completions for this process
00:08:21.864 Registering asynchronous event callbacks...
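Both the [Child] block above and the parent pass that resumes below exercise the same mechanism: read each controller's temperature threshold (feature 0x04), set it below the current reading so the controller posts an Asynchronous Event for log page 2 (aen_event_type 0x01, info 0x01), then restore the threshold inside aer_cb. Outside this harness the same dance can be approximated with nvme-cli; the device path and values here are illustrative, not taken from the log:

    $ nvme get-feature /dev/nvme0 -f 0x04             # original threshold, 343 K (70 C) above
    $ nvme set-feature /dev/nvme0 -f 0x04 -v 300      # below the 323 K reading: provokes the AER
    $ nvme get-log /dev/nvme0 --log-id=2 --log-len=512   # SMART/health page that aer_cb fetches
    $ nvme set-feature /dev/nvme0 -f 0x04 -v 343      # restore, as aer_cb does in the log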
00:08:21.864 Getting orig temperature thresholds of all controllers 00:08:21.864 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:21.864 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:21.864 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:21.864 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:21.864 Setting all controllers temperature threshold low to trigger AER 00:08:21.864 Waiting for all controllers temperature threshold to be set lower 00:08:21.864 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:21.864 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:08:21.864 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:21.864 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:08:21.864 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:21.864 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:08:21.864 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:21.864 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:08:21.864 Waiting for all controllers to trigger AER and reset threshold 00:08:21.864 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:21.864 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:21.864 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:21.864 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:21.864 Cleaning up... 00:08:21.864 00:08:21.864 real 0m0.419s 00:08:21.864 user 0m0.116s 00:08:21.864 sys 0m0.196s 00:08:21.864 23:29:09 nvme.nvme_multi_aen -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:21.864 ************************************ 00:08:21.864 END TEST nvme_multi_aen 00:08:21.864 ************************************ 00:08:21.864 23:29:09 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:08:21.864 23:29:09 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:08:21.864 23:29:09 nvme -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:08:21.864 23:29:09 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:21.864 23:29:09 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:21.865 ************************************ 00:08:21.865 START TEST nvme_startup 00:08:21.865 ************************************ 00:08:21.865 23:29:09 nvme.nvme_startup -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:08:21.865 Initializing NVMe Controllers 00:08:21.865 Attached to 0000:00:10.0 00:08:21.865 Attached to 0000:00:11.0 00:08:21.865 Attached to 0000:00:13.0 00:08:21.865 Attached to 0000:00:12.0 00:08:21.865 Initialization complete. 00:08:21.865 Time used:132336.750 (us). 
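The starred START TEST/END TEST banners and the real/user/sys triple that bracket every test in this log come from the run_test helper in autotest_common.sh. Its real body is not shown here; a minimal sketch of the observable behavior would be:

    run_test() {
      local name=$1; shift
      echo '************************************'
      echo "START TEST $name"
      echo '************************************'
      time "$@"          # the real/user/sys lines after each test come from this
      local rc=$?
      echo '************************************'
      echo "END TEST $name"
      echo '************************************'
      return $rc
    }
    # e.g. run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000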
00:08:21.865 00:08:21.865 real 0m0.197s 00:08:21.865 user 0m0.058s 00:08:21.865 sys 0m0.097s 00:08:21.865 23:29:10 nvme.nvme_startup -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:21.865 23:29:10 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:08:21.865 ************************************ 00:08:21.865 END TEST nvme_startup 00:08:21.865 ************************************ 00:08:22.122 23:29:10 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:08:22.122 23:29:10 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:22.122 23:29:10 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:22.122 23:29:10 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:22.122 ************************************ 00:08:22.122 START TEST nvme_multi_secondary 00:08:22.122 ************************************ 00:08:22.122 23:29:10 nvme.nvme_multi_secondary -- common/autotest_common.sh@1125 -- # nvme_multi_secondary 00:08:22.122 23:29:10 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=64283 00:08:22.122 23:29:10 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:08:22.122 23:29:10 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=64284 00:08:22.122 23:29:10 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:08:22.122 23:29:10 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:08:25.406 Initializing NVMe Controllers 00:08:25.406 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:25.406 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:25.406 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:25.406 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:25.406 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:08:25.406 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:08:25.406 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:08:25.406 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:08:25.406 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:08:25.406 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:08:25.406 Initialization complete. Launching workers. 
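nvme_multi_secondary runs three spdk_nvme_perf instances against the same controllers at once. The shared -i 0 shm id is what lets the later processes attach as secondaries to the first instance's state, while disjoint core masks (0x1, 0x2, 0x4) keep them on separate cores. Stripped to its shape, following the @51-@55 trace above (the matching wait calls appear after the result tables below):

    perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
    "$perf" -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 & pid0=$!   # primary, runs longest
    "$perf" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 & pid1=$!   # secondary on core 1
    "$perf" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4             # secondary on core 2, foreground
    wait $pid0
    wait $pid1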
00:08:25.406 ======================================================== 00:08:25.406 Latency(us) 00:08:25.406 Device Information : IOPS MiB/s Average min max 00:08:25.406 PCIE (0000:00:10.0) NSID 1 from core 1: 7631.74 29.81 2095.21 785.02 6383.53 00:08:25.406 PCIE (0000:00:11.0) NSID 1 from core 1: 7631.74 29.81 2096.19 786.99 6215.61 00:08:25.406 PCIE (0000:00:13.0) NSID 1 from core 1: 7631.74 29.81 2096.16 796.84 5579.92 00:08:25.406 PCIE (0000:00:12.0) NSID 1 from core 1: 7631.74 29.81 2096.13 785.12 5414.23 00:08:25.406 PCIE (0000:00:12.0) NSID 2 from core 1: 7631.74 29.81 2096.11 800.14 5552.28 00:08:25.406 PCIE (0000:00:12.0) NSID 3 from core 1: 7631.74 29.81 2096.15 787.47 6259.47 00:08:25.406 ======================================================== 00:08:25.406 Total : 45790.47 178.87 2095.99 785.02 6383.53 00:08:25.406 00:08:25.406 Initializing NVMe Controllers 00:08:25.406 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:25.406 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:25.406 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:25.406 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:25.406 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:08:25.406 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:08:25.406 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:08:25.406 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:08:25.406 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:08:25.406 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:08:25.406 Initialization complete. Launching workers. 00:08:25.406 ======================================================== 00:08:25.406 Latency(us) 00:08:25.407 Device Information : IOPS MiB/s Average min max 00:08:25.407 PCIE (0000:00:10.0) NSID 1 from core 2: 2996.90 11.71 5337.20 1263.10 13579.81 00:08:25.407 PCIE (0000:00:11.0) NSID 1 from core 2: 2996.90 11.71 5338.31 1376.16 14075.46 00:08:25.407 PCIE (0000:00:13.0) NSID 1 from core 2: 2996.90 11.71 5338.26 1333.42 13061.71 00:08:25.407 PCIE (0000:00:12.0) NSID 1 from core 2: 2996.90 11.71 5337.78 1333.98 13675.61 00:08:25.407 PCIE (0000:00:12.0) NSID 2 from core 2: 2996.90 11.71 5337.70 1080.07 13813.71 00:08:25.407 PCIE (0000:00:12.0) NSID 3 from core 2: 2996.90 11.71 5337.40 1044.05 13760.63 00:08:25.407 ======================================================== 00:08:25.407 Total : 17981.38 70.24 5337.78 1044.05 14075.46 00:08:25.407 00:08:25.407 23:29:13 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 64283 00:08:27.377 Initializing NVMe Controllers 00:08:27.377 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:27.377 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:27.377 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:27.377 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:27.377 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:27.377 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:27.377 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:27.377 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:27.377 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:27.377 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:27.377 Initialization complete. Launching workers. 
00:08:27.377 ======================================================== 00:08:27.377 Latency(us) 00:08:27.377 Device Information : IOPS MiB/s Average min max 00:08:27.377 PCIE (0000:00:10.0) NSID 1 from core 0: 10152.23 39.66 1574.83 772.39 7552.95 00:08:27.377 PCIE (0000:00:11.0) NSID 1 from core 0: 10152.23 39.66 1575.62 796.20 8459.50 00:08:27.377 PCIE (0000:00:13.0) NSID 1 from core 0: 10152.23 39.66 1575.60 723.20 7814.66 00:08:27.377 PCIE (0000:00:12.0) NSID 1 from core 0: 10152.23 39.66 1575.57 700.34 7143.39 00:08:27.377 PCIE (0000:00:12.0) NSID 2 from core 0: 10152.23 39.66 1575.54 699.18 7060.39 00:08:27.377 PCIE (0000:00:12.0) NSID 3 from core 0: 10152.23 39.66 1575.52 675.75 7235.87 00:08:27.377 ======================================================== 00:08:27.377 Total : 60913.39 237.94 1575.45 675.75 8459.50 00:08:27.377 00:08:27.377 23:29:15 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 64284 00:08:27.377 23:29:15 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=64363 00:08:27.377 23:29:15 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:08:27.377 23:29:15 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=64364 00:08:27.377 23:29:15 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:08:27.377 23:29:15 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:08:30.663 Initializing NVMe Controllers 00:08:30.663 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:30.663 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:30.663 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:30.663 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:30.663 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:30.663 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:30.663 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:30.663 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:30.663 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:30.663 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:30.663 Initialization complete. Launching workers. 
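In every Device Information table in this section the MiB/s column is just IOPS scaled by the fixed 4096-byte I/O size: MiB/s = IOPS x 4096 / 2^20, i.e. IOPS / 256. Checking one row from the core 1 table above (shell arithmetic for illustration only):

    $ echo 'scale=2; 7631.74 / 256' | bc
    29.81

matching the 29.81 MiB/s reported for the 7631.74 IOPS devices.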
00:08:30.663 ======================================================== 00:08:30.663 Latency(us) 00:08:30.663 Device Information : IOPS MiB/s Average min max 00:08:30.663 PCIE (0000:00:10.0) NSID 1 from core 0: 7192.87 28.10 2223.09 796.83 7915.65 00:08:30.663 PCIE (0000:00:11.0) NSID 1 from core 0: 7192.87 28.10 2224.11 805.63 7544.94 00:08:30.663 PCIE (0000:00:13.0) NSID 1 from core 0: 7192.87 28.10 2224.07 794.00 6450.34 00:08:30.663 PCIE (0000:00:12.0) NSID 1 from core 0: 7192.87 28.10 2224.08 790.27 7697.11 00:08:30.663 PCIE (0000:00:12.0) NSID 2 from core 0: 7192.87 28.10 2224.08 805.58 7031.61 00:08:30.663 PCIE (0000:00:12.0) NSID 3 from core 0: 7192.87 28.10 2224.07 811.16 6981.08 00:08:30.663 ======================================================== 00:08:30.663 Total : 43157.23 168.58 2223.92 790.27 7915.65 00:08:30.663 00:08:30.663 Initializing NVMe Controllers 00:08:30.663 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:30.663 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:30.663 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:30.663 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:30.663 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:08:30.663 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:08:30.663 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:08:30.663 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:08:30.663 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:08:30.663 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:08:30.663 Initialization complete. Launching workers. 00:08:30.663 ======================================================== 00:08:30.663 Latency(us) 00:08:30.663 Device Information : IOPS MiB/s Average min max 00:08:30.663 PCIE (0000:00:10.0) NSID 1 from core 1: 7145.34 27.91 2237.85 760.06 6185.85 00:08:30.663 PCIE (0000:00:11.0) NSID 1 from core 1: 7145.34 27.91 2238.74 786.26 6066.50 00:08:30.663 PCIE (0000:00:13.0) NSID 1 from core 1: 7145.34 27.91 2238.68 711.14 6202.92 00:08:30.663 PCIE (0000:00:12.0) NSID 1 from core 1: 7145.34 27.91 2238.63 689.07 6112.40 00:08:30.663 PCIE (0000:00:12.0) NSID 2 from core 1: 7145.34 27.91 2238.59 658.06 5717.12 00:08:30.663 PCIE (0000:00:12.0) NSID 3 from core 1: 7145.34 27.91 2238.54 650.27 6447.93 00:08:30.663 ======================================================== 00:08:30.663 Total : 42872.05 167.47 2238.51 650.27 6447.93 00:08:30.663 00:08:33.194 Initializing NVMe Controllers 00:08:33.194 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:33.194 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:33.194 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:33.194 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:33.194 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:08:33.194 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:08:33.194 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:08:33.194 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:08:33.194 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:08:33.194 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:08:33.194 Initialization complete. Launching workers. 
00:08:33.194 ======================================================== 00:08:33.194 Latency(us) 00:08:33.194 Device Information : IOPS MiB/s Average min max 00:08:33.194 PCIE (0000:00:10.0) NSID 1 from core 2: 4155.69 16.23 3848.75 843.36 15864.53 00:08:33.194 PCIE (0000:00:11.0) NSID 1 from core 2: 4155.69 16.23 3849.54 784.06 15759.73 00:08:33.194 PCIE (0000:00:13.0) NSID 1 from core 2: 4155.69 16.23 3849.48 852.04 20479.30 00:08:33.194 PCIE (0000:00:12.0) NSID 1 from core 2: 4155.69 16.23 3849.03 844.38 19705.57 00:08:33.194 PCIE (0000:00:12.0) NSID 2 from core 2: 4155.69 16.23 3849.35 862.32 15715.94 00:08:33.194 PCIE (0000:00:12.0) NSID 3 from core 2: 4155.69 16.23 3849.66 846.81 16332.55 00:08:33.194 ======================================================== 00:08:33.194 Total : 24934.17 97.40 3849.30 784.06 20479.30 00:08:33.194 00:08:33.194 23:29:20 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 64363 00:08:33.194 23:29:20 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 64364 00:08:33.194 00:08:33.194 real 0m10.853s 00:08:33.194 user 0m18.366s 00:08:33.194 sys 0m0.640s 00:08:33.194 23:29:20 nvme.nvme_multi_secondary -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:33.194 ************************************ 00:08:33.194 23:29:20 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:08:33.194 END TEST nvme_multi_secondary 00:08:33.194 ************************************ 00:08:33.194 23:29:20 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:08:33.194 23:29:20 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:08:33.194 23:29:20 nvme -- common/autotest_common.sh@1089 -- # [[ -e /proc/63321 ]] 00:08:33.194 23:29:20 nvme -- common/autotest_common.sh@1090 -- # kill 63321 00:08:33.194 23:29:20 nvme -- common/autotest_common.sh@1091 -- # wait 63321 00:08:33.194 [2024-09-28 23:29:20.955502] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64237) is not found. Dropping the request. 00:08:33.194 [2024-09-28 23:29:20.955597] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64237) is not found. Dropping the request. 00:08:33.194 [2024-09-28 23:29:20.955627] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64237) is not found. Dropping the request. 00:08:33.194 [2024-09-28 23:29:20.955645] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64237) is not found. Dropping the request. 00:08:33.194 [2024-09-28 23:29:20.958072] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64237) is not found. Dropping the request. 00:08:33.194 [2024-09-28 23:29:20.958126] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64237) is not found. Dropping the request. 00:08:33.194 [2024-09-28 23:29:20.958143] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64237) is not found. Dropping the request. 00:08:33.194 [2024-09-28 23:29:20.958161] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64237) is not found. Dropping the request. 00:08:33.194 [2024-09-28 23:29:20.959930] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64237) is not found. Dropping the request. 
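The burst of "owning process (pid 64237) is not found" notices here and just below is fallout from kill_stub tearing down the long-lived setup stub: pending admin requests owned by a process that no longer exists are dropped by whoever notices them. Per the @1089-@1091 trace above and the rm -f that follows the errors, the cleanup amounts to a guarded kill-and-reap; a sketch only, the real helper's details may differ:

    kill_stub() {
      local stubpid=63321                  # recorded when the stub was started
      [[ -e /proc/$stubpid ]] && kill "$stubpid"
      wait "$stubpid" 2>/dev/null || true  # reap it, ignoring the kill-induced status
      rm -f /var/run/spdk_stub0            # drop the stub's marker file
    }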
00:08:33.194 [2024-09-28 23:29:20.959966] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64237) is not found. Dropping the request. 00:08:33.194 [2024-09-28 23:29:20.959978] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64237) is not found. Dropping the request. 00:08:33.194 [2024-09-28 23:29:20.959990] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64237) is not found. Dropping the request. 00:08:33.194 [2024-09-28 23:29:20.961653] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64237) is not found. Dropping the request. 00:08:33.194 [2024-09-28 23:29:20.961693] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64237) is not found. Dropping the request. 00:08:33.194 [2024-09-28 23:29:20.961705] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64237) is not found. Dropping the request. 00:08:33.194 [2024-09-28 23:29:20.961717] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64237) is not found. Dropping the request. 00:08:33.194 23:29:21 nvme -- common/autotest_common.sh@1093 -- # rm -f /var/run/spdk_stub0 00:08:33.194 23:29:21 nvme -- common/autotest_common.sh@1097 -- # echo 2 00:08:33.194 23:29:21 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:08:33.194 23:29:21 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:33.194 23:29:21 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:33.194 23:29:21 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:33.194 ************************************ 00:08:33.194 START TEST bdev_nvme_reset_stuck_adm_cmd 00:08:33.194 ************************************ 00:08:33.194 23:29:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:08:33.194 * Looking for test storage... 
* Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:08:33.194 23:29:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:08:33.194 23:29:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1681 -- # lcov --version
00:08:33.194 23:29:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:08:33.194 23:29:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:08:33.194 23:29:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:08:33.194 23:29:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l
00:08:33.194 23:29:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l
00:08:33.194 23:29:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-:
00:08:33.194 23:29:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1
00:08:33.194 23:29:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-:
00:08:33.194 23:29:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2
00:08:33.194 23:29:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<'
00:08:33.194 23:29:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2
00:08:33.194 23:29:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1
00:08:33.194 23:29:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:08:33.194 23:29:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in
00:08:33.194 23:29:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1
00:08:33.194 23:29:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 ))
00:08:33.194 23:29:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:08:33.194 23:29:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1
00:08:33.194 23:29:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1
00:08:33.194 23:29:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:33.194 23:29:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1
00:08:33.194 23:29:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1
00:08:33.194 23:29:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2
00:08:33.194 23:29:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2
00:08:33.194 23:29:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:33.194 23:29:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2
00:08:33.194 23:29:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2
00:08:33.194 23:29:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:08:33.194 23:29:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:08:33.194 23:29:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0
00:08:33.194 23:29:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:08:33.194 23:29:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:08:33.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:33.194 --rc genhtml_branch_coverage=1
00:08:33.194 --rc genhtml_function_coverage=1
00:08:33.194 --rc genhtml_legend=1
00:08:33.194 --rc geninfo_all_blocks=1
00:08:33.194 --rc geninfo_unexecuted_blocks=1
00:08:33.194
00:08:33.194 '
00:08:33.194 23:29:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:08:33.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:33.194 --rc genhtml_branch_coverage=1
00:08:33.194 --rc genhtml_function_coverage=1
00:08:33.194 --rc genhtml_legend=1
00:08:33.194 --rc geninfo_all_blocks=1
00:08:33.194 --rc geninfo_unexecuted_blocks=1
00:08:33.194
00:08:33.194 '
00:08:33.194 23:29:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov
00:08:33.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:33.194 --rc genhtml_branch_coverage=1
00:08:33.194 --rc genhtml_function_coverage=1
00:08:33.194 --rc genhtml_legend=1
00:08:33.194 --rc geninfo_all_blocks=1
00:08:33.194 --rc geninfo_unexecuted_blocks=1
00:08:33.194
00:08:33.194 '
00:08:33.194 23:29:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1695 -- # LCOV='lcov
00:08:33.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:33.194 --rc genhtml_branch_coverage=1
00:08:33.194 --rc genhtml_function_coverage=1
00:08:33.194 --rc genhtml_legend=1
00:08:33.194 --rc geninfo_all_blocks=1
00:08:33.194 --rc geninfo_unexecuted_blocks=1
00:08:33.194
00:08:33.194 '
23:29:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0
23:29:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000
23:29:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5
23:29:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:08:33.194 23:29:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:08:33.194 23:29:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:08:33.194 23:29:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1507 -- # bdfs=() 00:08:33.194 23:29:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1507 -- # local bdfs 00:08:33.194 23:29:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:08:33.194 23:29:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:08:33.194 23:29:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1496 -- # bdfs=() 00:08:33.194 23:29:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1496 -- # local bdfs 00:08:33.194 23:29:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:33.194 23:29:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:08:33.194 23:29:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:33.194 23:29:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:08:33.194 23:29:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:33.195 23:29:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # echo 0000:00:10.0 00:08:33.195 23:29:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:08:33.195 23:29:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:08:33.195 23:29:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=64521 00:08:33.195 23:29:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:08:33.195 23:29:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:33.195 23:29:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 64521 00:08:33.195 23:29:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@831 -- # '[' -z 64521 ']' 00:08:33.195 23:29:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:33.195 23:29:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:33.195 23:29:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:33.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
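The trace below gets dense, so the choreography is worth spelling out first: spdk_tgt comes up and is waited on through its /var/tmp/spdk.sock RPC socket; nvme0 is attached over PCIe; an error injection is armed for admin opcode 10 (0x0a, Get Features) with --do_not_submit, so the next such command is held rather than sent to the device; bdev_nvme_send_cmd then issues exactly that command in the background and sticks; and bdev_nvme_reset_controller has to complete the stuck request manually with the injected status. Condensed from the RPC calls that appear below (the base64 command buffer is abbreviated here):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0
    $rpc bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
        --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
    $rpc bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c "$cmd_b64" &   # hangs on the injection
    get_feat_pid=$!
    sleep 2
    $rpc bdev_nvme_reset_controller nvme0   # the reset completes the stuck admin command
    wait "$get_feat_pid"                    # returns with the injected sct 0 / sc 1 status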
00:08:33.195 23:29:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # xtrace_disable
00:08:33.195 23:29:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x
00:08:33.454 [2024-09-28 23:29:21.429263] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization...
00:08:33.454 [2024-09-28 23:29:21.429387] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64521 ]
00:08:33.454 [2024-09-28 23:29:21.593889] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4
00:08:33.712 [2024-09-28 23:29:21.785897] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:08:33.712 [2024-09-28 23:29:21.786141] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2
00:08:33.712 [2024-09-28 23:29:21.786474] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:08:33.713 [2024-09-28 23:29:21.786496] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3
00:08:34.280 23:29:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:08:34.280 23:29:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # return 0
00:08:34.280 23:29:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0
00:08:34.280 23:29:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:34.280 23:29:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x
00:08:34.539 nvme0n1
00:08:34.539 23:29:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:34.539 23:29:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt
00:08:34.539 23:29:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_iyWEI.txt
00:08:34.539 23:29:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
00:08:34.539 23:29:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:34.539 23:29:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x
00:08:34.539 true
00:08:34.539 23:29:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:34.539 23:29:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s
00:08:34.539 23:29:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1727566162
00:08:34.539 23:29:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=64544
00:08:34.539 23:29:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT
00:08:34.539 23:29:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2
00:08:34.539 23:29:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA==
00:08:36.444 23:29:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0
00:08:36.444 23:29:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:36.444 23:29:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x
00:08:36.444 [2024-09-28 23:29:24.482287] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller
00:08:36.444 [2024-09-28 23:29:24.482793] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:08:36.444 [2024-09-28 23:29:24.482835] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0
00:08:36.444 [2024-09-28 23:29:24.482847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:36.444 [2024-09-28 23:29:24.484632] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:08:36.444 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 64544
00:08:36.444 23:29:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:36.444 23:29:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 64544
00:08:36.444 23:29:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 64544
00:08:36.444 23:29:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s
00:08:36.444 23:29:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2
00:08:36.444 23:29:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:08:36.444 23:29:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:36.444 23:29:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x
00:08:36.444 23:29:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:36.444 23:29:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT
00:08:36.444 23:29:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_iyWEI.txt
00:08:36.444 23:29:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA==
00:08:36.444 23:29:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255
00:08:36.444 23:29:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status
00:08:36.444 23:29:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"'))
00:08:36.444 23:29:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"'
00:08:36.444 23:29:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63
00:08:36.444 23:29:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- #
printf %s AAAAAAAAAAAAAAAAAAACAA== 00:08:36.444 23:29:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:08:36.444 23:29:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:08:36.444 23:29:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:08:36.444 23:29:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:08:36.444 23:29:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:08:36.444 23:29:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:08:36.444 23:29:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:08:36.444 23:29:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:08:36.444 23:29:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:08:36.444 23:29:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:08:36.444 23:29:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:08:36.444 23:29:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:08:36.444 23:29:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_iyWEI.txt 00:08:36.444 23:29:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 64521 00:08:36.444 23:29:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@950 -- # '[' -z 64521 ']' 00:08:36.444 23:29:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # kill -0 64521 00:08:36.444 23:29:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@955 -- # uname 00:08:36.445 23:29:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:36.445 23:29:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64521 00:08:36.445 killing process with pid 64521 00:08:36.445 23:29:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:36.445 23:29:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:36.445 23:29:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64521' 00:08:36.445 23:29:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@969 -- # kill 64521 00:08:36.445 23:29:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@974 -- # wait 64521 00:08:37.820 23:29:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:08:37.820 23:29:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:08:37.820 ************************************ 00:08:37.820 END TEST bdev_nvme_reset_stuck_adm_cmd 00:08:37.820 ************************************ 00:08:37.820 00:08:37.820 real 0m4.726s 00:08:37.820 user 0m16.299s 00:08:37.820 sys 
0m0.535s 00:08:37.820 23:29:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:37.820 23:29:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:37.820 23:29:25 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:08:37.820 23:29:25 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:08:37.820 23:29:25 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:37.820 23:29:25 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:37.820 23:29:25 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:37.820 ************************************ 00:08:37.820 START TEST nvme_fio 00:08:37.820 ************************************ 00:08:37.820 23:29:25 nvme.nvme_fio -- common/autotest_common.sh@1125 -- # nvme_fio_test 00:08:37.820 23:29:25 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:08:37.820 23:29:25 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:08:37.820 23:29:25 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:08:37.820 23:29:25 nvme.nvme_fio -- common/autotest_common.sh@1496 -- # bdfs=() 00:08:37.820 23:29:25 nvme.nvme_fio -- common/autotest_common.sh@1496 -- # local bdfs 00:08:37.820 23:29:25 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:37.820 23:29:25 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:37.820 23:29:25 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:08:38.077 23:29:25 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:08:38.077 23:29:25 nvme.nvme_fio -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:38.077 23:29:25 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:08:38.077 23:29:25 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:08:38.077 23:29:25 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:38.077 23:29:25 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:38.077 23:29:25 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:38.077 23:29:26 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:38.077 23:29:26 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:38.337 23:29:26 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:38.337 23:29:26 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:38.337 23:29:26 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:38.337 23:29:26 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:08:38.337 23:29:26 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:38.337 23:29:26 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:08:38.337 23:29:26 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local 
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:38.337 23:29:26 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:08:38.337 23:29:26 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:08:38.337 23:29:26 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:08:38.337 23:29:26 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:38.337 23:29:26 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:08:38.337 23:29:26 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:08:38.337 23:29:26 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:38.337 23:29:26 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:38.337 23:29:26 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:08:38.337 23:29:26 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:38.337 23:29:26 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:38.596 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:38.596 fio-3.35 00:08:38.596 Starting 1 thread 00:08:43.899 00:08:43.899 test: (groupid=0, jobs=1): err= 0: pid=64685: Sat Sep 28 23:29:31 2024 00:08:43.899 read: IOPS=23.7k, BW=92.7MiB/s (97.2MB/s)(185MiB/2001msec) 00:08:43.899 slat (nsec): min=3347, max=72565, avg=4877.13, stdev=2094.35 00:08:43.899 clat (usec): min=239, max=8376, avg=2690.38, stdev=776.92 00:08:43.899 lat (usec): min=243, max=8387, avg=2695.26, stdev=778.31 00:08:43.899 clat percentiles (usec): 00:08:43.899 | 1.00th=[ 1926], 5.00th=[ 2278], 10.00th=[ 2343], 20.00th=[ 2409], 00:08:43.899 | 30.00th=[ 2442], 40.00th=[ 2442], 50.00th=[ 2474], 60.00th=[ 2507], 00:08:43.899 | 70.00th=[ 2540], 80.00th=[ 2638], 90.00th=[ 2999], 95.00th=[ 4490], 00:08:43.899 | 99.00th=[ 6325], 99.50th=[ 6718], 99.90th=[ 8160], 99.95th=[ 8291], 00:08:43.899 | 99.99th=[ 8356] 00:08:43.899 bw ( KiB/s): min=90088, max=97360, per=97.51%, avg=92525.33, stdev=4186.99, samples=3 00:08:43.899 iops : min=22522, max=24340, avg=23131.33, stdev=1046.75, samples=3 00:08:43.899 write: IOPS=23.6k, BW=92.1MiB/s (96.6MB/s)(184MiB/2001msec); 0 zone resets 00:08:43.899 slat (nsec): min=3464, max=56130, avg=5243.52, stdev=2118.03 00:08:43.899 clat (usec): min=213, max=8386, avg=2699.42, stdev=791.39 00:08:43.899 lat (usec): min=218, max=8398, avg=2704.66, stdev=792.77 00:08:43.899 clat percentiles (usec): 00:08:43.899 | 1.00th=[ 1926], 5.00th=[ 2278], 10.00th=[ 2343], 20.00th=[ 2409], 00:08:43.899 | 30.00th=[ 2442], 40.00th=[ 2442], 50.00th=[ 2474], 60.00th=[ 2507], 00:08:43.899 | 70.00th=[ 2540], 80.00th=[ 2638], 90.00th=[ 3032], 95.00th=[ 4555], 00:08:43.899 | 99.00th=[ 6325], 99.50th=[ 6783], 99.90th=[ 8160], 99.95th=[ 8291], 00:08:43.899 | 99.99th=[ 8356] 00:08:43.899 bw ( KiB/s): min=89464, max=96600, per=98.22%, avg=92618.67, stdev=3639.11, samples=3 00:08:43.899 iops : min=22366, max=24150, avg=23154.67, stdev=909.78, samples=3 00:08:43.899 lat (usec) : 250=0.01%, 500=0.01%, 750=0.02%, 1000=0.01% 00:08:43.899 lat (msec) : 2=1.23%, 4=92.55%, 10=6.17% 00:08:43.899 cpu : usr=99.30%, sys=0.00%, ctx=6, majf=0, minf=607 00:08:43.899 IO depths : 1=0.1%, 
2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:08:43.899 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:43.899 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:43.899 issued rwts: total=47466,47171,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:43.899 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:43.899 00:08:43.899 Run status group 0 (all jobs): 00:08:43.899 READ: bw=92.7MiB/s (97.2MB/s), 92.7MiB/s-92.7MiB/s (97.2MB/s-97.2MB/s), io=185MiB (194MB), run=2001-2001msec 00:08:43.899 WRITE: bw=92.1MiB/s (96.6MB/s), 92.1MiB/s-92.1MiB/s (96.6MB/s-96.6MB/s), io=184MiB (193MB), run=2001-2001msec 00:08:44.158 ----------------------------------------------------- 00:08:44.158 Suppressions used: 00:08:44.158 count bytes template 00:08:44.158 1 32 /usr/src/fio/parse.c 00:08:44.158 1 8 libtcmalloc_minimal.so 00:08:44.158 ----------------------------------------------------- 00:08:44.158 00:08:44.158 23:29:32 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:08:44.158 23:29:32 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:44.158 23:29:32 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:44.158 23:29:32 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:44.158 23:29:32 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:44.158 23:29:32 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:44.416 23:29:32 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:44.417 23:29:32 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:08:44.417 23:29:32 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:08:44.417 23:29:32 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:08:44.417 23:29:32 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:44.417 23:29:32 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:08:44.417 23:29:32 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:44.417 23:29:32 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:08:44.417 23:29:32 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:08:44.417 23:29:32 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:08:44.417 23:29:32 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:44.417 23:29:32 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:08:44.417 23:29:32 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:08:44.417 23:29:32 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:44.417 23:29:32 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:44.417 23:29:32 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:08:44.417 23:29:32 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # 
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:44.417 23:29:32 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:08:44.675 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:44.675 fio-3.35 00:08:44.675 Starting 1 thread 00:08:51.233 00:08:51.233 test: (groupid=0, jobs=1): err= 0: pid=64740: Sat Sep 28 23:29:38 2024 00:08:51.233 read: IOPS=21.1k, BW=82.3MiB/s (86.3MB/s)(165MiB/2001msec) 00:08:51.233 slat (nsec): min=4792, max=68260, avg=5960.30, stdev=2616.79 00:08:51.233 clat (usec): min=675, max=9820, avg=3032.77, stdev=977.72 00:08:51.233 lat (usec): min=689, max=9872, avg=3038.73, stdev=979.38 00:08:51.233 clat percentiles (usec): 00:08:51.233 | 1.00th=[ 2409], 5.00th=[ 2540], 10.00th=[ 2573], 20.00th=[ 2573], 00:08:51.233 | 30.00th=[ 2606], 40.00th=[ 2638], 50.00th=[ 2638], 60.00th=[ 2671], 00:08:51.233 | 70.00th=[ 2737], 80.00th=[ 2966], 90.00th=[ 4293], 95.00th=[ 5800], 00:08:51.233 | 99.00th=[ 6521], 99.50th=[ 6718], 99.90th=[ 7635], 99.95th=[ 7963], 00:08:51.233 | 99.99th=[ 9634] 00:08:51.233 bw ( KiB/s): min=77944, max=87048, per=97.82%, avg=82456.00, stdev=4552.53, samples=3 00:08:51.233 iops : min=19486, max=21762, avg=20614.00, stdev=1138.13, samples=3 00:08:51.233 write: IOPS=20.9k, BW=81.8MiB/s (85.8MB/s)(164MiB/2001msec); 0 zone resets 00:08:51.233 slat (nsec): min=4884, max=96065, avg=6360.48, stdev=2724.12 00:08:51.233 clat (usec): min=720, max=9649, avg=3033.71, stdev=974.59 00:08:51.233 lat (usec): min=734, max=9665, avg=3040.07, stdev=976.26 00:08:51.233 clat percentiles (usec): 00:08:51.233 | 1.00th=[ 2409], 5.00th=[ 2540], 10.00th=[ 2573], 20.00th=[ 2573], 00:08:51.233 | 30.00th=[ 2606], 40.00th=[ 2638], 50.00th=[ 2638], 60.00th=[ 2671], 00:08:51.233 | 70.00th=[ 2737], 80.00th=[ 2966], 90.00th=[ 4293], 95.00th=[ 5800], 00:08:51.233 | 99.00th=[ 6521], 99.50th=[ 6783], 99.90th=[ 7767], 99.95th=[ 8029], 00:08:51.233 | 99.99th=[ 9241] 00:08:51.233 bw ( KiB/s): min=77784, max=86928, per=98.47%, avg=82504.00, stdev=4579.18, samples=3 00:08:51.233 iops : min=19446, max=21734, avg=20626.67, stdev=1145.76, samples=3 00:08:51.233 lat (usec) : 750=0.01%, 1000=0.01% 00:08:51.233 lat (msec) : 2=0.08%, 4=88.64%, 10=11.27% 00:08:51.233 cpu : usr=99.30%, sys=0.00%, ctx=10, majf=0, minf=607 00:08:51.233 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:08:51.233 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:51.233 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:51.233 issued rwts: total=42168,41915,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:51.233 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:51.233 00:08:51.233 Run status group 0 (all jobs): 00:08:51.233 READ: bw=82.3MiB/s (86.3MB/s), 82.3MiB/s-82.3MiB/s (86.3MB/s-86.3MB/s), io=165MiB (173MB), run=2001-2001msec 00:08:51.233 WRITE: bw=81.8MiB/s (85.8MB/s), 81.8MiB/s-81.8MiB/s (85.8MB/s-85.8MB/s), io=164MiB (172MB), run=2001-2001msec 00:08:51.233 ----------------------------------------------------- 00:08:51.233 Suppressions used: 00:08:51.233 count bytes template 00:08:51.233 1 32 /usr/src/fio/parse.c 00:08:51.233 1 8 libtcmalloc_minimal.so 00:08:51.233 ----------------------------------------------------- 00:08:51.233 00:08:51.233 23:29:38 nvme.nvme_fio -- nvme/nvme.sh@44 -- # 
ran_fio=true 00:08:51.233 23:29:38 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:51.233 23:29:38 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:08:51.233 23:29:38 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:51.233 23:29:38 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:51.233 23:29:38 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:08:51.233 23:29:38 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:51.233 23:29:38 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:08:51.233 23:29:38 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:08:51.233 23:29:38 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:08:51.233 23:29:38 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:51.233 23:29:38 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:08:51.233 23:29:38 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:51.233 23:29:38 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:08:51.233 23:29:38 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:08:51.233 23:29:38 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:08:51.233 23:29:38 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:51.233 23:29:38 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:08:51.233 23:29:38 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:08:51.233 23:29:38 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:51.233 23:29:38 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:51.233 23:29:38 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:08:51.233 23:29:38 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:51.233 23:29:38 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:08:51.233 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:51.233 fio-3.35 00:08:51.233 Starting 1 thread 00:08:57.953 00:08:57.953 test: (groupid=0, jobs=1): err= 0: pid=64801: Sat Sep 28 23:29:44 2024 00:08:57.953 read: IOPS=20.4k, BW=79.7MiB/s (83.6MB/s)(160MiB/2001msec) 00:08:57.953 slat (nsec): min=3967, max=60723, avg=5913.71, stdev=2306.66 00:08:57.953 clat (usec): min=200, max=11226, avg=3127.61, stdev=952.24 00:08:57.953 lat (usec): min=205, max=11267, avg=3133.52, stdev=953.38 00:08:57.953 clat percentiles (usec): 00:08:57.953 | 1.00th=[ 2245], 5.00th=[ 2376], 10.00th=[ 2474], 20.00th=[ 2540], 00:08:57.953 | 30.00th=[ 2638], 40.00th=[ 2704], 50.00th=[ 2769], 60.00th=[ 2868], 
00:08:57.953 | 70.00th=[ 3064], 80.00th=[ 3458], 90.00th=[ 4490], 95.00th=[ 5342], 00:08:57.953 | 99.00th=[ 6718], 99.50th=[ 7046], 99.90th=[ 7767], 99.95th=[ 9372], 00:08:57.953 | 99.99th=[11207] 00:08:57.953 bw ( KiB/s): min=80416, max=81656, per=99.42%, avg=81157.33, stdev=654.65, samples=3 00:08:57.953 iops : min=20104, max=20414, avg=20289.33, stdev=163.66, samples=3 00:08:57.953 write: IOPS=20.4k, BW=79.5MiB/s (83.4MB/s)(159MiB/2001msec); 0 zone resets 00:08:57.953 slat (nsec): min=4042, max=60051, avg=6262.37, stdev=2379.66 00:08:57.953 clat (usec): min=233, max=11177, avg=3129.57, stdev=949.47 00:08:57.953 lat (usec): min=238, max=11186, avg=3135.83, stdev=950.60 00:08:57.953 clat percentiles (usec): 00:08:57.953 | 1.00th=[ 2278], 5.00th=[ 2409], 10.00th=[ 2474], 20.00th=[ 2540], 00:08:57.953 | 30.00th=[ 2638], 40.00th=[ 2704], 50.00th=[ 2769], 60.00th=[ 2900], 00:08:57.953 | 70.00th=[ 3064], 80.00th=[ 3458], 90.00th=[ 4490], 95.00th=[ 5342], 00:08:57.953 | 99.00th=[ 6652], 99.50th=[ 7111], 99.90th=[ 8094], 99.95th=[ 9765], 00:08:57.953 | 99.99th=[11076] 00:08:57.953 bw ( KiB/s): min=80768, max=81968, per=99.88%, avg=81338.67, stdev=602.15, samples=3 00:08:57.953 iops : min=20192, max=20492, avg=20334.67, stdev=150.54, samples=3 00:08:57.953 lat (usec) : 250=0.01%, 500=0.01%, 750=0.03%, 1000=0.01% 00:08:57.953 lat (msec) : 2=0.06%, 4=86.27%, 10=13.58%, 20=0.04% 00:08:57.953 cpu : usr=99.15%, sys=0.00%, ctx=5, majf=0, minf=607 00:08:57.953 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:08:57.953 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:57.953 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:57.953 issued rwts: total=40837,40738,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:57.953 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:57.953 00:08:57.953 Run status group 0 (all jobs): 00:08:57.953 READ: bw=79.7MiB/s (83.6MB/s), 79.7MiB/s-79.7MiB/s (83.6MB/s-83.6MB/s), io=160MiB (167MB), run=2001-2001msec 00:08:57.953 WRITE: bw=79.5MiB/s (83.4MB/s), 79.5MiB/s-79.5MiB/s (83.4MB/s-83.4MB/s), io=159MiB (167MB), run=2001-2001msec 00:08:57.953 ----------------------------------------------------- 00:08:57.954 Suppressions used: 00:08:57.954 count bytes template 00:08:57.954 1 32 /usr/src/fio/parse.c 00:08:57.954 1 8 libtcmalloc_minimal.so 00:08:57.954 ----------------------------------------------------- 00:08:57.954 00:08:57.954 23:29:45 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:08:57.954 23:29:45 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:57.954 23:29:45 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:08:57.954 23:29:45 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:57.954 23:29:45 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:08:57.954 23:29:45 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:57.954 23:29:45 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:57.954 23:29:45 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:08:57.954 23:29:45 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 
/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:08:57.954 23:29:45 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:08:57.954 23:29:45 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:57.954 23:29:45 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:08:57.954 23:29:45 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:57.954 23:29:45 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:08:57.954 23:29:45 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:08:57.954 23:29:45 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:08:57.954 23:29:45 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:57.954 23:29:45 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:08:57.954 23:29:45 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:08:57.954 23:29:45 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:57.954 23:29:45 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:57.954 23:29:45 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:08:57.954 23:29:45 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:57.954 23:29:45 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:08:57.954 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:57.954 fio-3.35 00:08:57.954 Starting 1 thread 00:09:06.088 00:09:06.088 test: (groupid=0, jobs=1): err= 0: pid=64862: Sat Sep 28 23:29:53 2024 00:09:06.088 read: IOPS=20.6k, BW=80.5MiB/s (84.4MB/s)(161MiB/2001msec) 00:09:06.088 slat (nsec): min=4790, max=67364, avg=5843.31, stdev=2286.88 00:09:06.088 clat (usec): min=219, max=11352, avg=3093.09, stdev=902.90 00:09:06.088 lat (usec): min=225, max=11419, avg=3098.93, stdev=904.06 00:09:06.088 clat percentiles (usec): 00:09:06.088 | 1.00th=[ 2212], 5.00th=[ 2376], 10.00th=[ 2442], 20.00th=[ 2540], 00:09:06.088 | 30.00th=[ 2638], 40.00th=[ 2704], 50.00th=[ 2769], 60.00th=[ 2868], 00:09:06.088 | 70.00th=[ 2999], 80.00th=[ 3359], 90.00th=[ 4359], 95.00th=[ 5211], 00:09:06.088 | 99.00th=[ 6456], 99.50th=[ 6652], 99.90th=[ 7504], 99.95th=[ 8717], 00:09:06.088 | 99.99th=[10945] 00:09:06.088 bw ( KiB/s): min=81416, max=84680, per=100.00%, avg=82645.33, stdev=1774.78, samples=3 00:09:06.088 iops : min=20354, max=21170, avg=20661.33, stdev=443.70, samples=3 00:09:06.088 write: IOPS=20.5k, BW=80.2MiB/s (84.1MB/s)(161MiB/2001msec); 0 zone resets 00:09:06.088 slat (nsec): min=4863, max=63431, avg=6238.94, stdev=2370.48 00:09:06.088 clat (usec): min=321, max=10952, avg=3102.87, stdev=908.04 00:09:06.088 lat (usec): min=327, max=10968, avg=3109.11, stdev=909.22 00:09:06.088 clat percentiles (usec): 00:09:06.088 | 1.00th=[ 2245], 5.00th=[ 2376], 10.00th=[ 2442], 20.00th=[ 2540], 00:09:06.088 | 30.00th=[ 2638], 40.00th=[ 2704], 50.00th=[ 2802], 60.00th=[ 2868], 00:09:06.088 | 70.00th=[ 3032], 80.00th=[ 3359], 90.00th=[ 4424], 95.00th=[ 5276], 
00:09:06.088 | 99.00th=[ 6456], 99.50th=[ 6718], 99.90th=[ 7570], 99.95th=[ 8848], 00:09:06.088 | 99.99th=[10814] 00:09:06.088 bw ( KiB/s): min=81528, max=85024, per=100.00%, avg=82701.33, stdev=2011.52, samples=3 00:09:06.088 iops : min=20382, max=21256, avg=20675.33, stdev=502.88, samples=3 00:09:06.088 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:09:06.088 lat (msec) : 2=0.09%, 4=86.96%, 10=12.88%, 20=0.03% 00:09:06.088 cpu : usr=99.20%, sys=0.00%, ctx=4, majf=0, minf=605 00:09:06.088 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:06.088 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:06.088 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:06.088 issued rwts: total=41222,41090,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:06.088 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:06.088 00:09:06.088 Run status group 0 (all jobs): 00:09:06.088 READ: bw=80.5MiB/s (84.4MB/s), 80.5MiB/s-80.5MiB/s (84.4MB/s-84.4MB/s), io=161MiB (169MB), run=2001-2001msec 00:09:06.088 WRITE: bw=80.2MiB/s (84.1MB/s), 80.2MiB/s-80.2MiB/s (84.1MB/s-84.1MB/s), io=161MiB (168MB), run=2001-2001msec 00:09:06.088 ----------------------------------------------------- 00:09:06.088 Suppressions used: 00:09:06.088 count bytes template 00:09:06.088 1 32 /usr/src/fio/parse.c 00:09:06.088 1 8 libtcmalloc_minimal.so 00:09:06.088 ----------------------------------------------------- 00:09:06.088 00:09:06.088 ************************************ 00:09:06.088 END TEST nvme_fio 00:09:06.088 ************************************ 00:09:06.088 23:29:54 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:06.088 23:29:54 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:09:06.088 00:09:06.088 real 0m28.102s 00:09:06.088 user 0m16.412s 00:09:06.088 sys 0m21.731s 00:09:06.088 23:29:54 nvme.nvme_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:06.088 23:29:54 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:09:06.088 ************************************ 00:09:06.088 END TEST nvme 00:09:06.088 ************************************ 00:09:06.088 00:09:06.088 real 1m37.571s 00:09:06.088 user 3m36.645s 00:09:06.088 sys 0m32.453s 00:09:06.088 23:29:54 nvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:06.088 23:29:54 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:06.088 23:29:54 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:09:06.088 23:29:54 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:06.088 23:29:54 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:06.088 23:29:54 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:06.088 23:29:54 -- common/autotest_common.sh@10 -- # set +x 00:09:06.088 ************************************ 00:09:06.088 START TEST nvme_scc 00:09:06.088 ************************************ 00:09:06.088 23:29:54 nvme_scc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:06.088 * Looking for test storage... 
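The lt 1.15 2 / cmp_versions calls traced in the next stretch decide whether the installed lcov predates version 2 before the coverage options are exported. A condensed sketch of that dotted-version compare, using the names from the xtrace (the real helper lives in scripts/common.sh and handles more operators than shown here):

# Sketch of the version compare exercised below; simplified to the
# '<' / '>' / equality cases that appear in this log.
cmp_versions() {                     # usage: cmp_versions 1.15 '<' 2
    local op=$2 v d1 d2
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"   # "1.15" -> (1 15), same split as the trace
    IFS=.-: read -ra ver2 <<< "$3"   # "2"    -> (2)
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        d1=${ver1[v]:-0} d2=${ver2[v]:-0}
        ((d1 > d2)) && { [[ $op == '>' ]]; return; }
        ((d1 < d2)) && { [[ $op == '<' ]]; return; }
    done
    [[ $op == *'='* ]]               # all components equal: only <=, >=, == succeed
}
lt() { cmp_versions "$1" '<' "$2"; }

lt 1.15 2 && echo "lcov 1.15 is older than 2"   # the branch taken in the trace below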
00:09:06.088 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:06.088 23:29:54 nvme_scc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:06.088 23:29:54 nvme_scc -- common/autotest_common.sh@1681 -- # lcov --version 00:09:06.088 23:29:54 nvme_scc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:06.088 23:29:54 nvme_scc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:06.088 23:29:54 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:06.088 23:29:54 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:06.088 23:29:54 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:06.088 23:29:54 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:09:06.088 23:29:54 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:09:06.088 23:29:54 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:09:06.088 23:29:54 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:09:06.088 23:29:54 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:09:06.088 23:29:54 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:09:06.088 23:29:54 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:09:06.088 23:29:54 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:06.088 23:29:54 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:09:06.088 23:29:54 nvme_scc -- scripts/common.sh@345 -- # : 1 00:09:06.088 23:29:54 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:06.088 23:29:54 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:06.088 23:29:54 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:09:06.088 23:29:54 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:09:06.088 23:29:54 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:06.088 23:29:54 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:09:06.088 23:29:54 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:06.088 23:29:54 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:09:06.088 23:29:54 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:09:06.088 23:29:54 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:06.088 23:29:54 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:09:06.088 23:29:54 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:06.088 23:29:54 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:06.088 23:29:54 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:06.088 23:29:54 nvme_scc -- scripts/common.sh@368 -- # return 0 00:09:06.088 23:29:54 nvme_scc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:06.088 23:29:54 nvme_scc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:06.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:06.088 --rc genhtml_branch_coverage=1 00:09:06.088 --rc genhtml_function_coverage=1 00:09:06.088 --rc genhtml_legend=1 00:09:06.088 --rc geninfo_all_blocks=1 00:09:06.088 --rc geninfo_unexecuted_blocks=1 00:09:06.088 00:09:06.088 ' 00:09:06.089 23:29:54 nvme_scc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:06.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:06.089 --rc genhtml_branch_coverage=1 00:09:06.089 --rc genhtml_function_coverage=1 00:09:06.089 --rc genhtml_legend=1 00:09:06.089 --rc geninfo_all_blocks=1 00:09:06.089 --rc geninfo_unexecuted_blocks=1 00:09:06.089 00:09:06.089 ' 00:09:06.089 23:29:54 nvme_scc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
00:09:06.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:06.089 --rc genhtml_branch_coverage=1 00:09:06.089 --rc genhtml_function_coverage=1 00:09:06.089 --rc genhtml_legend=1 00:09:06.089 --rc geninfo_all_blocks=1 00:09:06.089 --rc geninfo_unexecuted_blocks=1 00:09:06.089 00:09:06.089 ' 00:09:06.089 23:29:54 nvme_scc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:06.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:06.089 --rc genhtml_branch_coverage=1 00:09:06.089 --rc genhtml_function_coverage=1 00:09:06.089 --rc genhtml_legend=1 00:09:06.089 --rc geninfo_all_blocks=1 00:09:06.089 --rc geninfo_unexecuted_blocks=1 00:09:06.089 00:09:06.089 ' 00:09:06.089 23:29:54 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:06.089 23:29:54 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:06.347 23:29:54 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:06.347 23:29:54 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:06.347 23:29:54 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:06.347 23:29:54 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:09:06.347 23:29:54 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:06.347 23:29:54 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:06.347 23:29:54 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:06.347 23:29:54 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.347 23:29:54 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.347 23:29:54 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.347 23:29:54 nvme_scc -- paths/export.sh@5 -- # export PATH 00:09:06.348 23:29:54 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
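Once setup.sh rebinds the controllers and scan_nvme_ctrls starts, the xtrace that follows is functions.sh's nvme_get folding the `nvme id-ctrl` report into a bash associative array, one traced eval per field (nvme0[vid]=0x1b36, nvme0[sn]='12341 ', and so on). A minimal sketch of that pattern, with the whitespace trimming simplified relative to the real helper:

# Reduce `nvme id-ctrl /dev/nvme0` ("field : value" per line) to nvme0[field]=value.
declare -gA nvme0=()
while IFS=: read -r reg val; do
    [[ -n $reg && -n $val ]] || continue   # skip banner and blank lines
    reg=${reg%%[[:space:]]*}               # "vid       " -> "vid"
    val=${val# }                           # drop the space after the colon
    nvme0[$reg]=$val                       # e.g. nvme0[vid]=0x1b36
done < <(nvme id-ctrl /dev/nvme0)

echo "mdts=${nvme0[mdts]} oacs=${nvme0[oacs]}"   # 7 and 0x12a on this QEMU controller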
00:09:06.348 23:29:54 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:09:06.348 23:29:54 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:06.348 23:29:54 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:09:06.348 23:29:54 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:06.348 23:29:54 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:09:06.348 23:29:54 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:06.348 23:29:54 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:06.348 23:29:54 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:06.348 23:29:54 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:09:06.348 23:29:54 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:06.348 23:29:54 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:09:06.348 23:29:54 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:09:06.348 23:29:54 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:09:06.348 23:29:54 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:06.606 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:06.606 Waiting for block devices as requested 00:09:06.606 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:06.865 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:06.865 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:06.865 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:12.153 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:12.153 23:29:59 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:09:12.153 23:29:59 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:12.153 23:29:59 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:12.153 23:29:59 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:12.153 23:29:59 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:12.153 23:29:59 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:09:12.153 23:29:59 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:12.153 23:29:59 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:12.153 23:29:59 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:12.153 23:29:59 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:12.153 23:29:59 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:12.153 23:29:59 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:12.153 23:29:59 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:12.153 23:29:59 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:12.154 23:29:59 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:12.154 23:29:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.154 23:29:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.154 23:29:59 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.154 23:30:00 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.154 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:09:12.155 23:30:00 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.155 23:30:00 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.155 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.156 23:30:00 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:12.156 23:30:00 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.156 23:30:00 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@21 -- 
# read -r reg val 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme0n1[dlfeat]="1"' 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
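[Annotation: a quick sanity check on the geometry parsed so far. nvme0n1 reports nsze=0x140000 with flbas=0x4, and LBA format 4 shows up a few entries below as 'ms:0 lbads:12 rp:0 (in use)', i.e. 4096-byte blocks. Plain bash arithmetic (not part of the harness) confirms the 5 GiB QEMU namespace:

    nsze=0x140000                                # from nvme0n1[nsze] above
    lbads=12                                     # lbaf4 below: lbads:12 (in use)
    echo $(( nsze * (1 << lbads) ))              # 5368709120 bytes
    echo $(( (nsze * (1 << lbads)) >> 30 ))GiB   # prints: 5GiB
]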
00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:12.157 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:12.158 23:30:00 
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:09:12.158 23:30:00 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:12.158 23:30:00 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:12.158 23:30:00 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:12.158 23:30:00 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.158 23:30:00 nvme_scc -- 
nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:09:12.158 23:30:00 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[mdts]=7 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:09:12.158 
23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 
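[Annotation: both QEMU controllers advertise oncs=0x15d (nvme0 above, nvme1 a few entries below), which is the field an nvme_scc (simple copy) suite would ultimately key off. Per my reading of the NVMe base spec, ONCS bit 8 is the Copy command -- treat the bit positions below as an editor's annotation, not something asserted by the test itself:

    oncs=0x15d   # 0b1_0101_1101
    # assumed bit layout (NVMe base spec): 0 Compare, 1 Write Uncorrectable,
    # 2 Dataset Mgmt, 3 Write Zeroes, 4 Save/Select in Features,
    # 5 Reservations, 6 Timestamp, 7 Verify, 8 Copy
    (( oncs & 0x100 )) && echo "simple copy (SCC) supported"   # bit 8 is set in 0x15d
]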
00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:09:12.158 23:30:00 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:09:12.158 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.159 23:30:00 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:09:12.159 23:30:00 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0 ]] 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:09:12.159 23:30:00 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
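[The nvme1 field assignments traced above are all iterations of one loop in nvme/functions.sh: nvme-cli output is read line by line, split on the first ':', and eval'd into a global associative array. A minimal sketch of that loop, reconstructed from the @16-@23 markers in this trace; the whitespace trimming around reg/val is an assumption, since the trace only shows the resulting eval:]

    nvme_get() {                            # e.g. nvme_get nvme1 id-ctrl /dev/nvme1
        local ref=$1 reg val                # @17: ref names the target array
        shift                               # @18
        local -gA "$ref=()"                 # @20: declare, e.g., nvme1=() at global scope
        while IFS=: read -r reg val; do     # @21: split on the first ':' only;
                                            #      val keeps later colons (see ps0 below)
            [[ -n $val ]] || continue       # @22: skip header lines with no value
            reg=${reg//[[:space:]]/}        # assumed: strip padding around the key
            val=${val# }                    # assumed: drop the single leading space
            eval "${ref}[$reg]=\"$val\""    # @23: e.g. nvme1[sqes]="0x66"
        done < <(/usr/local/src/nvme-cli/nvme "$@")   # @16: id-ctrl or id-ns
    }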
00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1n1[ncap]=0x17a17a 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.159 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.160 23:30:00 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme1n1[nvmcap]="0"' 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:12.160 
23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:09:12.160 23:30:00 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:12.160 23:30:00 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:12.160 23:30:00 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:12.160 23:30:00 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:12.160 23:30:00 nvme_scc -- 
nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.160 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.161 23:30:00 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:09:12.161 23:30:00 nvme_scc 
-- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:09:12.161 23:30:00 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
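[The nvme2 pass under way here repeats the enumeration skeleton that the @47-@63 markers traced for nvme1: walk /sys/class/nvme, gate each controller on pci_can_use, fill its array via nvme_get, then do the same for each namespace and register everything. A sketch of that outer loop keyed to the same markers; the enclosing function name, the sysfs derivation of $pci, and the declarations of ctrls/nvmes/bdfs/ordered_ctrls are assumptions, as the trace shows only their use:]

    scan_nvme_ctrls() {                                   # name assumed
        local ctrl pci ctrl_dev ns ns_dev
        for ctrl in /sys/class/nvme/nvme*; do             # @47
            [[ -e $ctrl ]] || continue                    # @48
            pci=$(basename "$(readlink -f "$ctrl/device")")  # assumed; @49 shows only the result
            pci_can_use "$pci" || continue                # @50: allow/block-list gate (scripts/common.sh)
            ctrl_dev=${ctrl##*/}                          # @51: nvme1, nvme2, ...
            nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev" # @52: fill e.g. nvme2[...]
            local -n _ctrl_ns=${ctrl_dev}_ns              # @53: per-controller namespace map
            for ns in "$ctrl/${ctrl##*/}n"*; do           # @54
                [[ -e $ns ]] || continue                  # @55
                ns_dev=${ns##*/}                          # @56: e.g. nvme2n1
                nvme_get "$ns_dev" id-ns "/dev/$ns_dev"   # @57
                _ctrl_ns[${ns##*n}]=$ns_dev               # @58: keyed by namespace number
            done
            ctrls[$ctrl_dev]=$ctrl_dev                    # @60
            nvmes[$ctrl_dev]=${ctrl_dev}_ns               # @61
            bdfs[$ctrl_dev]=$pci                          # @62: e.g. 0000:00:10.0
            ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev    # @63
        done
    }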
00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:09:12.161 23:30:00 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:09:12.161 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 
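[For reference, the lbaf/flbas values captured for nvme1n1 earlier decode to the namespace geometry: flbas=0x7 selects lbaf7 ("ms:64 lbads:12 rp:0 (in use)"), i.e. 2^12 = 4096-byte logical blocks with 64 bytes of metadata, and nsze=0x17a17a blocks works out to 1,548,666 x 4096 = 6,343,335,936 bytes (~6.3 GB). A hypothetical one-liner against the arrays nvme_get fills, not part of the script itself:]

    fmt=$(( ${nvme1n1[flbas]} & 0xf ))        # low nibble of flbas picks the active LBA format
    lbads=$(sed -n 's/.*lbads:\([0-9]*\).*/\1/p' <<< "${nvme1n1[lbaf$fmt]}")
    printf 'block=%d B capacity=%d B\n' "$((1 << lbads))" "$(( ${nvme1n1[nsze]} * (1 << lbads) ))"
    # -> block=4096 B capacity=6343335936 B (0x17a17a blocks)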
00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0x3 ]] 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:09:12.162 
23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 
0x100000 ]] 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 
00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.162 23:30:00 
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:12.162 23:30:00 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.162 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 
ms:8 lbads:9 rp:0 ]] 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:09:12.163 23:30:00 nvme_scc -- 
nvme/functions.sh@18 -- # shift 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:09:12.163 23:30:00 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 
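[Editor's note] The trace above repeats one parsing pattern for every controller and namespace: nvme/functions.sh runs nvme-cli's id-ctrl or id-ns, reads the "field : value" report line by line with IFS=: and read -r, and evals each pair into a global associative array named after the device (nvme2, nvme2n1, nvme2n2, ...). A minimal sketch of that pattern follows; the name nvme_get_sketch is illustrative, and the real nvme_get helper in nvme/functions.sh differs in detail.

nvme_get_sketch() {
    # $1 = target array name (e.g. nvme2n2), $2 = nvme-cli subcommand, $3 = device node
    local ref=$1 cmd=$2 dev=$3 reg val
    local -gA "$ref=()"                       # global associative array, as in the trace
    while IFS=: read -r reg val; do
        reg=${reg// /}                        # strip the column padding around the field name
        val=${val# }                          # drop the space nvme-cli prints after the colon
        [[ -n $reg && -n $val ]] || continue  # skip blanks, mirroring the [[ -n ... ]] guards above
        eval "${ref}[${reg}]=\"\$val\""       # e.g. nvme2n2[nsze]=0x100000
    done < <(/usr/local/src/nvme-cli/nvme "$cmd" "$dev")
}

After a call such as nvme_get_sketch nvme2n1 id-ns /dev/nvme2n1, the fields are available as ${nvme2n1[nsze]}, ${nvme2n1[flbas]}, and so on — exactly the state the assignments in this trace are building up.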
00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.163 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 
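[Editor's note] The lbaf entries recorded for each namespace above are worth decoding once: flbas=0x4 selects LBA format 4 (the low nibble of flbas indexes the lbaf list), lbaf4 reads "ms:0 lbads:12 rp:0 (in use)", i.e. no metadata and 2^12 = 4096-byte logical blocks, and nsze=0x100000 gives the namespace size in logical blocks. A small self-contained check of that arithmetic, with the values copied from the trace (variable names are illustrative):

#!/usr/bin/env bash
nsze=0x100000   # namespace size in logical blocks, as captured for nvme2n1/nvme2n2
flbas=0x4       # formatted LBA size field
lbads=12        # from lbaf4: "ms:0 lbads:12 rp:0 (in use)"

fmt=$(( flbas & 0xf ))   # low nibble -> LBA format index 4
bs=$(( 1 << lbads ))     # 2^12 = 4096-byte logical blocks
echo "lbaf${fmt}: ${bs}-byte blocks, $(( nsze * bs / 1024**3 )) GiB per namespace"
# prints: lbaf4: 4096-byte blocks, 4 GiB per namespace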
00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.164 
23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.164 
23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:09:12.164 23:30:00 
nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:12.164 
23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:12.164 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:09:12.165 23:30:00 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:12.165 23:30:00 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:12.165 23:30:00 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:12.165 23:30:00 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:09:12.165 23:30:00 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 
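The xtrace above is one small pattern repeated once per field: nvme id-ctrl prints "reg : val" pairs, functions.sh splits each line on ':' with IFS=: read -r reg val, and eval stores the pair in a global associative array named after the controller (nvme3 here). A minimal standalone sketch of that loop, assuming a plain nvme binary on PATH; parse_id_ctrl is a hypothetical name, the real code does this inside nvme_get:

# Sketch of the parse loop driving the trace above. Values that themselves
# contain colons (the lbaf lines) survive intact because read hands the
# remainder of the line to the last variable.
parse_id_ctrl() {
    local dev=$1 reg val
    declare -gA ctrl=()
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}        # field names are padded with spaces
        [[ -n $reg && -n $val ]] || continue
        ctrl[$reg]=${val# }             # e.g. ctrl[vid]=0x1b36, ctrl[mdts]=7
    done < <(nvme id-ctrl "$dev")
}
parse_id_ctrl /dev/nvme3 && echo "${ctrl[sn]} (${ctrl[mn]})"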
00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme3[npss]="0"' 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.165 
23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:12.165 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme3[hmminds]="0"' 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
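One detail worth decoding from the dump above: wctemp=343 and cctemp=373 are in Kelvin, which is how the NVMe spec defines these thresholds, so this emulated controller warns at 70 C and goes critical at 100 C:

# Kelvin to Celsius for the thresholds captured above (343 K, 373 K)
echo "$((343 - 273)) C warning, $((373 - 273)) C critical"   # 70 C warning, 100 C critical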
00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.166 23:30:00 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
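The sqes=0x66 and cqes=0x44 values captured above pack two log2 sizes per byte: the low nibble is the required (minimum) queue entry size and the high nibble the maximum. Decoding them recovers the standard 64-byte submission and 16-byte completion entries:

# Low nibble = required entry size, high nibble = maximum, both as 2^n bytes
for v in 0x66 0x44; do
    printf '%s: required=%d max=%d bytes\n' "$v" "$((2 ** (v & 0xf)))" "$((2 ** (v >> 4)))"
done
# 0x66: required=64 max=64 bytes (SQ); 0x44: required=16 max=16 bytes (CQ)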
00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:12.166 23:30:00 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:12.166 23:30:00 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:09:12.166 
23:30:00 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:12.166 23:30:00 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:12.167 23:30:00 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:12.167 23:30:00 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3 00:09:12.167 23:30:00 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:12.167 23:30:00 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2 00:09:12.167 23:30:00 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs 00:09:12.167 23:30:00 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2 00:09:12.167 23:30:00 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2 00:09:12.167 23:30:00 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs 00:09:12.167 23:30:00 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:09:12.167 23:30:00 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:09:12.167 23:30:00 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:09:12.426 23:30:00 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:12.426 23:30:00 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:12.426 23:30:00 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:12.426 23:30:00 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:12.426 23:30:00 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2 00:09:12.426 23:30:00 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 )) 00:09:12.426 23:30:00 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1 00:09:12.426 23:30:00 nvme_scc -- nvme/functions.sh@209 -- # return 0 00:09:12.426 23:30:00 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:09:12.426 23:30:00 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:09:12.426 23:30:00 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:12.687 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:13.257 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:13.257 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:13.257 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:13.257 0000:00:12.0 (1b36 
0010): nvme -> uio_pci_generic 00:09:13.257 23:30:01 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:09:13.257 23:30:01 nvme_scc -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:13.257 23:30:01 nvme_scc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:13.258 23:30:01 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:09:13.258 ************************************ 00:09:13.258 START TEST nvme_simple_copy 00:09:13.258 ************************************ 00:09:13.258 23:30:01 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:09:13.516 Initializing NVMe Controllers 00:09:13.516 Attaching to 0000:00:10.0 00:09:13.516 Controller supports SCC. Attached to 0000:00:10.0 00:09:13.516 Namespace ID: 1 size: 6GB 00:09:13.517 Initialization complete. 00:09:13.517 00:09:13.517 Controller QEMU NVMe Ctrl (12340 ) 00:09:13.517 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:09:13.517 Namespace Block Size:4096 00:09:13.517 Writing LBAs 0 to 63 with Random Data 00:09:13.517 Copied LBAs from 0 - 63 to the Destination LBA 256 00:09:13.517 LBAs matching Written Data: 64 00:09:13.517 00:09:13.517 real 0m0.255s 00:09:13.517 user 0m0.089s 00:09:13.517 sys 0m0.064s 00:09:13.517 23:30:01 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:13.517 23:30:01 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:09:13.517 ************************************ 00:09:13.517 END TEST nvme_simple_copy 00:09:13.517 ************************************ 00:09:13.517 00:09:13.517 real 0m7.539s 00:09:13.517 user 0m0.973s 00:09:13.517 sys 0m1.419s 00:09:13.517 23:30:01 nvme_scc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:13.517 23:30:01 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:09:13.517 ************************************ 00:09:13.517 END TEST nvme_scc 00:09:13.517 ************************************ 00:09:13.517 23:30:01 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:09:13.517 23:30:01 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]] 00:09:13.517 23:30:01 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]] 00:09:13.517 23:30:01 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]] 00:09:13.517 23:30:01 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:09:13.517 23:30:01 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:13.517 23:30:01 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:13.517 23:30:01 -- common/autotest_common.sh@10 -- # set +x 00:09:13.517 ************************************ 00:09:13.517 START TEST nvme_fdp 00:09:13.517 ************************************ 00:09:13.517 23:30:01 nvme_fdp -- common/autotest_common.sh@1125 -- # test/nvme/nvme_fdp.sh 00:09:13.775 * Looking for test storage... 
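Everything ctrl_has_scc did above reduces to one bitmask test: ONCS bit 8 is the Copy command, all four controllers report oncs=0x15d (bit 8 set), so any of them qualifies, the loop settles on nvme1 at 0000:00:10.0, and the simple-copy test then wrote LBAs 0-63 with random data and verified all 64 after copying them to destination LBA 256. The check in isolation, using the value these controllers report:

# ONCS bit 8 = Copy (Simple Copy) support; the (( oncs & 1 << 8 )) test
# from functions.sh@188
oncs=0x15d
(( oncs & 1 << 8 )) && echo "SCC supported"   # 0x15d & 0x100 = 0x100, so yes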
00:09:13.775 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:13.775 23:30:01 nvme_fdp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:13.775 23:30:01 nvme_fdp -- common/autotest_common.sh@1681 -- # lcov --version 00:09:13.775 23:30:01 nvme_fdp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:13.775 23:30:01 nvme_fdp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:13.775 23:30:01 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:13.775 23:30:01 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:13.775 23:30:01 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:13.775 23:30:01 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-: 00:09:13.775 23:30:01 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1 00:09:13.775 23:30:01 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-: 00:09:13.775 23:30:01 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2 00:09:13.775 23:30:01 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<' 00:09:13.775 23:30:01 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2 00:09:13.775 23:30:01 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1 00:09:13.775 23:30:01 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:13.775 23:30:01 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in 00:09:13.775 23:30:01 nvme_fdp -- scripts/common.sh@345 -- # : 1 00:09:13.775 23:30:01 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:13.775 23:30:01 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:13.775 23:30:01 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:09:13.775 23:30:01 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:09:13.775 23:30:01 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:13.775 23:30:01 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:09:13.775 23:30:01 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:09:13.775 23:30:01 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:09:13.775 23:30:01 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:09:13.775 23:30:01 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:13.775 23:30:01 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:09:13.775 23:30:01 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:09:13.775 23:30:01 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:13.775 23:30:01 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:13.775 23:30:01 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:09:13.775 23:30:01 nvme_fdp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:13.775 23:30:01 nvme_fdp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:13.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:13.775 --rc genhtml_branch_coverage=1 00:09:13.775 --rc genhtml_function_coverage=1 00:09:13.775 --rc genhtml_legend=1 00:09:13.775 --rc geninfo_all_blocks=1 00:09:13.775 --rc geninfo_unexecuted_blocks=1 00:09:13.775 00:09:13.775 ' 00:09:13.775 23:30:01 nvme_fdp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:13.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:13.775 --rc genhtml_branch_coverage=1 00:09:13.775 --rc genhtml_function_coverage=1 00:09:13.775 --rc genhtml_legend=1 00:09:13.775 --rc geninfo_all_blocks=1 00:09:13.775 --rc geninfo_unexecuted_blocks=1 00:09:13.775 00:09:13.775 ' 00:09:13.775 23:30:01 nvme_fdp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
00:09:13.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:13.775 --rc genhtml_branch_coverage=1 00:09:13.775 --rc genhtml_function_coverage=1 00:09:13.775 --rc genhtml_legend=1 00:09:13.775 --rc geninfo_all_blocks=1 00:09:13.775 --rc geninfo_unexecuted_blocks=1 00:09:13.775 00:09:13.775 ' 00:09:13.775 23:30:01 nvme_fdp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:13.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:13.775 --rc genhtml_branch_coverage=1 00:09:13.775 --rc genhtml_function_coverage=1 00:09:13.775 --rc genhtml_legend=1 00:09:13.775 --rc geninfo_all_blocks=1 00:09:13.775 --rc geninfo_unexecuted_blocks=1 00:09:13.775 00:09:13.775 ' 00:09:13.775 23:30:01 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:13.775 23:30:01 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:13.775 23:30:01 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:13.775 23:30:01 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:13.775 23:30:01 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:13.775 23:30:01 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:09:13.775 23:30:01 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:13.775 23:30:01 nvme_fdp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:13.775 23:30:01 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:13.775 23:30:01 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.775 23:30:01 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.775 23:30:01 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.775 23:30:01 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:09:13.775 23:30:01 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
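The lt 1.15 2 trace above is scripts/common.sh comparing version strings piecewise: split both on '.', '-' and ':', compare numerically component by component, and treat a missing component as 0, so 1.15 < 2 because 1 < 2 at the first position. A compact sketch of the same idea; cmp_lt is an illustrative name, and unlike the traced cmp_versions it skips the decimal-validation step, so it assumes purely numeric components:

cmp_lt() {
    local -a a b; local i n
    IFS=.-: read -ra a <<< "$1"
    IFS=.-: read -ra b <<< "$2"
    n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for ((i = 0; i < n; i++)); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # first lower component wins
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1   # equal is not less-than
}
cmp_lt 1.15 2 && echo "1.15 < 2"   # matches the lt result traced above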
00:09:13.775 23:30:01 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:09:13.775 23:30:01 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:13.775 23:30:01 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:09:13.775 23:30:01 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:13.775 23:30:01 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:09:13.775 23:30:01 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:13.775 23:30:01 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:13.775 23:30:01 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:13.775 23:30:01 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:09:13.775 23:30:01 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:13.775 23:30:01 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:14.035 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:14.310 Waiting for block devices as requested 00:09:14.310 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:14.310 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:14.310 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:14.578 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:19.846 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:19.846 23:30:07 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:09:19.846 23:30:07 nvme_fdp -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:19.846 23:30:07 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:19.846 23:30:07 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:19.846 23:30:07 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:19.846 23:30:07 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:09:19.846 23:30:07 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:19.846 23:30:07 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:19.846 23:30:07 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:19.846 23:30:07 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:19.846 23:30:07 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:19.846 23:30:07 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:19.846 23:30:07 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:19.846 23:30:07 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:19.846 23:30:07 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:19.846 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.846 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.846 23:30:07 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:19.846 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:19.846 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.846 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.846 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:19.846 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:19.846 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:09:19.846 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.846 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.846 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 
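The ctrls/nvmes/bdfs/ordered_ctrls declarations just re-run above (functions.sh@10-13) are the four structures every scan fills in, and the nvme3 pass earlier shows exactly what lands in them: ctrls maps a controller name to itself, nvmes maps it to the name of its per-namespace associative array, bdfs maps it to the PCI address, and ordered_ctrls indexes it by controller number. Restated as a toy snippet with this run's values:

declare -A ctrls=() nvmes=() bdfs=()
declare -a ordered_ctrls=()

ctrl_dev=nvme3
ctrls["$ctrl_dev"]=nvme3                  # functions.sh@60
nvmes["$ctrl_dev"]=nvme3_ns               # @61: name of the namespace map
bdfs["$ctrl_dev"]=0000:00:13.0            # @62: PCI address of the controller
ordered_ctrls[${ctrl_dev/nvme/}]=nvme3    # @63: slot 3, keeps scan order stable

echo "${bdfs[nvme3]} -> ${nvmes[nvme3]}"  # 0000:00:13.0 -> nvme3_ns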
00:09:19.846 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:09:19.846 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:09:19.846 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.846 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.846 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:09:19.846 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:09:19.846 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:09:19.846 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.846 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:09:19.847 23:30:07 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:09:19.847 23:30:07 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
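
The register values captured above are bit masks. Taking oacs=0x12a from this controller: per the NVMe base specification's OACS bit assignments (stated here from the spec, not from this log, so treat the positions as an assumption), that value advertises Format NVM (bit 1), Namespace Management (bit 3), Directives (bit 5, which the FDP feature under test builds on), and Doorbell Buffer Config (bit 8). A quick shell check under those assumed bit positions:

oacs=0x12a   # value captured in the trace above
for entry in "1:Format NVM" "3:Namespace Management" "5:Directives" "8:Doorbell Buffer Config"; do
    bit=${entry%%:*} name=${entry#*:}
    (( oacs & (1 << bit) )) && echo "OACS bit $bit set: $name"
done
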
00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:19.847 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:09:19.848 23:30:07 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.848 23:30:07 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:09:19.848 23:30:07 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:09:19.848 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 
0x7 ]] 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:09:19.849 23:30:07 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:19.849 
23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:19.849 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:09:19.850 23:30:07 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.850 23:30:07 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme0n1[npwa]="0"' 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:09:19.850 23:30:07 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.850 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.851 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:09:19.851 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:09:19.851 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.851 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.851 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:19.851 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:09:19.851 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:09:19.851 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.851 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.851 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:19.851 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:09:19.851 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:09:19.851 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.851 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.851 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:19.851 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:19.851 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:19.851 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.851 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.851 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:19.851 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:19.851 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:19.851 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.851 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.851 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:19.851 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:19.851 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:19.851 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.851 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.851 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:19.851 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:19.851 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:19.851 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.851 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.851 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:12 rp:0 (in use) ]] 00:09:19.851 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:19.851 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:19.851 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.851 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.851 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:19.851 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:19.851 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:19.851 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.851 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.851 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:19.851 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:19.851 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:19.851 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.851 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.851 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:19.851 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:19.851 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:19.851 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.851 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.851 23:30:07 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:09:19.851 23:30:07 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:09:19.851 23:30:07 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:09:19.851 23:30:07 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:09:19.851 23:30:07 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:09:19.851 23:30:07 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:19.851 23:30:07 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:09:19.851 23:30:07 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:09:19.851 23:30:07 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:09:19.851 23:30:07 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:19.851 23:30:07 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:19.851 23:30:07 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:19.851 23:30:07 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:19.851 23:30:07 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:09:19.851 23:30:07 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:09:19.851 23:30:07 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:09:19.851 23:30:07 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:19.851 23:30:07 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:09:19.851 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.851 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.851 23:30:07 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:09:19.851 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:19.851 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:09:19.851 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.851 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:19.851 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:09:19.851 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:09:19.851 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.851 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.851 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:19.851 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:09:19.851 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:09:19.851 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.851 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.851 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:09:19.851 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:09:19.851 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:09:19.851 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.851 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.851 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:19.851 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:09:19.851 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:09:19.851 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.851 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.851 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:19.851 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:09:19.851 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:09:19.851 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.851 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.851 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:19.851 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:09:19.851 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:09:19.851 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.851 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.851 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:19.851 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:09:19.851 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:09:19.851 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.851 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.851 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.851 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.852 23:30:07 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.852 
23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.852 23:30:07 nvme_fdp -- 
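The wctemp/cctemp values captured here are in kelvins, the unit NVMe uses for composite-temperature thresholds, so 343 and 373 are QEMU's 70 °C warning and 100 °C critical defaults. A one-liner to sanity-check that, using the integer offset convention nvme-cli itself applies:

  k_to_c() { echo "$(($1 - 273))"; }       # NVMe reports temperatures in K
  k_to_c 343                               # -> 70  (warning threshold, °C)
  k_to_c 373                               # -> 100 (critical threshold, °C)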
nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.852 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.853 
23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.853 23:30:07 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:09:19.853 23:30:07 nvme_fdp -- 
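The sqes and cqes words just recorded pack the maximum (upper nibble) and required (lower nibble) queue-entry sizes as powers of two, so 0x66 and 0x44 are the standard 64-byte submission and 16-byte completion entries. For example:

  sqes=0x66 cqes=0x44
  echo "SQE: $((1 << (sqes & 0xf)))..$((1 << (sqes >> 4))) bytes"   # 64..64
  echo "CQE: $((1 << (cqes & 0xf)))..$((1 << (cqes >> 4))) bytes"   # 16..16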
nvme/functions.sh@21 -- # IFS=: 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.853 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.854 23:30:07 
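The oncs word recorded just above (0x15d) advertises which optional NVM commands this QEMU controller accepts. A hedged decode by the Identify Controller bit layout, easy to confirm with nvme-cli's -H output:

  oncs=0x15d
  (( oncs & (1 << 0) )) && echo "Compare"
  (( oncs & (1 << 2) )) && echo "Dataset Management"
  (( oncs & (1 << 3) )) && echo "Write Zeroes"
  (( oncs & (1 << 4) )) && echo "Save/Select in Features"
  (( oncs & (1 << 6) )) && echo "Timestamp"
  (( oncs & (1 << 8) )) && echo "Copy"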
nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme1[ofcs]=0 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 
0x17a17a ]] 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.854 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:09:19.855 23:30:07 nvme_fdp -- 
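flbas 0x7 selects LBA format 7 in its low nibble, and the lbaf table printed a little further down shows that entry as lbads:12 ms:64 (in use), i.e. 4 KiB data blocks with 64 bytes of metadata. With nsze at 0x17a17a blocks, the raw data capacity works out as follows (values taken from the trace, arithmetic illustrative):

  nsze=0x17a17a flbas=0x7
  fmt=$((flbas & 0xf))                  # -> 7, the in-use lbaf entry
  lbads=12                              # from "lbaf7 ... lbads:12" below
  echo $(( nsze * (1 << lbads) ))       # -> 6343335936 bytes of data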
nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- 
# read -r reg val 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1n1[anagrpid]="0"' 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:19.855 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:19.856 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:19.856 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.856 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
reg val 00:09:19.856 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:19.856 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:19.856 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:19.856 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.856 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.856 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:19.856 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:19.856 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:19.856 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.856 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.856 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:19.856 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:19.856 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:19.856 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.856 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.856 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:19.856 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:19.856 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:19.856 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.856 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.856 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:19.856 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:19.856 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:19.856 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.856 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.856 23:30:07 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:09:19.856 23:30:07 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:09:19.856 23:30:07 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:09:19.856 23:30:07 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:09:19.856 23:30:07 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:09:19.856 23:30:07 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:19.856 23:30:07 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:09:19.856 23:30:07 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:09:19.856 23:30:07 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:09:19.856 23:30:07 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:19.856 23:30:07 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:19.856 23:30:07 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:19.856 23:30:07 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:19.856 23:30:07 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:09:19.856 23:30:07 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:09:19.856 23:30:07 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:09:19.856 
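At this point nvme1 is fully catalogued: ctrls/nvmes/bdfs tie the controller to its namespace array and PCI address (0000:00:10.0), ordered_ctrls records scan order, and the loop advances to nvme2 at 0000:00:12.0 after pci_can_use approves it. Condensed from the functions.sh line numbers visible in the trace, the outer scan amounts to something like this sketch (not verbatim upstream code):

  declare -A ctrls nvmes bdfs
  for ctrl in /sys/class/nvme/nvme*; do
      [[ -e $ctrl ]] || continue
      pci=$(basename "$(readlink -f "$ctrl/device")")   # e.g. 0000:00:12.0
      pci_can_use "$pci" || continue                    # honors block/allow lists
      dev=${ctrl##*/}                                   # nvme2
      nvme_get "$dev" id-ctrl "/dev/$dev"               # fills the $dev array
      ctrls[$dev]=$dev; nvmes[$dev]=${dev}_ns; bdfs[$dev]=$pci
  done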
23:30:07 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:19.856 23:30:07 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:09:19.856 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.856 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.856 23:30:07 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:09:19.856 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:19.856 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.856 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.856 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:19.856 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:09:19.856 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:09:19.856 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.856 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.856 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:19.856 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:09:19.856 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:09:19.856 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.856 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.856 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:09:19.856 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:09:19.856 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:09:19.856 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.856 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.856 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:19.856 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:09:19.856 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:09:19.856 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.856 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.856 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:19.856 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:09:19.856 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:09:19.856 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.856 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.856 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:19.856 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:09:19.856 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:09:19.856 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.856 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.856 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:19.856 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:09:19.856 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:09:19.856 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.856 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.856 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.856 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:09:19.856 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:09:19.856 23:30:07 nvme_fdp 
-- nvme/functions.sh@21 -- # IFS=: 00:09:19.856 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.856 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:19.856 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:09:19.856 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:09:19.856 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.856 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.856 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.856 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:09:19.856 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:09:19.856 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.856 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.856 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:19.856 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:09:19.856 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:09:19.856 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.856 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.856 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.856 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:09:19.856 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:09:19.856 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.856 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.856 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.856 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:09:19.856 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:09:19.856 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.856 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.856 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:19.856 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:09:19.856 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:09:19.856 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.856 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.856 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:19.856 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:09:19.856 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:09:19.856 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.856 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.856 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.857 23:30:07 nvme_fdp -- 
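The ver word set here encodes the NVMe specification revision as major.minor.tertiary bytes, so 0x10400 is NVMe 1.4.0:

  ver=0x10400
  printf 'NVMe %d.%d.%d\n' $((ver >> 16)) $(((ver >> 8) & 0xff)) $((ver & 0xff))
  # -> NVMe 1.4.0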
nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 
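oacs 0x12a, the same value nvme1 reported, is the optional admin-command mask; read against the Identify Controller bit layout (a hedged decode, verifiable with nvme id-ctrl -H), it covers the commands QEMU's model implements:

  oacs=0x12a
  (( oacs & (1 << 1) )) && echo "Format NVM"
  (( oacs & (1 << 3) )) && echo "Namespace Management"
  (( oacs & (1 << 5) )) && echo "Directives"
  (( oacs & (1 << 8) )) && echo "Doorbell Buffer Config"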
'nvme2[aerl]="3"' 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.857 23:30:07 nvme_fdp 
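frmw 0x3 and lpa 0x7, captured just above, describe firmware-update and log-page support. A hedged reading of the spec's bit fields, again easy to cross-check with nvme-cli's human-readable mode:

  frmw=0x3 lpa=0x7
  echo "fw slots: $(((frmw >> 1) & 0x7)), slot 1 read-only: $((frmw & 1))"
  (( lpa & 1 ))        && echo "per-namespace SMART/Health log"
  (( lpa & (1 << 1) )) && echo "Commands Supported and Effects log"
  (( lpa & (1 << 2) )) && echo "extended Get Log Page data"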
-- nvme/functions.sh@21 -- # read -r reg val 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 
00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.857 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.858 23:30:07 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:19.858 23:30:07 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:09:19.858 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.859 23:30:07 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:09:19.859 23:30:07 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:09:19.859 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 
00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.860 23:30:07 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 
' 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:09:19.860 23:30:07 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:09:19.861 23:30:07 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:09:19.861 23:30:07 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:09:19.861 23:30:07 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:19.861 23:30:07 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:09:19.861 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.861 23:30:07 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:19.861 23:30:07 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:09:19.861 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:19.861 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.861 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.861 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:19.861 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:09:19.861 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:09:19.861 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.861 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.861 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:19.861 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:09:19.861 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:09:19.861 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.861 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.861 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:19.861 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:09:19.861 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:09:19.861 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.861 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.861 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:19.861 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:09:19.861 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:09:19.861 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.861 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.861 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:19.861 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:09:19.861 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:09:19.861 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.861 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.861 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:19.861 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:09:19.861 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:09:19.861 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.861 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.861 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:19.861 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:09:19.861 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:09:19.861 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.861 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.861 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:19.861 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:09:19.861 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:09:19.861 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.861 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.861 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.861 23:30:07 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:09:19.861 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:09:19.861 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.861 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.861 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.861 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:09:19.861 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:09:19.861 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.861 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.861 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.861 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:09:19.861 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:09:19.861 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.861 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.861 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.861 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:09:19.861 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:09:19.861 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.861 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.861 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:19.861 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:09:19.861 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:09:19.861 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.861 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.861 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.861 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:09:19.861 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:09:19.861 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.861 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.861 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.861 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:09:19.861 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:09:19.861 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.861 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.861 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.861 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:09:19.861 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:09:19.861 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.861 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.861 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.861 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:09:19.861 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:09:19.861 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.861 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.861 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.861 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:09:19.861 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:09:19.861 23:30:07 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:19.861 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.861 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.861 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:09:19.861 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:09:19.861 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.861 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.861 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.861 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:09:19.861 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:09:19.861 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.861 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.861 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.861 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:09:19.861 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:09:19.861 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.861 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.861 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.861 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:09:19.861 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:09:19.861 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.861 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.861 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.861 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:09:19.861 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:09:19.861 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.861 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.861 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.861 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:09:19.861 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:09:19.861 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.861 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.861 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.861 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:09:19.861 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:09:19.861 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.861 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ 
-n 128 ]] 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:9 rp:0 ]] 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@54 -- # for 
ns in "$ctrl/${ctrl##*/}n"* 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[mc]=0x3 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.862 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:09:19.863 
23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[nguid]=00000000000000000000000000000000 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:19.863 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:19.864 23:30:07 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:09:19.864 23:30:07 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:19.864 23:30:07 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:19.864 23:30:07 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:19.864 23:30:07 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:09:19.864 23:30:07 nvme_fdp -- 
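At this point functions.sh has finished cataloguing namespace nvme2n3 and moved on to controller nvme3. The pattern that repeats at @17-@23 throughout this trace is worth unpacking once: nvme_get runs `nvme id-ns` or `nvme id-ctrl`, splits each output line on ':' with `IFS=: read -r reg val`, and assigns every non-empty register into a global associative array named after the device. A condensed sketch of the same idiom follows; the helper name and sample input are illustrative, not from functions.sh, and a nameref stands in for the eval/`local -gA` combination the script actually uses:

    #!/usr/bin/env bash
    # Sketch of the nvme_get pattern: parse "reg : val" lines into a named
    # associative array (bash 4.3+ for namerefs).
    parse_into() {
        local -n _map=$1            # nameref to the caller's associative array
        local reg val
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}              # register names are padded, e.g. "nsze  "
            val=${val#"${val%%[![:space:]]*}"}    # trim leading spaces only; values may contain spaces
            [[ -n $reg && -n $val ]] && _map[$reg]=$val
        done
    }
    declare -A ns=()
    # Process substitution keeps parse_into out of a subshell, so the array persists.
    parse_into ns < <(printf '%s\n' 'nsze  : 0x100000' 'flbas : 0x4' 'nlbaf : 7')
    echo "${ns[nsze]} ${ns[flbas]} ${ns[nlbaf]}"   # -> 0x100000 0x4 7

That explains why the trace alternates between `[[ -n ... ]]`, an eval of `nvmeX[reg]="val"`, and a re-arm of IFS before the next read.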
nvme/functions.sh@21 -- # IFS=: 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.864 23:30:07 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:19.864 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.865 
23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # 
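Among the fields just captured, nvme3[oacs]=0x12a is a bitmask of optional admin commands. A quick decode in shell arithmetic; the bit meanings below are my reading of the NVMe base spec's OACS definition (verify against your spec revision), not something the script itself asserts:

    oacs=0x12a
    # 0x12a = bits 1, 3, 5, 8 set. Per the NVMe base spec OACS layout:
    # bit 1: Format NVM, bit 3: Namespace Management,
    # bit 5: Directives, bit 8: Doorbell Buffer Config.
    for bit in 1 3 5 8; do
        (( oacs & 1 << bit )) && echo "OACS bit $bit supported"
    done

All four bits being set is consistent with a QEMU emulated controller.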
IFS=: 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.865 23:30:07 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:19.865 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.866 
23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.866 23:30:07 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@21 
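Two of the values parsed just above, nvme3[sqes]=0x66 and nvme3[cqes]=0x44, pack a pair of log2 entry sizes per the Identify Controller layout: the low nibble is the required (minimum) entry size, the high nibble the maximum. Decoded with plain shell arithmetic:

    sqes=0x66 cqes=0x44
    echo "SQE min $(( 1 << (sqes & 0xf) )) / max $(( 1 << (sqes >> 4) )) bytes"   # 64 / 64
    echo "CQE min $(( 1 << (cqes & 0xf) )) / max $(( 1 << (cqes >> 4) )) bytes"   # 16 / 16

64-byte submission entries and 16-byte completion entries are the standard NVMe sizes, so this controller supports exactly the baseline and nothing larger.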
-- # IFS=: 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.866 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.867 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.867 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:19.867 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:19.867 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.867 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.867 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.867 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:19.867 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:19.867 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.867 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.867 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 
00:09:19.867 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:19.867 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:19.867 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.867 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.867 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.867 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:19.867 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:19.867 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.867 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.867 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.867 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:19.867 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:19.867 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.867 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.867 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.867 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:19.867 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:19.867 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.867 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.867 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.867 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:19.867 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:19.867 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.867 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.867 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.867 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:19.867 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:19.867 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.867 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.867 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:19.867 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:19.867 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:09:19.867 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.867 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.867 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:19.867 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:19.867 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:19.867 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.867 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.867 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:19.867 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:19.867 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:19.867 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.867 23:30:07 nvme_fdp -- 
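Note the mix in this last stretch: subnqn identifies the QEMU FDP subsystem, while ioccsz, iorcsz, icdoff and msdbd all read 0. Those are NVMe-over-Fabrics fields, so zeros are expected on a PCIe controller. The same NQN is also exposed through sysfs, which makes for a quick cross-check; this is a sketch using the standard kernel attribute path, not a command from the trace:

    cat /sys/class/nvme/nvme3/subsysnqn
    # expected: nqn.2019-08.org.qemu:fdp-subsys3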
nvme/functions.sh@21 -- # read -r reg val 00:09:19.867 23:30:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:19.867 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:19.867 23:30:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:19.867 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:19.867 23:30:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:19.867 23:30:07 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:19.867 23:30:07 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:19.867 23:30:07 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:19.867 23:30:07 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:19.867 23:30:07 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:19.867 23:30:07 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:19.867 23:30:07 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:09:19.867 23:30:07 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:09:19.867 23:30:07 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:19.867 23:30:07 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:09:19.867 23:30:07 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:09:19.867 23:30:07 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:09:19.867 23:30:07 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:09:19.867 23:30:07 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:09:19.867 23:30:07 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:19.867 23:30:07 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:09:19.867 23:30:07 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:09:19.867 23:30:07 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:09:19.867 23:30:07 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:09:19.867 23:30:07 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:09:19.867 23:30:07 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:09:19.867 23:30:07 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:19.867 23:30:07 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:19.867 23:30:07 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:19.867 23:30:07 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:19.867 23:30:07 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:19.867 23:30:07 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:19.867 23:30:07 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:19.867 23:30:07 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:09:19.867 23:30:07 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:09:19.867 23:30:07 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:09:19.867 23:30:07 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:09:19.867 23:30:07 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:09:19.867 23:30:07 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:09:19.867 23:30:07 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:09:19.867 23:30:07 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:09:19.867 23:30:07 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 
00:09:19.867 23:30:07 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:19.867 23:30:07 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:19.867 23:30:07 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:19.867 23:30:07 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:19.867 23:30:07 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:09:19.867 23:30:07 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:09:19.867 23:30:07 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:09:19.867 23:30:07 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:09:19.867 23:30:07 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:09:19.867 23:30:07 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:09:19.867 23:30:07 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:09:19.867 23:30:07 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:09:19.867 23:30:07 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:09:19.867 23:30:07 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:09:19.867 23:30:07 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:09:19.867 23:30:07 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:19.867 23:30:07 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:09:19.867 23:30:07 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:19.867 23:30:07 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:09:19.867 23:30:07 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:09:19.867 23:30:07 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:09:19.867 23:30:07 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:09:19.867 23:30:07 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:09:19.867 23:30:07 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:09:19.867 23:30:07 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:09:19.867 23:30:07 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:09:19.867 23:30:07 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:19.867 23:30:07 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:19.867 23:30:07 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:19.867 23:30:07 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:19.867 23:30:07 nvme_fdp -- nvme/functions.sh@207 -- # (( 1 > 0 )) 00:09:19.867 23:30:07 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:09:19.867 23:30:07 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:09:19.867 23:30:07 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:09:19.867 23:30:07 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:09:19.867 23:30:07 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:20.125 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:20.690 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:20.690 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:20.690 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:20.690 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:20.690 23:30:08 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:09:20.690 23:30:08 nvme_fdp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:20.690 23:30:08 
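The selection logic that just ran is compact: get_ctrls_with_feature loops over every catalogued controller, and ctrl_has_fdp (@176-@180) tests CTRATT bit 19, the Flexible Data Placement capability. Only nvme3's ctratt=0x88010 has that bit set; the controllers reporting 0x8000 fail the test, so nvme3 at 0000:00:13.0 becomes the target and setup.sh rebinds the devices for userspace access. The whole check reduces to one arithmetic expression (the loop below just replays the trace's values):

    # CTRATT bit 19 = FDP supported; 1 << 19 == 0x80000.
    # Shift binds tighter than &, so this reads ctratt & (1 << 19), as in functions.sh.
    for ctratt in 0x8000 0x88010; do
        if (( ctratt & 1 << 19 )); then
            echo "$ctratt: FDP capable"
        else
            echo "$ctratt: no FDP"
        fi
    done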
nvme_fdp -- common/autotest_common.sh@1107 -- # xtrace_disable
00:09:20.690 23:30:08 nvme_fdp -- common/autotest_common.sh@10 -- # set +x
00:09:20.690 ************************************
00:09:20.690 START TEST nvme_flexible_data_placement
00:09:20.690 ************************************
00:09:20.690 23:30:08 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0'
00:09:20.949 Initializing NVMe Controllers
00:09:20.949 Attaching to 0000:00:13.0
00:09:20.949 Controller supports FDP Attached to 0000:00:13.0
00:09:20.949 Namespace ID: 1 Endurance Group ID: 1
00:09:20.949 Initialization complete.
00:09:20.949
00:09:20.949 ==================================
00:09:20.949 == FDP tests for Namespace: #01 ==
00:09:20.949 ==================================
00:09:20.949
00:09:20.949 Get Feature: FDP:
00:09:20.949 =================
00:09:20.949 Enabled: Yes
00:09:20.949 FDP configuration Index: 0
00:09:20.949
00:09:20.949 FDP configurations log page
00:09:20.949 ===========================
00:09:20.949 Number of FDP configurations: 1
00:09:20.949 Version: 0
00:09:20.949 Size: 112
00:09:20.949 FDP Configuration Descriptor: 0
00:09:20.949 Descriptor Size: 96
00:09:20.949 Reclaim Group Identifier format: 2
00:09:20.949 FDP Volatile Write Cache: Not Present
00:09:20.949 FDP Configuration: Valid
00:09:20.949 Vendor Specific Size: 0
00:09:20.949 Number of Reclaim Groups: 2
00:09:20.949 Number of Reclaim Unit Handles: 8
00:09:20.949 Max Placement Identifiers: 128
00:09:20.949 Number of Namespaces Supported: 256
00:09:20.949 Reclaim Unit Nominal Size: 6000000 bytes
00:09:20.949 Estimated Reclaim Unit Time Limit: Not Reported
00:09:20.949 RUH Desc #000: RUH Type: Initially Isolated
00:09:20.949 RUH Desc #001: RUH Type: Initially Isolated
00:09:20.949 RUH Desc #002: RUH Type: Initially Isolated
00:09:20.949 RUH Desc #003: RUH Type: Initially Isolated
00:09:20.949 RUH Desc #004: RUH Type: Initially Isolated
00:09:20.949 RUH Desc #005: RUH Type: Initially Isolated
00:09:20.949 RUH Desc #006: RUH Type: Initially Isolated
00:09:20.949 RUH Desc #007: RUH Type: Initially Isolated
00:09:20.949
00:09:20.949 FDP reclaim unit handle usage log page
00:09:20.949 ======================================
00:09:20.949 Number of Reclaim Unit Handles: 8
00:09:20.949 RUH Usage Desc #000: RUH Attributes: Controller Specified
00:09:20.949 RUH Usage Desc #001: RUH Attributes: Unused
00:09:20.949 RUH Usage Desc #002: RUH Attributes: Unused
00:09:20.949 RUH Usage Desc #003: RUH Attributes: Unused
00:09:20.949 RUH Usage Desc #004: RUH Attributes: Unused
00:09:20.949 RUH Usage Desc #005: RUH Attributes: Unused
00:09:20.949 RUH Usage Desc #006: RUH Attributes: Unused
00:09:20.949 RUH Usage Desc #007: RUH Attributes: Unused
00:09:20.949
00:09:20.949 FDP statistics log page
00:09:20.949 =======================
00:09:20.949 Host bytes with metadata written: 1196347392
00:09:20.949 Media bytes with metadata written: 1196609536
00:09:20.949 Media bytes erased: 0
00:09:20.949
00:09:20.949 FDP Reclaim unit handle status
00:09:20.949 ==============================
00:09:20.949 Number of RUHS descriptors: 2
00:09:20.949 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000000b13
00:09:20.949 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000
00:09:20.949
00:09:20.949 FDP write on placement id: 0 success
00:09:20.949
00:09:20.949 Set Feature: Enabling FDP events on Placement handle: #0 Success
00:09:20.949
00:09:20.949 IO mgmt send: RUH update for Placement ID: #0 Success
00:09:20.949
00:09:20.949 Get Feature: FDP Events for Placement handle: #0
00:09:20.949 ========================
00:09:20.949 Number of FDP Events: 6
00:09:20.949 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes
00:09:20.949 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes
00:09:20.949 FDP Event: #2 Type: Ctrlr Reset Modified RUHs Enabled: Yes
00:09:20.949 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes
00:09:20.949 FDP Event: #4 Type: Media Reallocated Enabled: No
00:09:20.949 FDP Event: #5 Type: Implicitly modified RUH Enabled: No
00:09:20.949
00:09:20.949 FDP events log page
00:09:20.949 ===================
00:09:20.949 Number of FDP events: 1
00:09:20.949 FDP Event #0:
00:09:20.949 Event Type: RU Not Written to Capacity
00:09:20.949 Placement Identifier: Valid
00:09:20.949 NSID: Valid
00:09:20.949 Location: Valid
00:09:20.949 Placement Identifier: 0
00:09:20.949 Event Timestamp: 6
00:09:20.949 Namespace Identifier: 1
00:09:20.949 Reclaim Group Identifier: 0
00:09:20.949 Reclaim Unit Handle Identifier: 0
00:09:20.949
00:09:20.949 FDP test passed
00:09:20.949
00:09:20.949 real 0m0.225s
00:09:20.949 user 0m0.070s
00:09:20.949 sys 0m0.054s
00:09:20.949 23:30:09 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1126 -- # xtrace_disable
00:09:20.949 ************************************
00:09:20.949 END TEST nvme_flexible_data_placement
00:09:20.949 ************************************
00:09:20.949 23:30:09 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x
00:09:20.949
00:09:20.949 real 0m7.363s
00:09:20.949 user 0m0.956s
00:09:20.949 sys 0m1.362s
00:09:20.949 23:30:09 nvme_fdp -- common/autotest_common.sh@1126 -- # xtrace_disable
00:09:20.949 ************************************
00:09:20.949 END TEST nvme_fdp
00:09:20.949 ************************************
00:09:20.949 23:30:09 nvme_fdp -- common/autotest_common.sh@10 -- # set +x
00:09:20.949 23:30:09 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]]
00:09:20.949 23:30:09 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh
00:09:20.949 23:30:09 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:09:20.949 23:30:09 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:09:20.949 23:30:09 -- common/autotest_common.sh@10 -- # set +x
00:09:20.950 ************************************
00:09:20.950 START TEST nvme_rpc
00:09:20.950 ************************************
00:09:20.950 23:30:09 nvme_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh
00:09:21.209 * Looking for test storage...
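Before following nvme_rpc further, two quick decodes of the FDP report above. The RUAMW reading assumes the field counts the writable units remaining in each reclaim unit, per the FDP technical proposal; the arithmetic itself uses only the log's own numbers:

    # RUAMW from the two RUHS descriptors, hex -> decimal:
    printf '%d\n' 0x0b13    # 2835  -> RUH 0 has been partially written by the test
    printf '%d\n' 0x6000    # 24576 -> RUH 1 is still untouched
    # Host vs media bytes written, from the FDP statistics page:
    echo 'scale=6; 1196609536 / 1196347392' | bc    # ~1.000219, i.e. negligible write amplification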
00:09:21.209 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:21.209 23:30:09 nvme_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:21.209 23:30:09 nvme_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:09:21.209 23:30:09 nvme_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:21.209 23:30:09 nvme_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:21.209 23:30:09 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:21.209 23:30:09 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:21.209 23:30:09 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:21.209 23:30:09 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:09:21.209 23:30:09 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:09:21.209 23:30:09 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:09:21.209 23:30:09 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:09:21.209 23:30:09 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:09:21.209 23:30:09 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:09:21.209 23:30:09 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:09:21.209 23:30:09 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:21.209 23:30:09 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:09:21.209 23:30:09 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:09:21.209 23:30:09 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:21.209 23:30:09 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:21.209 23:30:09 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:09:21.209 23:30:09 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:09:21.209 23:30:09 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:21.209 23:30:09 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:09:21.209 23:30:09 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:21.209 23:30:09 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:09:21.209 23:30:09 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:09:21.209 23:30:09 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:21.209 23:30:09 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:09:21.209 23:30:09 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:21.209 23:30:09 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:21.209 23:30:09 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:21.209 23:30:09 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:09:21.209 23:30:09 nvme_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:21.209 23:30:09 nvme_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:21.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.209 --rc genhtml_branch_coverage=1 00:09:21.209 --rc genhtml_function_coverage=1 00:09:21.209 --rc genhtml_legend=1 00:09:21.209 --rc geninfo_all_blocks=1 00:09:21.209 --rc geninfo_unexecuted_blocks=1 00:09:21.209 00:09:21.209 ' 00:09:21.209 23:30:09 nvme_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:21.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.209 --rc genhtml_branch_coverage=1 00:09:21.209 --rc genhtml_function_coverage=1 00:09:21.209 --rc genhtml_legend=1 00:09:21.209 --rc geninfo_all_blocks=1 00:09:21.209 --rc geninfo_unexecuted_blocks=1 00:09:21.209 00:09:21.209 ' 00:09:21.209 23:30:09 nvme_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
00:09:21.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.209 --rc genhtml_branch_coverage=1 00:09:21.209 --rc genhtml_function_coverage=1 00:09:21.209 --rc genhtml_legend=1 00:09:21.209 --rc geninfo_all_blocks=1 00:09:21.209 --rc geninfo_unexecuted_blocks=1 00:09:21.209 00:09:21.209 ' 00:09:21.209 23:30:09 nvme_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:21.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.209 --rc genhtml_branch_coverage=1 00:09:21.209 --rc genhtml_function_coverage=1 00:09:21.209 --rc genhtml_legend=1 00:09:21.209 --rc geninfo_all_blocks=1 00:09:21.209 --rc geninfo_unexecuted_blocks=1 00:09:21.209 00:09:21.209 ' 00:09:21.209 23:30:09 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:21.209 23:30:09 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:09:21.209 23:30:09 nvme_rpc -- common/autotest_common.sh@1507 -- # bdfs=() 00:09:21.209 23:30:09 nvme_rpc -- common/autotest_common.sh@1507 -- # local bdfs 00:09:21.209 23:30:09 nvme_rpc -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:09:21.209 23:30:09 nvme_rpc -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:09:21.209 23:30:09 nvme_rpc -- common/autotest_common.sh@1496 -- # bdfs=() 00:09:21.209 23:30:09 nvme_rpc -- common/autotest_common.sh@1496 -- # local bdfs 00:09:21.209 23:30:09 nvme_rpc -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:21.209 23:30:09 nvme_rpc -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:21.209 23:30:09 nvme_rpc -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:09:21.209 23:30:09 nvme_rpc -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:09:21.209 23:30:09 nvme_rpc -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:21.209 23:30:09 nvme_rpc -- common/autotest_common.sh@1510 -- # echo 0000:00:10.0 00:09:21.209 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:21.209 23:30:09 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:09:21.209 23:30:09 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=66225 00:09:21.209 23:30:09 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:21.209 23:30:09 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:09:21.209 23:30:09 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 66225 00:09:21.209 23:30:09 nvme_rpc -- common/autotest_common.sh@831 -- # '[' -z 66225 ']' 00:09:21.209 23:30:09 nvme_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:21.209 23:30:09 nvme_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:21.209 23:30:09 nvme_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:21.209 23:30:09 nvme_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:21.209 23:30:09 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:21.468 [2024-09-28 23:30:09.380900] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
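The nvme_rpc test getting under way here is a negative-path check: it attaches the first controller as bdev Nvme0, asks bdev_nvme_apply_firmware to load a file that does not exist, and expects the -32603 / 'open file failed.' JSON-RPC error before detaching. Condensed from the rpc.py calls traced below:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0
    $rpc bdev_nvme_apply_firmware non_existing_file Nvme0n1   # must fail: "open file failed."
    $rpc bdev_nvme_detach_controller Nvme0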
00:09:21.468 [2024-09-28 23:30:09.381062] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66225 ] 00:09:21.468 [2024-09-28 23:30:09.536063] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:21.727 [2024-09-28 23:30:09.721878] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:21.727 [2024-09-28 23:30:09.722026] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:22.293 23:30:10 nvme_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:22.293 23:30:10 nvme_rpc -- common/autotest_common.sh@864 -- # return 0 00:09:22.293 23:30:10 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:09:22.551 Nvme0n1 00:09:22.551 23:30:10 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:09:22.551 23:30:10 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:09:22.810 request: 00:09:22.810 { 00:09:22.810 "bdev_name": "Nvme0n1", 00:09:22.810 "filename": "non_existing_file", 00:09:22.810 "method": "bdev_nvme_apply_firmware", 00:09:22.810 "req_id": 1 00:09:22.810 } 00:09:22.810 Got JSON-RPC error response 00:09:22.810 response: 00:09:22.810 { 00:09:22.810 "code": -32603, 00:09:22.810 "message": "open file failed." 00:09:22.810 } 00:09:22.810 23:30:10 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:09:22.810 23:30:10 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:09:22.810 23:30:10 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:09:22.810 23:30:10 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:22.810 23:30:10 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 66225 00:09:22.810 23:30:10 nvme_rpc -- common/autotest_common.sh@950 -- # '[' -z 66225 ']' 00:09:22.810 23:30:10 nvme_rpc -- common/autotest_common.sh@954 -- # kill -0 66225 00:09:23.068 23:30:10 nvme_rpc -- common/autotest_common.sh@955 -- # uname 00:09:23.068 23:30:10 nvme_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:23.068 23:30:10 nvme_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 66225 00:09:23.068 killing process with pid 66225 00:09:23.068 23:30:11 nvme_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:23.068 23:30:11 nvme_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:23.068 23:30:11 nvme_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 66225' 00:09:23.068 23:30:11 nvme_rpc -- common/autotest_common.sh@969 -- # kill 66225 00:09:23.068 23:30:11 nvme_rpc -- common/autotest_common.sh@974 -- # wait 66225 00:09:24.441 ************************************ 00:09:24.441 END TEST nvme_rpc 00:09:24.441 ************************************ 00:09:24.441 00:09:24.441 real 0m3.436s 00:09:24.441 user 0m6.373s 00:09:24.441 sys 0m0.522s 00:09:24.441 23:30:12 nvme_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:24.441 23:30:12 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:24.441 23:30:12 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:24.441 23:30:12 -- common/autotest_common.sh@1101 -- # '[' 2 -le 
1 ']' 00:09:24.441 23:30:12 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:24.442 23:30:12 -- common/autotest_common.sh@10 -- # set +x 00:09:24.442 ************************************ 00:09:24.442 START TEST nvme_rpc_timeouts 00:09:24.442 ************************************ 00:09:24.442 23:30:12 nvme_rpc_timeouts -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:24.700 * Looking for test storage... 00:09:24.700 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:24.700 23:30:12 nvme_rpc_timeouts -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:24.700 23:30:12 nvme_rpc_timeouts -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:24.700 23:30:12 nvme_rpc_timeouts -- common/autotest_common.sh@1681 -- # lcov --version 00:09:24.700 23:30:12 nvme_rpc_timeouts -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:24.700 23:30:12 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:24.700 23:30:12 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:24.700 23:30:12 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:24.700 23:30:12 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:09:24.700 23:30:12 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:09:24.700 23:30:12 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:09:24.700 23:30:12 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:09:24.700 23:30:12 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:09:24.700 23:30:12 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:09:24.700 23:30:12 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:09:24.700 23:30:12 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:24.700 23:30:12 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:09:24.700 23:30:12 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:09:24.700 23:30:12 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:24.700 23:30:12 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:24.700 23:30:12 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:09:24.700 23:30:12 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:09:24.700 23:30:12 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:24.700 23:30:12 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:09:24.700 23:30:12 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:09:24.700 23:30:12 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:09:24.700 23:30:12 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:09:24.700 23:30:12 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:24.700 23:30:12 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:09:24.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:24.700 23:30:12 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:09:24.700 23:30:12 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:24.700 23:30:12 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:24.700 23:30:12 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:09:24.700 23:30:12 nvme_rpc_timeouts -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:24.700 23:30:12 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:24.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.700 --rc genhtml_branch_coverage=1 00:09:24.700 --rc genhtml_function_coverage=1 00:09:24.700 --rc genhtml_legend=1 00:09:24.700 --rc geninfo_all_blocks=1 00:09:24.700 --rc geninfo_unexecuted_blocks=1 00:09:24.700 00:09:24.700 ' 00:09:24.700 23:30:12 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:24.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.700 --rc genhtml_branch_coverage=1 00:09:24.700 --rc genhtml_function_coverage=1 00:09:24.700 --rc genhtml_legend=1 00:09:24.700 --rc geninfo_all_blocks=1 00:09:24.700 --rc geninfo_unexecuted_blocks=1 00:09:24.700 00:09:24.700 ' 00:09:24.700 23:30:12 nvme_rpc_timeouts -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:24.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.701 --rc genhtml_branch_coverage=1 00:09:24.701 --rc genhtml_function_coverage=1 00:09:24.701 --rc genhtml_legend=1 00:09:24.701 --rc geninfo_all_blocks=1 00:09:24.701 --rc geninfo_unexecuted_blocks=1 00:09:24.701 00:09:24.701 ' 00:09:24.701 23:30:12 nvme_rpc_timeouts -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:24.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.701 --rc genhtml_branch_coverage=1 00:09:24.701 --rc genhtml_function_coverage=1 00:09:24.701 --rc genhtml_legend=1 00:09:24.701 --rc geninfo_all_blocks=1 00:09:24.701 --rc geninfo_unexecuted_blocks=1 00:09:24.701 00:09:24.701 ' 00:09:24.701 23:30:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:24.701 23:30:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_66290 00:09:24.701 23:30:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_66290 00:09:24.701 23:30:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=66322 00:09:24.701 23:30:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:09:24.701 23:30:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 66322 00:09:24.701 23:30:12 nvme_rpc_timeouts -- common/autotest_common.sh@831 -- # '[' -z 66322 ']' 00:09:24.701 23:30:12 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:24.701 23:30:12 nvme_rpc_timeouts -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:24.701 23:30:12 nvme_rpc_timeouts -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:24.701 23:30:12 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:24.701 23:30:12 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:24.701 23:30:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:24.701 [2024-09-28 23:30:12.790659] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:09:24.701 [2024-09-28 23:30:12.790937] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66322 ] 00:09:24.959 [2024-09-28 23:30:12.940866] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:24.959 [2024-09-28 23:30:13.123232] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:24.959 [2024-09-28 23:30:13.123304] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:25.891 23:30:13 nvme_rpc_timeouts -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:25.891 23:30:13 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # return 0 00:09:25.891 Checking default timeout settings: 00:09:25.891 23:30:13 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:09:25.891 23:30:13 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:25.891 Making settings changes with rpc: 00:09:25.891 23:30:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:09:25.891 23:30:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:09:26.151 Check default vs. modified settings: 00:09:26.151 23:30:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:09:26.151 23:30:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:26.410 23:30:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:09:26.410 23:30:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:26.410 23:30:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_66290 00:09:26.410 23:30:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:26.410 23:30:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:26.668 23:30:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:09:26.668 23:30:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_66290 00:09:26.668 23:30:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:26.668 23:30:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:26.668 Setting action_on_timeout is changed as expected. 
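The sequence above snapshots the default bdev_nvme options with save_config, applies non-default timeouts, and snapshots again; condensed from the trace (the /tmp filenames embed the test script's PID, 66290):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc save_config > /tmp/settings_default_66290
    $rpc bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort
    $rpc save_config > /tmp/settings_modified_66290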
00:09:26.668 23:30:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:09:26.668 23:30:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:09:26.668 23:30:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:09:26.668 23:30:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:26.668 23:30:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_66290 00:09:26.668 23:30:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:26.668 23:30:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:26.668 23:30:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:26.668 23:30:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_66290 00:09:26.668 23:30:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:26.668 23:30:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:26.668 Setting timeout_us is changed as expected. 00:09:26.668 23:30:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:09:26.668 23:30:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:09:26.668 23:30:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:09:26.668 23:30:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:26.668 23:30:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_66290 00:09:26.668 23:30:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:26.668 23:30:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:26.668 23:30:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:26.668 23:30:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:26.668 23:30:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_66290 00:09:26.668 23:30:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:26.668 Setting timeout_admin_us is changed as expected. 00:09:26.668 23:30:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:09:26.668 23:30:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:09:26.668 23:30:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 
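Each grep/awk/sed triplet above pulls one setting out of the two save_config snapshots and strips it to a bare token before comparing. A condensed form of the traced loop (simplified to a plain before-vs-after inequality; the script itself checks each modified value against the exact expected abort/12000000/24000000 settings):

    for setting in action_on_timeout timeout_us timeout_admin_us; do
      before=$(grep "$setting" /tmp/settings_default_66290 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
      after=$(grep "$setting" /tmp/settings_modified_66290 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
      [[ $before != "$after" ]] && echo "Setting $setting is changed as expected."
    done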
00:09:26.669 23:30:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:09:26.669 23:30:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_66290 /tmp/settings_modified_66290 00:09:26.669 23:30:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 66322 00:09:26.669 23:30:14 nvme_rpc_timeouts -- common/autotest_common.sh@950 -- # '[' -z 66322 ']' 00:09:26.669 23:30:14 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # kill -0 66322 00:09:26.669 23:30:14 nvme_rpc_timeouts -- common/autotest_common.sh@955 -- # uname 00:09:26.669 23:30:14 nvme_rpc_timeouts -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:26.669 23:30:14 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 66322 00:09:26.669 killing process with pid 66322 00:09:26.669 23:30:14 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:26.669 23:30:14 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:26.669 23:30:14 nvme_rpc_timeouts -- common/autotest_common.sh@968 -- # echo 'killing process with pid 66322' 00:09:26.669 23:30:14 nvme_rpc_timeouts -- common/autotest_common.sh@969 -- # kill 66322 00:09:26.669 23:30:14 nvme_rpc_timeouts -- common/autotest_common.sh@974 -- # wait 66322 00:09:28.042 RPC TIMEOUT SETTING TEST PASSED. 00:09:28.042 23:30:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 00:09:28.042 ************************************ 00:09:28.042 END TEST nvme_rpc_timeouts 00:09:28.042 ************************************ 00:09:28.042 00:09:28.042 real 0m3.458s 00:09:28.042 user 0m6.599s 00:09:28.042 sys 0m0.489s 00:09:28.042 23:30:16 nvme_rpc_timeouts -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:28.042 23:30:16 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:28.042 23:30:16 -- spdk/autotest.sh@239 -- # uname -s 00:09:28.042 23:30:16 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:09:28.042 23:30:16 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:28.042 23:30:16 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:28.042 23:30:16 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:28.042 23:30:16 -- common/autotest_common.sh@10 -- # set +x 00:09:28.042 ************************************ 00:09:28.042 START TEST sw_hotplug 00:09:28.042 ************************************ 00:09:28.042 23:30:16 sw_hotplug -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:28.042 * Looking for test storage... 
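sw_hotplug begins by enumerating NVMe functions in userspace: the scripts/common.sh trace below builds the PCI class filter 01 (mass storage) / 08 (NVM) / prog-if 02 (NVMe) and matches it against lspci output. The pipeline, lifted straight from the trace:

    # print the BDF of every NVMe PCIe function (class code 0108, prog-if 02)
    lspci -mm -n -D | grep -i -- -p02 | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'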
00:09:28.042 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:28.042 23:30:16 sw_hotplug -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:28.042 23:30:16 sw_hotplug -- common/autotest_common.sh@1681 -- # lcov --version 00:09:28.042 23:30:16 sw_hotplug -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:28.300 23:30:16 sw_hotplug -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:28.300 23:30:16 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:28.300 23:30:16 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:28.300 23:30:16 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:28.300 23:30:16 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:09:28.300 23:30:16 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:09:28.300 23:30:16 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:09:28.300 23:30:16 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:09:28.300 23:30:16 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:09:28.300 23:30:16 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:09:28.300 23:30:16 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:09:28.300 23:30:16 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:28.300 23:30:16 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:09:28.300 23:30:16 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:09:28.300 23:30:16 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:28.300 23:30:16 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:28.300 23:30:16 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:09:28.300 23:30:16 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:09:28.300 23:30:16 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:28.300 23:30:16 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:09:28.300 23:30:16 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:09:28.300 23:30:16 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:09:28.300 23:30:16 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:09:28.300 23:30:16 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:28.300 23:30:16 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:09:28.300 23:30:16 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:09:28.300 23:30:16 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:28.300 23:30:16 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:28.300 23:30:16 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:09:28.300 23:30:16 sw_hotplug -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:28.300 23:30:16 sw_hotplug -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:28.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.300 --rc genhtml_branch_coverage=1 00:09:28.300 --rc genhtml_function_coverage=1 00:09:28.300 --rc genhtml_legend=1 00:09:28.300 --rc geninfo_all_blocks=1 00:09:28.300 --rc geninfo_unexecuted_blocks=1 00:09:28.300 00:09:28.300 ' 00:09:28.300 23:30:16 sw_hotplug -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:28.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.300 --rc genhtml_branch_coverage=1 00:09:28.300 --rc genhtml_function_coverage=1 00:09:28.300 --rc genhtml_legend=1 00:09:28.300 --rc geninfo_all_blocks=1 00:09:28.300 --rc geninfo_unexecuted_blocks=1 00:09:28.300 00:09:28.300 ' 00:09:28.300 23:30:16 
sw_hotplug -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:28.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.300 --rc genhtml_branch_coverage=1 00:09:28.300 --rc genhtml_function_coverage=1 00:09:28.300 --rc genhtml_legend=1 00:09:28.300 --rc geninfo_all_blocks=1 00:09:28.300 --rc geninfo_unexecuted_blocks=1 00:09:28.300 00:09:28.300 ' 00:09:28.300 23:30:16 sw_hotplug -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:28.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.300 --rc genhtml_branch_coverage=1 00:09:28.300 --rc genhtml_function_coverage=1 00:09:28.300 --rc genhtml_legend=1 00:09:28.300 --rc geninfo_all_blocks=1 00:09:28.300 --rc geninfo_unexecuted_blocks=1 00:09:28.300 00:09:28.300 ' 00:09:28.300 23:30:16 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:28.558 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:28.558 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:28.558 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:28.558 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:28.558 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:28.558 23:30:16 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:09:28.558 23:30:16 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:09:28.558 23:30:16 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 00:09:28.558 23:30:16 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:09:28.558 23:30:16 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:09:28.558 23:30:16 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:09:28.558 23:30:16 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:09:28.558 23:30:16 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:09:28.558 23:30:16 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:09:28.558 23:30:16 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:09:28.558 23:30:16 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:09:28.558 23:30:16 sw_hotplug -- scripts/common.sh@233 -- # local class 00:09:28.558 23:30:16 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:09:28.558 23:30:16 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:09:28.558 23:30:16 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:09:28.558 23:30:16 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:09:28.558 23:30:16 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:09:28.558 23:30:16 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:09:28.558 23:30:16 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:09:28.558 23:30:16 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:09:28.558 23:30:16 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:09:28.558 23:30:16 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:09:28.558 23:30:16 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:09:28.558 23:30:16 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:09:28.558 23:30:16 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:09:28.558 23:30:16 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:09:28.817 23:30:16 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:28.817 
23:30:16 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:09:28.817 23:30:16 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:28.817 23:30:16 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:28.817 23:30:16 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:28.817 23:30:16 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:28.817 23:30:16 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:09:28.817 23:30:16 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:28.817 23:30:16 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:09:28.817 23:30:16 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:28.817 23:30:16 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:28.817 23:30:16 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:28.817 23:30:16 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:28.817 23:30:16 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:09:28.817 23:30:16 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:28.817 23:30:16 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:09:28.817 23:30:16 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:28.817 23:30:16 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:28.817 23:30:16 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:28.817 23:30:16 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:28.817 23:30:16 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:12.0 00:09:28.817 23:30:16 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:28.817 23:30:16 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:09:28.817 23:30:16 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:28.817 23:30:16 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:28.817 23:30:16 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:28.817 23:30:16 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:28.817 23:30:16 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:09:28.817 23:30:16 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:28.817 23:30:16 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:09:28.817 23:30:16 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:28.817 23:30:16 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:28.817 23:30:16 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:28.817 23:30:16 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:28.817 23:30:16 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:09:28.817 23:30:16 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:28.817 23:30:16 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:28.817 23:30:16 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:28.817 23:30:16 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:28.817 23:30:16 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:09:28.817 23:30:16 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:28.817 23:30:16 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:28.817 23:30:16 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:28.817 23:30:16 sw_hotplug -- scripts/common.sh@321 -- # for bdf 
in "${nvmes[@]}" 00:09:28.817 23:30:16 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:09:28.817 23:30:16 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:28.817 23:30:16 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:28.817 23:30:16 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:28.817 23:30:16 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:09:28.817 23:30:16 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:28.817 23:30:16 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:09:28.817 23:30:16 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:09:28.817 23:30:16 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:29.076 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:29.076 Waiting for block devices as requested 00:09:29.334 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:29.334 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:29.334 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:29.334 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:34.604 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:34.604 23:30:22 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:09:34.604 23:30:22 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:34.861 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:09:34.861 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:34.861 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:09:35.119 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:09:35.378 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:35.378 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:35.378 23:30:23 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:09:35.378 23:30:23 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:09:35.637 23:30:23 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:09:35.637 23:30:23 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:09:35.637 23:30:23 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=67185 00:09:35.637 23:30:23 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:09:35.637 23:30:23 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:09:35.637 23:30:23 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:09:35.637 23:30:23 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:09:35.637 23:30:23 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:09:35.637 23:30:23 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:09:35.637 23:30:23 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:09:35.637 23:30:23 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:09:35.637 23:30:23 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 false 00:09:35.637 23:30:23 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:09:35.637 23:30:23 sw_hotplug -- nvme/sw_hotplug.sh@28 
-- # local hotplug_wait=6 00:09:35.637 23:30:23 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:09:35.637 23:30:23 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:09:35.637 23:30:23 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:09:35.637 Initializing NVMe Controllers 00:09:35.637 Attaching to 0000:00:10.0 00:09:35.637 Attaching to 0000:00:11.0 00:09:35.637 Attached to 0000:00:10.0 00:09:35.637 Attached to 0000:00:11.0 00:09:35.637 Initialization complete. Starting I/O... 00:09:35.637 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:09:35.637 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:09:35.637 00:09:37.009 QEMU NVMe Ctrl (12340 ): 2664 I/Os completed (+2664) 00:09:37.009 QEMU NVMe Ctrl (12341 ): 2664 I/Os completed (+2664) 00:09:37.009 00:09:37.942 QEMU NVMe Ctrl (12340 ): 5889 I/Os completed (+3225) 00:09:37.942 QEMU NVMe Ctrl (12341 ): 5840 I/Os completed (+3176) 00:09:37.942 00:09:38.877 QEMU NVMe Ctrl (12340 ): 9263 I/Os completed (+3374) 00:09:38.877 QEMU NVMe Ctrl (12341 ): 9205 I/Os completed (+3365) 00:09:38.877 00:09:39.810 QEMU NVMe Ctrl (12340 ): 13049 I/Os completed (+3786) 00:09:39.810 QEMU NVMe Ctrl (12341 ): 12976 I/Os completed (+3771) 00:09:39.810 00:09:40.745 QEMU NVMe Ctrl (12340 ): 16822 I/Os completed (+3773) 00:09:40.745 QEMU NVMe Ctrl (12341 ): 16740 I/Os completed (+3764) 00:09:40.745 00:09:41.686 23:30:29 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:09:41.686 23:30:29 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:41.686 23:30:29 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:41.686 [2024-09-28 23:30:29.576426] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:09:41.686 Controller removed: QEMU NVMe Ctrl (12340 ) 00:09:41.686 [2024-09-28 23:30:29.577379] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:41.686 [2024-09-28 23:30:29.577424] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:41.686 [2024-09-28 23:30:29.577439] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:41.686 [2024-09-28 23:30:29.577454] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:41.686 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:09:41.686 [2024-09-28 23:30:29.579016] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:41.686 [2024-09-28 23:30:29.579060] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:41.686 [2024-09-28 23:30:29.579071] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:41.686 [2024-09-28 23:30:29.579083] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:41.686 23:30:29 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:41.686 23:30:29 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:41.686 [2024-09-28 23:30:29.598574] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:09:41.686 Controller removed: QEMU NVMe Ctrl (12341 ) 00:09:41.686 [2024-09-28 23:30:29.599527] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:41.686 [2024-09-28 23:30:29.599586] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:41.686 [2024-09-28 23:30:29.599617] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:41.686 [2024-09-28 23:30:29.599643] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:41.686 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:09:41.686 [2024-09-28 23:30:29.601004] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:41.686 [2024-09-28 23:30:29.601100] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:41.686 [2024-09-28 23:30:29.601117] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:41.686 [2024-09-28 23:30:29.601127] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:41.686 23:30:29 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:09:41.686 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:09:41.686 EAL: Scan for (pci) bus failed. 00:09:41.686 23:30:29 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:09:41.686 23:30:29 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:41.686 23:30:29 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:41.686 23:30:29 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:09:41.686 23:30:29 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:09:41.686 23:30:29 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:41.686 23:30:29 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:41.686 23:30:29 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:41.686 23:30:29 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:09:41.686 Attaching to 0000:00:10.0 00:09:41.686 Attached to 0000:00:10.0 00:09:41.686 QEMU NVMe Ctrl (12340 ): 40 I/Os completed (+40) 00:09:41.686 00:09:41.686 23:30:29 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:09:41.686 23:30:29 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:41.686 23:30:29 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:09:41.686 Attaching to 0000:00:11.0 00:09:41.686 Attached to 0000:00:11.0 00:09:42.624 QEMU NVMe Ctrl (12340 ): 3398 I/Os completed (+3358) 00:09:42.624 QEMU NVMe Ctrl (12341 ): 3187 I/Os completed (+3187) 00:09:42.624 00:09:43.998 QEMU NVMe Ctrl (12340 ): 6587 I/Os completed (+3189) 00:09:43.998 QEMU NVMe Ctrl (12341 ): 6532 I/Os completed (+3345) 00:09:43.998 00:09:44.931 QEMU NVMe Ctrl (12340 ): 9913 I/Os completed (+3326) 00:09:44.931 QEMU NVMe Ctrl (12341 ): 9869 I/Os completed (+3337) 00:09:44.931 00:09:45.866 QEMU NVMe Ctrl (12340 ): 13120 I/Os completed (+3207) 00:09:45.866 QEMU NVMe Ctrl (12341 ): 13123 I/Os completed (+3254) 00:09:45.866 00:09:46.804 QEMU NVMe Ctrl (12340 ): 15966 I/Os completed (+2846) 00:09:46.804 QEMU NVMe Ctrl (12341 ): 15973 I/Os completed (+2850) 00:09:46.804 00:09:47.747 QEMU NVMe Ctrl (12340 ): 18610 I/Os completed (+2644) 00:09:47.747 QEMU NVMe Ctrl (12341 ): 18623 I/Os completed (+2650) 00:09:47.747 00:09:48.689 QEMU NVMe Ctrl (12340 ): 21246 I/Os completed (+2636) 
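Each cycle above surprise-removes both controllers (the nvme_ctrlr_fail errors and aborted outstanding commands), then re-binds them to uio_pci_generic before I/O resumes. xtrace shows only the echoed values, not the redirect targets, so the following is an assumption based on the standard PCI sysfs interface rather than the script's literal redirections:

    bdf=0000:00:10.0
    echo 1 > /sys/bus/pci/devices/$bdf/remove                          # surprise-remove the function (assumed target)
    echo 1 > /sys/bus/pci/rescan                                       # let the bus rediscover it (assumed target)
    echo uio_pci_generic > /sys/bus/pci/devices/$bdf/driver_override   # pin the userspace driver (assumed target)
    echo "$bdf" > /sys/bus/pci/drivers_probe                           # re-probe with the override (assumed target)
    echo '' > /sys/bus/pci/devices/$bdf/driver_override                # clear the override again (assumed target)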
00:09:48.689 QEMU NVMe Ctrl (12341 ): 21259 I/Os completed (+2636) 00:09:48.689 00:09:49.672 QEMU NVMe Ctrl (12340 ): 23818 I/Os completed (+2572) 00:09:49.672 QEMU NVMe Ctrl (12341 ): 23834 I/Os completed (+2575) 00:09:49.672 00:09:50.605 QEMU NVMe Ctrl (12340 ): 27022 I/Os completed (+3204) 00:09:50.605 QEMU NVMe Ctrl (12341 ): 27046 I/Os completed (+3212) 00:09:50.605 00:09:51.978 QEMU NVMe Ctrl (12340 ): 30202 I/Os completed (+3180) 00:09:51.978 QEMU NVMe Ctrl (12341 ): 30229 I/Os completed (+3183) 00:09:51.978 00:09:52.910 QEMU NVMe Ctrl (12340 ): 33379 I/Os completed (+3177) 00:09:52.910 QEMU NVMe Ctrl (12341 ): 33495 I/Os completed (+3266) 00:09:52.910 00:09:53.843 QEMU NVMe Ctrl (12340 ): 37094 I/Os completed (+3715) 00:09:53.843 QEMU NVMe Ctrl (12341 ): 37192 I/Os completed (+3697) 00:09:53.843 00:09:53.843 23:30:41 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:09:53.843 23:30:41 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:09:53.843 23:30:41 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:53.843 23:30:41 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:53.843 [2024-09-28 23:30:41.829649] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:09:53.843 Controller removed: QEMU NVMe Ctrl (12340 ) 00:09:53.843 [2024-09-28 23:30:41.830682] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:53.843 [2024-09-28 23:30:41.830792] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:53.843 [2024-09-28 23:30:41.830858] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:53.843 [2024-09-28 23:30:41.830888] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:53.843 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:09:53.843 [2024-09-28 23:30:41.832473] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:53.843 [2024-09-28 23:30:41.832633] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:53.843 [2024-09-28 23:30:41.832696] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:53.843 [2024-09-28 23:30:41.832722] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:53.843 23:30:41 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:53.843 23:30:41 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:53.843 [2024-09-28 23:30:41.845319] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:09:53.843 Controller removed: QEMU NVMe Ctrl (12341 ) 00:09:53.843 [2024-09-28 23:30:41.846226] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:53.843 [2024-09-28 23:30:41.846286] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:53.843 [2024-09-28 23:30:41.846316] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:53.843 [2024-09-28 23:30:41.846400] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:53.843 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:09:53.843 [2024-09-28 23:30:41.847848] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:53.843 [2024-09-28 23:30:41.847931] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:53.843 [2024-09-28 23:30:41.847958] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:53.843 [2024-09-28 23:30:41.848003] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:53.843 23:30:41 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:09:53.843 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:09:53.843 EAL: Scan for (pci) bus failed. 00:09:53.843 23:30:41 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:09:53.843 23:30:41 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:53.843 23:30:41 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:53.843 23:30:41 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:09:53.843 23:30:41 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:09:53.843 23:30:42 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:53.843 23:30:42 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:53.843 23:30:42 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:53.843 23:30:42 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:09:53.843 Attaching to 0000:00:10.0 00:09:53.843 Attached to 0000:00:10.0 00:09:54.101 23:30:42 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:09:54.101 23:30:42 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:54.101 23:30:42 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:09:54.101 Attaching to 0000:00:11.0 00:09:54.101 Attached to 0000:00:11.0 00:09:54.665 QEMU NVMe Ctrl (12340 ): 2868 I/Os completed (+2868) 00:09:54.665 QEMU NVMe Ctrl (12341 ): 2578 I/Os completed (+2578) 00:09:54.665 00:09:56.037 QEMU NVMe Ctrl (12340 ): 6552 I/Os completed (+3684) 00:09:56.037 QEMU NVMe Ctrl (12341 ): 6251 I/Os completed (+3673) 00:09:56.037 00:09:56.603 QEMU NVMe Ctrl (12340 ): 10295 I/Os completed (+3743) 00:09:56.603 QEMU NVMe Ctrl (12341 ): 9980 I/Os completed (+3729) 00:09:56.603 00:09:57.977 QEMU NVMe Ctrl (12340 ): 14000 I/Os completed (+3705) 00:09:57.977 QEMU NVMe Ctrl (12341 ): 13674 I/Os completed (+3694) 00:09:57.977 00:09:58.912 QEMU NVMe Ctrl (12340 ): 17753 I/Os completed (+3753) 00:09:58.912 QEMU NVMe Ctrl (12341 ): 17466 I/Os completed (+3792) 00:09:58.912 00:09:59.846 QEMU NVMe Ctrl (12340 ): 21046 I/Os completed (+3293) 00:09:59.846 QEMU NVMe Ctrl (12341 ): 20708 I/Os completed (+3242) 00:09:59.846 00:10:00.781 QEMU NVMe Ctrl (12340 ): 24626 I/Os completed (+3580) 00:10:00.781 QEMU NVMe Ctrl (12341 ): 24233 I/Os completed (+3525) 00:10:00.781 
00:10:01.742 QEMU NVMe Ctrl (12340 ): 28006 I/Os completed (+3380) 00:10:01.742 QEMU NVMe Ctrl (12341 ): 27621 I/Os completed (+3388) 00:10:01.742 00:10:02.676 QEMU NVMe Ctrl (12340 ): 31673 I/Os completed (+3667) 00:10:02.676 QEMU NVMe Ctrl (12341 ): 31279 I/Os completed (+3658) 00:10:02.676 00:10:03.610 QEMU NVMe Ctrl (12340 ): 34882 I/Os completed (+3209) 00:10:03.610 QEMU NVMe Ctrl (12341 ): 34495 I/Os completed (+3216) 00:10:03.610 00:10:04.984 QEMU NVMe Ctrl (12340 ): 38542 I/Os completed (+3660) 00:10:04.985 QEMU NVMe Ctrl (12341 ): 38152 I/Os completed (+3657) 00:10:04.985 00:10:05.918 QEMU NVMe Ctrl (12340 ): 42219 I/Os completed (+3677) 00:10:05.918 QEMU NVMe Ctrl (12341 ): 41801 I/Os completed (+3649) 00:10:05.918 00:10:05.918 23:30:54 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:05.918 23:30:54 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:05.918 23:30:54 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:05.918 23:30:54 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:05.918 [2024-09-28 23:30:54.080243] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:10:05.918 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:05.918 [2024-09-28 23:30:54.081362] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:05.918 [2024-09-28 23:30:54.081411] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:05.918 [2024-09-28 23:30:54.081427] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:05.918 [2024-09-28 23:30:54.081444] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:05.918 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:05.918 [2024-09-28 23:30:54.083223] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:05.918 [2024-09-28 23:30:54.083264] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:05.918 [2024-09-28 23:30:54.083276] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:05.918 [2024-09-28 23:30:54.083288] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:06.176 23:30:54 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:06.176 23:30:54 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:06.176 [2024-09-28 23:30:54.101089] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:10:06.176 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:06.176 [2024-09-28 23:30:54.101977] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:06.176 [2024-09-28 23:30:54.102362] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:06.176 [2024-09-28 23:30:54.102384] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:06.176 [2024-09-28 23:30:54.102398] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:06.176 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:06.176 [2024-09-28 23:30:54.103734] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:06.176 [2024-09-28 23:30:54.103762] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:06.176 [2024-09-28 23:30:54.103775] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:06.176 [2024-09-28 23:30:54.103787] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:06.176 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:10:06.176 EAL: Scan for (pci) bus failed. 00:10:06.176 23:30:54 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:06.176 23:30:54 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:06.176 23:30:54 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:06.176 23:30:54 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:06.176 23:30:54 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:06.176 23:30:54 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:06.176 23:30:54 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:06.176 23:30:54 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:06.176 23:30:54 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:06.176 23:30:54 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:06.176 Attaching to 0000:00:10.0 00:10:06.176 Attached to 0000:00:10.0 00:10:06.176 23:30:54 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:06.176 23:30:54 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:06.176 23:30:54 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:06.176 Attaching to 0000:00:11.0 00:10:06.176 Attached to 0000:00:11.0 00:10:06.176 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:06.176 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:06.434 [2024-09-28 23:30:54.345220] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:10:18.660 23:31:06 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:18.660 23:31:06 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:18.660 23:31:06 sw_hotplug -- common/autotest_common.sh@717 -- # time=42.76 00:10:18.660 23:31:06 sw_hotplug -- common/autotest_common.sh@718 -- # echo 42.76 00:10:18.660 23:31:06 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:10:18.660 23:31:06 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=42.76 00:10:18.660 23:31:06 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 42.76 2 00:10:18.660 remove_attach_helper took 42.76s to complete (handling 2 nvme drive(s)) 23:31:06 sw_hotplug -- 
nvme/sw_hotplug.sh@91 -- # sleep 6 00:10:25.241 23:31:12 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 67185 00:10:25.241 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (67185) - No such process 00:10:25.241 23:31:12 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 67185 00:10:25.241 23:31:12 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:10:25.241 23:31:12 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:10:25.241 23:31:12 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:10:25.241 23:31:12 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=67723 00:10:25.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:25.241 23:31:12 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:10:25.241 23:31:12 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 67723 00:10:25.241 23:31:12 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:25.241 23:31:12 sw_hotplug -- common/autotest_common.sh@831 -- # '[' -z 67723 ']' 00:10:25.241 23:31:12 sw_hotplug -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:25.241 23:31:12 sw_hotplug -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:25.241 23:31:12 sw_hotplug -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:25.241 23:31:12 sw_hotplug -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:25.241 23:31:12 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:25.241 [2024-09-28 23:31:12.440106] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
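The tgt_run_hotplug phase above launches spdk_tgt in the background, records its pid (67723 in this run), installs a trap that kills the target and re-enables PCI rescan if the test dies, and then blocks in waitforlisten until the RPC socket answers. A minimal sketch of the same pattern, assuming the default /var/tmp/spdk.sock socket:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
    spdk_tgt_pid=$!
    trap 'killprocess $spdk_tgt_pid; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT
    # waitforlisten, roughly: poll the RPC socket until the app responds
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        sleep 0.1
    done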
00:10:25.241 [2024-09-28 23:31:12.440258] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67723 ] 00:10:25.241 [2024-09-28 23:31:12.592896] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:25.241 [2024-09-28 23:31:12.812831] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:25.500 23:31:13 sw_hotplug -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:25.500 23:31:13 sw_hotplug -- common/autotest_common.sh@864 -- # return 0 00:10:25.500 23:31:13 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:10:25.500 23:31:13 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.500 23:31:13 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:25.500 23:31:13 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.500 23:31:13 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:10:25.500 23:31:13 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:10:25.500 23:31:13 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:10:25.500 23:31:13 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:10:25.500 23:31:13 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:10:25.500 23:31:13 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:10:25.500 23:31:13 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:10:25.500 23:31:13 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:10:25.500 23:31:13 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:10:25.500 23:31:13 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:10:25.500 23:31:13 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:10:25.500 23:31:13 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:10:25.500 23:31:13 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:10:32.070 23:31:19 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:32.070 23:31:19 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:32.070 23:31:19 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:32.070 23:31:19 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:32.070 23:31:19 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:32.070 23:31:19 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:32.070 23:31:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:32.070 23:31:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:32.070 23:31:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:32.070 23:31:19 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.070 23:31:19 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:32.070 23:31:19 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:32.070 23:31:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:32.070 23:31:19 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.070 23:31:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:10:32.070 23:31:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:32.070 [2024-09-28 23:31:19.549845] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: 
[0000:00:10.0] in failed state. 00:10:32.070 [2024-09-28 23:31:19.551068] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:32.070 [2024-09-28 23:31:19.551105] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:32.070 [2024-09-28 23:31:19.551116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:32.070 [2024-09-28 23:31:19.551134] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:32.070 [2024-09-28 23:31:19.551141] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:32.070 [2024-09-28 23:31:19.551150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:32.070 [2024-09-28 23:31:19.551157] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:32.070 [2024-09-28 23:31:19.551165] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:32.070 [2024-09-28 23:31:19.551171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:32.070 [2024-09-28 23:31:19.551182] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:32.070 [2024-09-28 23:31:19.551189] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:32.070 [2024-09-28 23:31:19.551197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:32.070 [2024-09-28 23:31:19.949864] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
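Two RPCs drive this phase: bdev_nvme_set_hotplug -e (sw_hotplug.sh@115, above) switches on the target's own hotplug monitor so removals are detected without the script's help, and the bdev_bdfs helper (sw_hotplug.sh@12-13) reports which NVMe PCI addresses the target still exposes. The jq filter below is copied verbatim from the xtrace; the rpc.py invocations are the usual command-line equivalent of rpc_cmd:

    scripts/rpc.py bdev_nvme_set_hotplug -e      # enable the hotplug monitor
    scripts/rpc.py bdev_get_bdevs \
        | jq -r '.[].driver_specific.nvme[].pci_address' \
        | sort -u                                # one BDF per NVMe bdev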
00:10:32.070 [2024-09-28 23:31:19.951052] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:32.070 [2024-09-28 23:31:19.951083] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:32.070 [2024-09-28 23:31:19.951093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:32.070 [2024-09-28 23:31:19.951106] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:32.070 [2024-09-28 23:31:19.951115] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:32.070 [2024-09-28 23:31:19.951122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:32.070 [2024-09-28 23:31:19.951131] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:32.070 [2024-09-28 23:31:19.951137] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:32.070 [2024-09-28 23:31:19.951145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:32.070 [2024-09-28 23:31:19.951152] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:32.070 [2024-09-28 23:31:19.951160] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:32.070 [2024-09-28 23:31:19.951166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:32.070 23:31:20 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:10:32.070 23:31:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:32.070 23:31:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:32.070 23:31:20 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:32.070 23:31:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:32.070 23:31:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:32.070 23:31:20 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.070 23:31:20 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:32.070 23:31:20 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.070 23:31:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:32.070 23:31:20 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:32.070 23:31:20 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:32.070 23:31:20 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:32.070 23:31:20 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:32.329 23:31:20 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:32.329 23:31:20 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:32.329 23:31:20 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:32.329 23:31:20 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:32.329 23:31:20 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:10:32.329 23:31:20 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:32.329 23:31:20 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:32.329 23:31:20 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:44.542 23:31:32 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:10:44.542 23:31:32 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:10:44.542 23:31:32 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:10:44.542 23:31:32 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:44.542 23:31:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:44.542 23:31:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:44.542 23:31:32 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.542 23:31:32 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:44.542 23:31:32 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.542 23:31:32 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:44.542 23:31:32 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:44.542 23:31:32 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:44.542 23:31:32 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:44.542 23:31:32 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:44.542 23:31:32 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:44.542 23:31:32 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:44.542 23:31:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:44.542 23:31:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:44.542 23:31:32 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:44.542 23:31:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:44.542 23:31:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:44.542 23:31:32 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.542 23:31:32 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:44.542 23:31:32 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.542 23:31:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:10:44.542 23:31:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:44.542 [2024-09-28 23:31:32.450053] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
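The echo sequence at sw_hotplug.sh@56-62 above is the replug half of the event, and the [[ ... == ... ]] at sw_hotplug.sh@71 is the assertion that both controllers reappeared. The xtrace shows only the values being echoed, not the sysfs files they land in, so the following is a plausible reconstruction rather than the script's literal code:

    echo 1 > /sys/bus/pci/rescan                  # sw_hotplug.sh@56, presumably
    for bdf in 0000:00:10.0 0000:00:11.0; do
        echo uio_pci_generic > "/sys/bus/pci/devices/$bdf/driver_override"
        echo "$bdf" > /sys/bus/pci/drivers_probe  # (re)bind the device
        echo '' > "/sys/bus/pci/devices/$bdf/driver_override"
    done
    sleep 12                                      # sw_hotplug.sh@66
    bdfs=($(bdev_bdfs))
    [[ ${bdfs[*]} == "0000:00:10.0 0000:00:11.0" ]]   # sw_hotplug.sh@71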
00:10:44.542 [2024-09-28 23:31:32.451337] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:44.542 [2024-09-28 23:31:32.451440] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:44.542 [2024-09-28 23:31:32.451502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:44.542 [2024-09-28 23:31:32.451781] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:44.542 [2024-09-28 23:31:32.451815] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:44.542 [2024-09-28 23:31:32.451890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:44.542 [2024-09-28 23:31:32.451918] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:44.542 [2024-09-28 23:31:32.451968] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:44.542 [2024-09-28 23:31:32.452022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:44.542 [2024-09-28 23:31:32.452051] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:44.542 [2024-09-28 23:31:32.452165] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:44.542 [2024-09-28 23:31:32.452252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:44.801 [2024-09-28 23:31:32.850060] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
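Between unplug and replug the script sits in a half-second polling loop, re-running bdev_bdfs until the target has dropped every removed bdev; the 'Still waiting for %s to be gone' lines nearby are that loop reporting progress. Approximately:

    # sw_hotplug.sh@50-51: wait for the unplugged BDFs to disappear from
    # the target's bdev list.
    while bdfs=($(bdev_bdfs)) && ((${#bdfs[@]} > 0)); do
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
        sleep 0.5
    done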
00:10:44.801 [2024-09-28 23:31:32.851304] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:44.801 [2024-09-28 23:31:32.851402] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:44.801 [2024-09-28 23:31:32.851464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:44.801 [2024-09-28 23:31:32.851493] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:44.801 [2024-09-28 23:31:32.851556] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:44.801 [2024-09-28 23:31:32.851584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:44.801 [2024-09-28 23:31:32.851632] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:44.801 [2024-09-28 23:31:32.851651] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:44.801 [2024-09-28 23:31:32.851705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:44.801 [2024-09-28 23:31:32.851729] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:44.801 [2024-09-28 23:31:32.851770] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:44.801 [2024-09-28 23:31:32.851817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:44.801 23:31:32 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:10:44.801 23:31:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:44.801 23:31:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:44.801 23:31:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:44.801 23:31:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:44.801 23:31:32 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.801 23:31:32 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:44.801 23:31:32 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:44.801 23:31:32 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.059 23:31:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:45.059 23:31:32 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:45.059 23:31:33 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:45.059 23:31:33 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:45.059 23:31:33 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:45.059 23:31:33 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:45.059 23:31:33 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:45.059 23:31:33 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:45.059 23:31:33 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:45.059 23:31:33 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:10:45.059 23:31:33 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:45.059 23:31:33 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:45.059 23:31:33 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:57.279 23:31:45 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:10:57.279 23:31:45 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:10:57.279 23:31:45 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:10:57.279 23:31:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:57.279 23:31:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:57.279 23:31:45 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:57.279 23:31:45 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.279 23:31:45 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:57.279 23:31:45 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.279 23:31:45 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:57.279 23:31:45 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:57.279 23:31:45 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:57.279 23:31:45 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:57.279 [2024-09-28 23:31:45.250278] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:10:57.279 [2024-09-28 23:31:45.251713] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:57.279 [2024-09-28 23:31:45.251831] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:57.279 [2024-09-28 23:31:45.251869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:57.279 [2024-09-28 23:31:45.251905] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:57.279 [2024-09-28 23:31:45.251923] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:57.279 [2024-09-28 23:31:45.251954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:57.279 [2024-09-28 23:31:45.251978] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:57.279 [2024-09-28 23:31:45.251996] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:57.279 [2024-09-28 23:31:45.252019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:57.279 [2024-09-28 23:31:45.252043] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:57.279 [2024-09-28 23:31:45.252059] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:57.279 [2024-09-28 23:31:45.252083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:57.279 23:31:45 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:57.279 23:31:45 
sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:57.279 23:31:45 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:57.279 23:31:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:57.279 23:31:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:57.279 23:31:45 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:57.279 23:31:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:57.279 23:31:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:57.279 23:31:45 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.279 23:31:45 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:57.279 23:31:45 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.279 23:31:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:10:57.279 23:31:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:57.856 [2024-09-28 23:31:45.750279] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 00:10:57.856 [2024-09-28 23:31:45.751435] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:57.856 [2024-09-28 23:31:45.751466] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:57.856 [2024-09-28 23:31:45.751478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:57.856 [2024-09-28 23:31:45.751492] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:57.856 [2024-09-28 23:31:45.751500] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:57.856 [2024-09-28 23:31:45.751520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:57.856 [2024-09-28 23:31:45.751530] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:57.856 [2024-09-28 23:31:45.751537] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:57.856 [2024-09-28 23:31:45.751547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:57.856 [2024-09-28 23:31:45.751554] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:57.856 [2024-09-28 23:31:45.751562] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:57.856 [2024-09-28 23:31:45.751568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:57.856 23:31:45 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:10:57.856 23:31:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:57.856 23:31:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:57.856 23:31:45 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:57.856 23:31:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:57.856 23:31:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd 
bdev_get_bdevs 00:10:57.856 23:31:45 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.856 23:31:45 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:57.856 23:31:45 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.856 23:31:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:57.856 23:31:45 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:57.856 23:31:45 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:57.856 23:31:45 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:57.856 23:31:45 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:57.856 23:31:46 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:58.116 23:31:46 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:58.116 23:31:46 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:58.116 23:31:46 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:58.116 23:31:46 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:58.116 23:31:46 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:58.116 23:31:46 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:58.116 23:31:46 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:10.350 23:31:58 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:10.350 23:31:58 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:10.350 23:31:58 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:10.350 23:31:58 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:10.350 23:31:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:10.350 23:31:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:10.350 23:31:58 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.350 23:31:58 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:10.350 23:31:58 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.350 23:31:58 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:10.350 23:31:58 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:10.350 23:31:58 sw_hotplug -- common/autotest_common.sh@717 -- # time=44.69 00:11:10.350 23:31:58 sw_hotplug -- common/autotest_common.sh@718 -- # echo 44.69 00:11:10.350 23:31:58 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:11:10.351 23:31:58 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=44.69 00:11:10.351 23:31:58 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 44.69 2 00:11:10.351 remove_attach_helper took 44.69s to complete (handling 2 nvme drive(s)) 23:31:58 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:11:10.351 23:31:58 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.351 23:31:58 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:10.351 23:31:58 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.351 23:31:58 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:11:10.351 23:31:58 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.351 23:31:58 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:10.351 23:31:58 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.351 23:31:58 sw_hotplug -- 
nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:11:10.351 23:31:58 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:11:10.351 23:31:58 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:11:10.351 23:31:58 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:11:10.351 23:31:58 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:11:10.351 23:31:58 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:11:10.351 23:31:58 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:11:10.351 23:31:58 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:11:10.351 23:31:58 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:11:10.351 23:31:58 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:11:10.351 23:31:58 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:11:10.351 23:31:58 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:11:10.351 23:31:58 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:16.940 23:32:04 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:16.940 23:32:04 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:16.940 23:32:04 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:16.940 23:32:04 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:16.940 23:32:04 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:16.940 23:32:04 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:16.940 23:32:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:16.940 23:32:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:16.940 23:32:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:16.940 23:32:04 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:16.940 23:32:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:16.940 23:32:04 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.940 23:32:04 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:16.940 23:32:04 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.940 23:32:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:16.940 23:32:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:16.940 [2024-09-28 23:32:04.270034] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
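Each three-event pass is timed by timing_cmd (autotest_common.sh@707-720, above): it sets bash's TIMEFORMAT to %2R so the time keyword emits only the elapsed wall-clock seconds, runs remove_attach_helper 3 6 true, and hands the result back as helper_time (42.76, 44.69, and 44.58 in this run). The mechanism in miniature, with the helper's own output discarded for clarity:

    TIMEFORMAT=%2R
    helper_time=$({ time remove_attach_helper 3 6 true >/dev/null 2>&1; } 2>&1)
    printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))\n' \
        "$helper_time" 2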
00:11:16.940 [2024-09-28 23:32:04.271028] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:16.940 [2024-09-28 23:32:04.271061] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:16.940 [2024-09-28 23:32:04.271072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:16.940 [2024-09-28 23:32:04.271089] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:16.940 [2024-09-28 23:32:04.271097] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:16.940 [2024-09-28 23:32:04.271105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:16.940 [2024-09-28 23:32:04.271112] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:16.940 [2024-09-28 23:32:04.271120] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:16.940 [2024-09-28 23:32:04.271127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:16.940 [2024-09-28 23:32:04.271138] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:16.940 [2024-09-28 23:32:04.271145] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:16.940 [2024-09-28 23:32:04.271154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:16.940 [2024-09-28 23:32:04.670032] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
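The blocks of ABORTED - BY REQUEST (00/07) completions above and below follow a fixed pattern: each controller keeps four admin ASYNC EVENT REQUESTs outstanding (cid 187-190 here), and every surprise removal aborts all four, so each unplug event should contribute eight such records across the two controllers. That invariant is easy to spot-check against a saved copy of this log (the filename is illustrative):

    # 4 outstanding AERs x 2 controllers per hotplug event
    grep -c 'ABORTED - BY REQUEST' nvme-vg-autotest.log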
00:11:16.940 [2024-09-28 23:32:04.670903] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:16.940 [2024-09-28 23:32:04.670931] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:16.940 [2024-09-28 23:32:04.670943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:16.940 [2024-09-28 23:32:04.670953] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:16.940 [2024-09-28 23:32:04.670961] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:16.940 [2024-09-28 23:32:04.670968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:16.940 [2024-09-28 23:32:04.670976] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:16.940 [2024-09-28 23:32:04.670983] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:16.940 [2024-09-28 23:32:04.670991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:16.940 [2024-09-28 23:32:04.670998] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:16.940 [2024-09-28 23:32:04.671006] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:16.940 [2024-09-28 23:32:04.671012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:16.940 23:32:04 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:16.940 23:32:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:16.940 23:32:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:16.940 23:32:04 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:16.940 23:32:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:16.940 23:32:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:16.940 23:32:04 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.940 23:32:04 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:16.940 23:32:04 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.940 23:32:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:16.940 23:32:04 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:16.940 23:32:04 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:16.940 23:32:04 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:16.940 23:32:04 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:16.940 23:32:04 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:16.940 23:32:04 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:16.940 23:32:04 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:16.940 23:32:04 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:16.940 23:32:04 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:16.940 23:32:05 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:16.940 23:32:05 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:16.940 23:32:05 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:29.158 23:32:17 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:29.158 23:32:17 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:29.158 23:32:17 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:29.158 23:32:17 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:29.158 23:32:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:29.158 23:32:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:29.158 23:32:17 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.158 23:32:17 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:29.158 23:32:17 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.158 23:32:17 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:29.158 23:32:17 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:29.158 23:32:17 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:29.158 23:32:17 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:29.158 [2024-09-28 23:32:17.070245] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:11:29.158 [2024-09-28 23:32:17.071383] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:29.158 [2024-09-28 23:32:17.071497] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:29.158 [2024-09-28 23:32:17.071576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:29.158 [2024-09-28 23:32:17.071639] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:29.158 [2024-09-28 23:32:17.071658] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:29.158 [2024-09-28 23:32:17.071714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:29.158 [2024-09-28 23:32:17.071853] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:29.158 [2024-09-28 23:32:17.071875] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:29.158 [2024-09-28 23:32:17.071922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:29.158 [2024-09-28 23:32:17.071974] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:29.158 [2024-09-28 23:32:17.071993] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:29.158 [2024-09-28 23:32:17.072057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:29.158 23:32:17 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:29.158 23:32:17 
sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:29.158 23:32:17 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:29.158 23:32:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:29.158 23:32:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:29.158 23:32:17 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:29.158 23:32:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:29.158 23:32:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:29.158 23:32:17 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.158 23:32:17 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:29.158 23:32:17 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.158 23:32:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:29.158 23:32:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:29.419 [2024-09-28 23:32:17.470242] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 00:11:29.419 [2024-09-28 23:32:17.471186] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:29.419 [2024-09-28 23:32:17.471214] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:29.419 [2024-09-28 23:32:17.471225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:29.419 [2024-09-28 23:32:17.471237] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:29.419 [2024-09-28 23:32:17.471247] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:29.419 [2024-09-28 23:32:17.471254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:29.419 [2024-09-28 23:32:17.471263] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:29.419 [2024-09-28 23:32:17.471270] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:29.419 [2024-09-28 23:32:17.471278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:29.419 [2024-09-28 23:32:17.471285] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:29.419 [2024-09-28 23:32:17.471293] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:29.419 [2024-09-28 23:32:17.471299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:29.679 23:32:17 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:29.679 23:32:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:29.679 23:32:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:29.679 23:32:17 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:29.680 23:32:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:29.680 23:32:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' 
/dev/fd/63 00:11:29.680 23:32:17 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.680 23:32:17 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:29.680 23:32:17 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.680 23:32:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:29.680 23:32:17 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:29.680 23:32:17 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:29.680 23:32:17 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:29.680 23:32:17 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:29.680 23:32:17 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:29.680 23:32:17 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:29.680 23:32:17 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:29.680 23:32:17 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:29.680 23:32:17 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:29.680 23:32:17 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:29.940 23:32:17 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:29.940 23:32:17 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:42.164 23:32:29 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:42.164 23:32:29 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:42.164 23:32:29 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:42.164 23:32:29 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:42.164 23:32:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:42.164 23:32:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:42.164 23:32:29 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.164 23:32:29 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:42.164 23:32:29 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.164 23:32:29 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:42.164 23:32:29 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:42.164 23:32:29 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:42.164 23:32:29 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:42.164 23:32:29 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:42.164 23:32:29 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:42.164 23:32:29 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:42.164 23:32:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:42.164 23:32:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:42.164 23:32:29 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:42.164 23:32:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:42.164 23:32:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:42.164 23:32:29 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.164 23:32:29 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:42.164 23:32:29 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.164 23:32:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:42.164 23:32:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:42.164 [2024-09-28 23:32:29.970477] 
nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:11:42.164 [2024-09-28 23:32:29.971457] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:42.164 [2024-09-28 23:32:29.971494] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:42.164 [2024-09-28 23:32:29.971505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.164 [2024-09-28 23:32:29.971531] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:42.164 [2024-09-28 23:32:29.971539] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:42.164 [2024-09-28 23:32:29.971548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.164 [2024-09-28 23:32:29.971555] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:42.164 [2024-09-28 23:32:29.971565] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:42.164 [2024-09-28 23:32:29.971571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.164 [2024-09-28 23:32:29.971579] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:42.164 [2024-09-28 23:32:29.971585] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:42.164 [2024-09-28 23:32:29.971593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.424 23:32:30 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:42.424 23:32:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:42.424 23:32:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:42.424 23:32:30 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:42.424 23:32:30 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:42.424 23:32:30 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:42.424 23:32:30 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.424 23:32:30 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:42.424 [2024-09-28 23:32:30.470477] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
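Once the last pass clears its time check, the harness tears the target down with killprocess 67723 (visible a little further below). A hedged reconstruction of that helper from the autotest_common.sh line numbers in the xtrace, omitting the non-Linux branch:

    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1
        kill -0 "$pid" 2>/dev/null || return 1           # still running?
        local name
        name=$(ps --no-headers -o comm= "$pid")          # reactor_0 in this run
        [[ $name != sudo ]] || return 1                  # never kill a sudo shell
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"
    }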
00:11:42.424 [2024-09-28 23:32:30.471371] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:42.424 [2024-09-28 23:32:30.471403] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:42.424 [2024-09-28 23:32:30.471414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.424 [2024-09-28 23:32:30.471427] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:42.424 [2024-09-28 23:32:30.471436] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:42.424 [2024-09-28 23:32:30.471443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.424 [2024-09-28 23:32:30.471452] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:42.424 [2024-09-28 23:32:30.471459] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:42.424 [2024-09-28 23:32:30.471466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.424 [2024-09-28 23:32:30.471473] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:42.424 [2024-09-28 23:32:30.471482] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:42.424 [2024-09-28 23:32:30.471488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.424 23:32:30 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.424 23:32:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:42.424 23:32:30 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:42.424 23:32:30 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:42.424 23:32:30 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:42.424 23:32:30 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:42.683 23:32:30 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:42.683 23:32:30 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:42.683 23:32:30 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:42.683 23:32:30 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:42.683 23:32:30 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:42.683 23:32:30 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:42.683 23:32:30 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:42.683 23:32:30 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:54.903 23:32:42 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:54.903 23:32:42 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:54.903 23:32:42 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:54.903 23:32:42 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:54.903 23:32:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:54.903 23:32:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd 
bdev_get_bdevs 00:11:54.903 23:32:42 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.903 23:32:42 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:54.903 23:32:42 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.903 23:32:42 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:54.903 23:32:42 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:54.903 23:32:42 sw_hotplug -- common/autotest_common.sh@717 -- # time=44.58 00:11:54.903 23:32:42 sw_hotplug -- common/autotest_common.sh@718 -- # echo 44.58 00:11:54.903 23:32:42 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:11:54.903 23:32:42 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=44.58 00:11:54.903 23:32:42 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 44.58 2 00:11:54.903 remove_attach_helper took 44.58s to complete (handling 2 nvme drive(s)) 23:32:42 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:11:54.903 23:32:42 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 67723 00:11:54.903 23:32:42 sw_hotplug -- common/autotest_common.sh@950 -- # '[' -z 67723 ']' 00:11:54.903 23:32:42 sw_hotplug -- common/autotest_common.sh@954 -- # kill -0 67723 00:11:54.903 23:32:42 sw_hotplug -- common/autotest_common.sh@955 -- # uname 00:11:54.903 23:32:42 sw_hotplug -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:54.903 23:32:42 sw_hotplug -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67723 00:11:54.903 23:32:42 sw_hotplug -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:54.903 23:32:42 sw_hotplug -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:54.903 23:32:42 sw_hotplug -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67723' 00:11:54.903 killing process with pid 67723 00:11:54.903 23:32:42 sw_hotplug -- common/autotest_common.sh@969 -- # kill 67723 00:11:54.903 23:32:42 sw_hotplug -- common/autotest_common.sh@974 -- # wait 67723 00:11:56.290 23:32:44 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:56.290 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:56.862 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:56.862 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:56.862 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:11:56.862 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:11:56.862 00:11:56.862 real 2m28.907s 00:11:56.862 user 1m50.540s 00:11:56.862 sys 0m16.855s 00:11:56.862 23:32:44 sw_hotplug -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:56.862 23:32:44 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:56.862 ************************************ 00:11:56.862 END TEST sw_hotplug 00:11:56.862 ************************************ 00:11:57.123 23:32:45 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:11:57.123 23:32:45 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:11:57.123 23:32:45 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:57.123 23:32:45 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:57.123 23:32:45 -- common/autotest_common.sh@10 -- # set +x 00:11:57.123 ************************************ 00:11:57.123 
START TEST nvme_xnvme 00:11:57.123 ************************************ 00:11:57.123 23:32:45 nvme_xnvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:11:57.123 * Looking for test storage... 00:11:57.123 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:11:57.123 23:32:45 nvme_xnvme -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:57.123 23:32:45 nvme_xnvme -- common/autotest_common.sh@1681 -- # lcov --version 00:11:57.123 23:32:45 nvme_xnvme -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:57.123 23:32:45 nvme_xnvme -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:57.123 23:32:45 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:57.123 23:32:45 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:57.123 23:32:45 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:57.123 23:32:45 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:11:57.123 23:32:45 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:11:57.123 23:32:45 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:11:57.123 23:32:45 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:11:57.123 23:32:45 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:11:57.123 23:32:45 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:11:57.123 23:32:45 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:11:57.123 23:32:45 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:57.123 23:32:45 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:11:57.123 23:32:45 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:11:57.123 23:32:45 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:57.123 23:32:45 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:57.123 23:32:45 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:11:57.123 23:32:45 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:11:57.123 23:32:45 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:57.123 23:32:45 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:11:57.123 23:32:45 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:11:57.123 23:32:45 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:11:57.123 23:32:45 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:11:57.123 23:32:45 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:57.123 23:32:45 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:11:57.123 23:32:45 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:11:57.123 23:32:45 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:57.123 23:32:45 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:57.123 23:32:45 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:11:57.123 23:32:45 nvme_xnvme -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:57.123 23:32:45 nvme_xnvme -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:57.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:57.123 --rc genhtml_branch_coverage=1 00:11:57.123 --rc genhtml_function_coverage=1 00:11:57.123 --rc genhtml_legend=1 00:11:57.123 --rc geninfo_all_blocks=1 00:11:57.123 --rc geninfo_unexecuted_blocks=1 00:11:57.123 00:11:57.123 ' 00:11:57.123 23:32:45 nvme_xnvme -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:57.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:57.123 --rc genhtml_branch_coverage=1 00:11:57.123 --rc genhtml_function_coverage=1 00:11:57.123 --rc genhtml_legend=1 00:11:57.123 --rc geninfo_all_blocks=1 00:11:57.123 --rc geninfo_unexecuted_blocks=1 00:11:57.123 00:11:57.123 ' 00:11:57.123 23:32:45 nvme_xnvme -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:57.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:57.123 --rc genhtml_branch_coverage=1 00:11:57.123 --rc genhtml_function_coverage=1 00:11:57.123 --rc genhtml_legend=1 00:11:57.123 --rc geninfo_all_blocks=1 00:11:57.123 --rc geninfo_unexecuted_blocks=1 00:11:57.123 00:11:57.123 ' 00:11:57.123 23:32:45 nvme_xnvme -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:57.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:57.123 --rc genhtml_branch_coverage=1 00:11:57.123 --rc genhtml_function_coverage=1 00:11:57.123 --rc genhtml_legend=1 00:11:57.123 --rc geninfo_all_blocks=1 00:11:57.123 --rc geninfo_unexecuted_blocks=1 00:11:57.123 00:11:57.123 ' 00:11:57.123 23:32:45 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:57.123 23:32:45 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:11:57.123 23:32:45 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:57.123 23:32:45 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:57.123 23:32:45 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:57.123 23:32:45 nvme_xnvme -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:57.123 23:32:45 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:57.123 23:32:45 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:57.123 23:32:45 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:11:57.123 23:32:45 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:57.124 23:32:45 nvme_xnvme -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 00:11:57.124 23:32:45 nvme_xnvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:57.124 23:32:45 nvme_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:57.124 23:32:45 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:11:57.124 ************************************ 00:11:57.124 START TEST xnvme_to_malloc_dd_copy 00:11:57.124 ************************************ 00:11:57.124 23:32:45 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1125 -- # malloc_to_xnvme_copy 00:11:57.124 23:32:45 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:11:57.124 23:32:45 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:11:57.124 23:32:45 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:11:57.124 23:32:45 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@187 -- # return 00:11:57.124 23:32:45 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 00:11:57.124 23:32:45 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:11:57.124 23:32:45 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:11:57.124 23:32:45 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@18 -- # local io 00:11:57.124 23:32:45 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:11:57.124 23:32:45 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:11:57.124 23:32:45 
nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:11:57.124 23:32:45 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:11:57.124 23:32:45 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:11:57.124 23:32:45 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:11:57.124 23:32:45 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:11:57.124 23:32:45 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:11:57.124 23:32:45 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:11:57.124 23:32:45 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:11:57.124 23:32:45 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:11:57.124 23:32:45 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:11:57.124 23:32:45 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:11:57.124 23:32:45 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:11:57.124 23:32:45 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:11:57.124 23:32:45 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:11:57.124 { 00:11:57.124 "subsystems": [ 00:11:57.124 { 00:11:57.124 "subsystem": "bdev", 00:11:57.124 "config": [ 00:11:57.124 { 00:11:57.124 "params": { 00:11:57.124 "block_size": 512, 00:11:57.124 "num_blocks": 2097152, 00:11:57.124 "name": "malloc0" 00:11:57.124 }, 00:11:57.124 "method": "bdev_malloc_create" 00:11:57.124 }, 00:11:57.124 { 00:11:57.124 "params": { 00:11:57.124 "io_mechanism": "libaio", 00:11:57.124 "filename": "/dev/nullb0", 00:11:57.124 "name": "null0" 00:11:57.124 }, 00:11:57.124 "method": "bdev_xnvme_create" 00:11:57.124 }, 00:11:57.124 { 00:11:57.124 "method": "bdev_wait_for_examine" 00:11:57.124 } 00:11:57.124 ] 00:11:57.124 } 00:11:57.124 ] 00:11:57.124 } 00:11:57.385 [2024-09-28 23:32:45.298581] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
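For reference, the copy above can be reproduced outside the harness by handing the same JSON bdev config to spdk_dd; the /dev/fd/62 in the trace is just the harness passing that config over a file descriptor. A minimal sketch, assuming an SPDK build at /home/vagrant/spdk_repo/spdk, permission to load null_blk, and /tmp/null0.json as an illustrative file name:

sudo modprobe null_blk gb=1      # exposes /dev/nullb0, as init_null_blk does above
cd /home/vagrant/spdk_repo/spdk
cat > /tmp/null0.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "method": "bdev_malloc_create",
          "params": { "name": "malloc0", "block_size": 512, "num_blocks": 2097152 } },
        { "method": "bdev_xnvme_create",
          "params": { "name": "null0", "filename": "/dev/nullb0", "io_mechanism": "libaio" } },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF
./build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /tmp/null0.json
sudo modprobe -r null_blk        # matches remove_null_blk at the end of the test

Swapping --ib and --ob reverses the data path, which is exactly what the second spdk_dd invocation below does with the same config.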
00:11:57.385 [2024-09-28 23:32:45.298809] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69086 ] 00:11:57.385 [2024-09-28 23:32:45.450448] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:57.647 [2024-09-28 23:32:45.673316] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:04.842  Copying: 227/1024 [MB] (227 MBps) Copying: 452/1024 [MB] (225 MBps) Copying: 708/1024 [MB] (255 MBps) Copying: 1009/1024 [MB] (301 MBps) Copying: 1024/1024 [MB] (average 252 MBps) 00:12:04.842 00:12:04.842 23:32:52 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:12:04.842 23:32:52 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:12:04.842 23:32:52 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:04.842 23:32:52 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:04.842 { 00:12:04.842 "subsystems": [ 00:12:04.842 { 00:12:04.842 "subsystem": "bdev", 00:12:04.842 "config": [ 00:12:04.842 { 00:12:04.842 "params": { 00:12:04.842 "block_size": 512, 00:12:04.842 "num_blocks": 2097152, 00:12:04.842 "name": "malloc0" 00:12:04.842 }, 00:12:04.842 "method": "bdev_malloc_create" 00:12:04.842 }, 00:12:04.842 { 00:12:04.842 "params": { 00:12:04.842 "io_mechanism": "libaio", 00:12:04.842 "filename": "/dev/nullb0", 00:12:04.842 "name": "null0" 00:12:04.842 }, 00:12:04.842 "method": "bdev_xnvme_create" 00:12:04.842 }, 00:12:04.842 { 00:12:04.842 "method": "bdev_wait_for_examine" 00:12:04.842 } 00:12:04.842 ] 00:12:04.842 } 00:12:04.842 ] 00:12:04.842 } 00:12:04.842 [2024-09-28 23:32:52.855917] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:12:04.843 [2024-09-28 23:32:52.856048] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69180 ] 00:12:04.843 [2024-09-28 23:32:53.008444] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:05.101 [2024-09-28 23:32:53.142952] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:12.109  Copying: 233/1024 [MB] (233 MBps) Copying: 535/1024 [MB] (302 MBps) Copying: 839/1024 [MB] (304 MBps) Copying: 1024/1024 [MB] (average 283 MBps) 00:12:12.109 00:12:12.109 23:32:59 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:12:12.109 23:32:59 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:12:12.109 23:32:59 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:12:12.109 23:32:59 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:12:12.109 23:32:59 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:12.109 23:32:59 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:12.109 { 00:12:12.109 "subsystems": [ 00:12:12.109 { 00:12:12.109 "subsystem": "bdev", 00:12:12.109 "config": [ 00:12:12.109 { 00:12:12.109 "params": { 00:12:12.109 "block_size": 512, 00:12:12.109 "num_blocks": 2097152, 00:12:12.109 "name": "malloc0" 00:12:12.109 }, 00:12:12.109 "method": "bdev_malloc_create" 00:12:12.109 }, 00:12:12.109 { 00:12:12.109 "params": { 00:12:12.109 "io_mechanism": "io_uring", 00:12:12.109 "filename": "/dev/nullb0", 00:12:12.109 "name": "null0" 00:12:12.109 }, 00:12:12.109 "method": "bdev_xnvme_create" 00:12:12.109 }, 00:12:12.109 { 00:12:12.109 "method": "bdev_wait_for_examine" 00:12:12.109 } 00:12:12.109 ] 00:12:12.109 } 00:12:12.109 ] 00:12:12.109 } 00:12:12.109 [2024-09-28 23:32:59.749078] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
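The only thing that changes for this pass is the io_mechanism key, which selects how the xnvme bdev submits I/O (libaio above, io_uring here). Against a running target the same bdev can also be created over RPC instead of static JSON; a sketch that mirrors the positional bdev_xnvme_create lines printed further down in this log, assuming a target is already listening on the default socket:

# arguments: filename, bdev name, io_mechanism
./scripts/rpc.py bdev_xnvme_create /dev/nullb0 null0 io_uring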
00:12:12.109 [2024-09-28 23:32:59.749370] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69256 ] 00:12:12.109 [2024-09-28 23:32:59.897081] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:12.109 [2024-09-28 23:33:00.039707] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:18.082  Copying: 311/1024 [MB] (311 MBps) Copying: 623/1024 [MB] (312 MBps) Copying: 935/1024 [MB] (311 MBps) Copying: 1024/1024 [MB] (average 311 MBps) 00:12:18.082 00:12:18.082 23:33:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:12:18.082 23:33:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:12:18.082 23:33:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:18.082 23:33:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:18.082 { 00:12:18.082 "subsystems": [ 00:12:18.082 { 00:12:18.082 "subsystem": "bdev", 00:12:18.082 "config": [ 00:12:18.082 { 00:12:18.082 "params": { 00:12:18.082 "block_size": 512, 00:12:18.082 "num_blocks": 2097152, 00:12:18.082 "name": "malloc0" 00:12:18.082 }, 00:12:18.082 "method": "bdev_malloc_create" 00:12:18.082 }, 00:12:18.082 { 00:12:18.082 "params": { 00:12:18.082 "io_mechanism": "io_uring", 00:12:18.082 "filename": "/dev/nullb0", 00:12:18.082 "name": "null0" 00:12:18.082 }, 00:12:18.082 "method": "bdev_xnvme_create" 00:12:18.082 }, 00:12:18.082 { 00:12:18.082 "method": "bdev_wait_for_examine" 00:12:18.082 } 00:12:18.082 ] 00:12:18.082 } 00:12:18.082 ] 00:12:18.082 } 00:12:18.082 [2024-09-28 23:33:06.126375] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:12:18.082 [2024-09-28 23:33:06.126494] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69338 ] 00:12:18.340 [2024-09-28 23:33:06.276072] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:18.340 [2024-09-28 23:33:06.418544] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:24.275  Copying: 315/1024 [MB] (315 MBps) Copying: 630/1024 [MB] (315 MBps) Copying: 944/1024 [MB] (314 MBps) Copying: 1024/1024 [MB] (average 314 MBps) 00:12:24.275 00:12:24.275 23:33:12 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:12:24.275 23:33:12 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@191 -- # modprobe -r null_blk 00:12:24.275 00:12:24.275 real 0m27.182s 00:12:24.275 user 0m23.853s 00:12:24.275 sys 0m2.778s 00:12:24.275 23:33:12 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:24.275 23:33:12 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:24.275 ************************************ 00:12:24.275 END TEST xnvme_to_malloc_dd_copy 00:12:24.275 ************************************ 00:12:24.534 23:33:12 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:12:24.534 23:33:12 nvme_xnvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:24.534 23:33:12 nvme_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:24.534 23:33:12 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:24.534 ************************************ 00:12:24.534 START TEST xnvme_bdevperf 00:12:24.534 ************************************ 00:12:24.534 23:33:12 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1125 -- # xnvme_bdevperf 00:12:24.534 23:33:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:12:24.534 23:33:12 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:12:24.534 23:33:12 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:12:24.534 23:33:12 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@187 -- # return 00:12:24.534 23:33:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:12:24.534 23:33:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:12:24.534 23:33:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@60 -- # local io 00:12:24.534 23:33:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:12:24.534 23:33:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:12:24.534 23:33:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:12:24.534 23:33:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:12:24.534 23:33:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:12:24.534 23:33:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:12:24.534 23:33:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:12:24.534 23:33:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:12:24.534 23:33:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # 
method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:12:24.534 23:33:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:12:24.534 23:33:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:12:24.534 23:33:12 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:24.534 23:33:12 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:24.534 { 00:12:24.534 "subsystems": [ 00:12:24.534 { 00:12:24.534 "subsystem": "bdev", 00:12:24.534 "config": [ 00:12:24.534 { 00:12:24.534 "params": { 00:12:24.534 "io_mechanism": "libaio", 00:12:24.534 "filename": "/dev/nullb0", 00:12:24.534 "name": "null0" 00:12:24.534 }, 00:12:24.534 "method": "bdev_xnvme_create" 00:12:24.534 }, 00:12:24.534 { 00:12:24.534 "method": "bdev_wait_for_examine" 00:12:24.534 } 00:12:24.534 ] 00:12:24.534 } 00:12:24.534 ] 00:12:24.534 } 00:12:24.534 [2024-09-28 23:33:12.551325] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:12:24.534 [2024-09-28 23:33:12.551437] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69441 ] 00:12:24.792 [2024-09-28 23:33:12.703248] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:24.792 [2024-09-28 23:33:12.879693] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:25.052 Running I/O for 5 seconds... 00:12:30.204 152704.00 IOPS, 596.50 MiB/s 163040.00 IOPS, 636.88 MiB/s 175680.00 IOPS, 686.25 MiB/s 181952.00 IOPS, 710.75 MiB/s 00:12:30.204 Latency(us) 00:12:30.204 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:30.204 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:30.204 null0 : 5.00 185592.42 724.97 0.00 0.00 342.35 115.79 2066.90 00:12:30.204 =================================================================================================================== 00:12:30.204 Total : 185592.42 724.97 0.00 0.00 342.35 115.79 2066.90 00:12:30.776 23:33:18 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:12:30.776 23:33:18 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:12:30.776 23:33:18 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:12:30.776 23:33:18 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:12:30.776 23:33:18 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:30.776 23:33:18 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:30.776 { 00:12:30.776 "subsystems": [ 00:12:30.776 { 00:12:30.776 "subsystem": "bdev", 00:12:30.776 "config": [ 00:12:30.776 { 00:12:30.776 "params": { 00:12:30.776 "io_mechanism": "io_uring", 00:12:30.776 "filename": "/dev/nullb0", 00:12:30.776 "name": "null0" 00:12:30.776 }, 00:12:30.776 "method": "bdev_xnvme_create" 00:12:30.776 }, 00:12:30.776 { 00:12:30.776 "method": "bdev_wait_for_examine" 00:12:30.776 } 00:12:30.776 ] 00:12:30.776 } 00:12:30.776 ] 00:12:30.776 } 00:12:30.776 [2024-09-28 23:33:18.857275] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
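Both bdevperf invocations share the same knobs: -q 64 for queue depth, -w randread for the workload, -t 5 for run time in seconds, -o 4096 for the I/O size in bytes, and -T null0 to point the run at that one bdev (flag roles inferred from how the harness uses them here). A hand-run sketch, assuming the JSON config above is saved as /tmp/null0.json:

./build/examples/bdevperf --json /tmp/null0.json -q 64 -w randread -t 5 -T null0 -o 4096

Comparing the two passes is the point of the loop: the libaio run above averages roughly 185K IOPS, while the io_uring run below holds about 231K IOPS at the same queue depth and I/O size.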
00:12:30.776 [2024-09-28 23:33:18.857415] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69515 ] 00:12:31.038 [2024-09-28 23:33:19.012155] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:31.038 [2024-09-28 23:33:19.158725] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:31.299 Running I/O for 5 seconds... 00:12:36.548 230656.00 IOPS, 901.00 MiB/s 230976.00 IOPS, 902.25 MiB/s 231168.00 IOPS, 903.00 MiB/s 231264.00 IOPS, 903.38 MiB/s 00:12:36.548 Latency(us) 00:12:36.548 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:36.548 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:36.548 null0 : 5.00 231298.98 903.51 0.00 0.00 274.30 150.45 1524.97 00:12:36.548 =================================================================================================================== 00:12:36.548 Total : 231298.98 903.51 0.00 0.00 274.30 150.45 1524.97 00:12:36.808 23:33:24 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@82 -- # remove_null_blk 00:12:36.808 23:33:24 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@191 -- # modprobe -r null_blk 00:12:37.070 00:12:37.070 real 0m12.547s 00:12:37.070 user 0m10.078s 00:12:37.070 sys 0m2.237s 00:12:37.070 23:33:25 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:37.070 ************************************ 00:12:37.070 END TEST xnvme_bdevperf 00:12:37.070 ************************************ 00:12:37.070 23:33:25 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:37.070 ************************************ 00:12:37.070 END TEST nvme_xnvme 00:12:37.070 ************************************ 00:12:37.070 00:12:37.070 real 0m39.999s 00:12:37.070 user 0m34.048s 00:12:37.070 sys 0m5.144s 00:12:37.070 23:33:25 nvme_xnvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:37.070 23:33:25 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:37.070 23:33:25 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:12:37.070 23:33:25 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:37.070 23:33:25 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:37.070 23:33:25 -- common/autotest_common.sh@10 -- # set +x 00:12:37.070 ************************************ 00:12:37.070 START TEST blockdev_xnvme 00:12:37.070 ************************************ 00:12:37.070 23:33:25 blockdev_xnvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:12:37.070 * Looking for test storage... 
00:12:37.070 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:12:37.070 23:33:25 blockdev_xnvme -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:37.070 23:33:25 blockdev_xnvme -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:37.070 23:33:25 blockdev_xnvme -- common/autotest_common.sh@1681 -- # lcov --version 00:12:37.332 23:33:25 blockdev_xnvme -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:37.332 23:33:25 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:37.332 23:33:25 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:37.332 23:33:25 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:37.332 23:33:25 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:37.332 23:33:25 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:37.332 23:33:25 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:37.332 23:33:25 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:37.332 23:33:25 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:37.332 23:33:25 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:37.332 23:33:25 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:37.332 23:33:25 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:37.332 23:33:25 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:12:37.332 23:33:25 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:12:37.332 23:33:25 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:37.332 23:33:25 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:37.332 23:33:25 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:12:37.332 23:33:25 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:12:37.332 23:33:25 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:37.332 23:33:25 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:12:37.332 23:33:25 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:37.332 23:33:25 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:12:37.332 23:33:25 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:12:37.332 23:33:25 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:37.332 23:33:25 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:12:37.332 23:33:25 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:37.332 23:33:25 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:37.332 23:33:25 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:37.332 23:33:25 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:12:37.332 23:33:25 blockdev_xnvme -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:37.332 23:33:25 blockdev_xnvme -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:37.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:37.332 --rc genhtml_branch_coverage=1 00:12:37.332 --rc genhtml_function_coverage=1 00:12:37.332 --rc genhtml_legend=1 00:12:37.332 --rc geninfo_all_blocks=1 00:12:37.332 --rc geninfo_unexecuted_blocks=1 00:12:37.332 00:12:37.332 ' 00:12:37.332 23:33:25 blockdev_xnvme -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:37.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:37.332 --rc genhtml_branch_coverage=1 00:12:37.332 --rc genhtml_function_coverage=1 00:12:37.332 --rc genhtml_legend=1 
00:12:37.332 --rc geninfo_all_blocks=1 00:12:37.332 --rc geninfo_unexecuted_blocks=1 00:12:37.332 00:12:37.332 ' 00:12:37.332 23:33:25 blockdev_xnvme -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:37.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:37.332 --rc genhtml_branch_coverage=1 00:12:37.332 --rc genhtml_function_coverage=1 00:12:37.332 --rc genhtml_legend=1 00:12:37.332 --rc geninfo_all_blocks=1 00:12:37.332 --rc geninfo_unexecuted_blocks=1 00:12:37.332 00:12:37.332 ' 00:12:37.332 23:33:25 blockdev_xnvme -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:37.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:37.332 --rc genhtml_branch_coverage=1 00:12:37.332 --rc genhtml_function_coverage=1 00:12:37.332 --rc genhtml_legend=1 00:12:37.332 --rc geninfo_all_blocks=1 00:12:37.332 --rc geninfo_unexecuted_blocks=1 00:12:37.332 00:12:37.332 ' 00:12:37.332 23:33:25 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:12:37.332 23:33:25 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:12:37.332 23:33:25 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:12:37.332 23:33:25 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:37.332 23:33:25 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:12:37.332 23:33:25 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:12:37.332 23:33:25 blockdev_xnvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:12:37.332 23:33:25 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:12:37.332 23:33:25 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:12:37.332 23:33:25 blockdev_xnvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:12:37.332 23:33:25 blockdev_xnvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:12:37.332 23:33:25 blockdev_xnvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:12:37.332 23:33:25 blockdev_xnvme -- bdev/blockdev.sh@673 -- # uname -s 00:12:37.332 23:33:25 blockdev_xnvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:12:37.332 23:33:25 blockdev_xnvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:12:37.332 23:33:25 blockdev_xnvme -- bdev/blockdev.sh@681 -- # test_type=xnvme 00:12:37.332 23:33:25 blockdev_xnvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:12:37.332 23:33:25 blockdev_xnvme -- bdev/blockdev.sh@683 -- # dek= 00:12:37.332 23:33:25 blockdev_xnvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:12:37.332 23:33:25 blockdev_xnvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:12:37.332 23:33:25 blockdev_xnvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:12:37.332 23:33:25 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == bdev ]] 00:12:37.332 23:33:25 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == crypto_* ]] 00:12:37.332 23:33:25 blockdev_xnvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:12:37.332 23:33:25 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=69657 00:12:37.332 23:33:25 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:12:37.332 23:33:25 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 69657 00:12:37.332 23:33:25 blockdev_xnvme -- common/autotest_common.sh@831 -- # '[' -z 69657 ']' 00:12:37.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
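From here the suite stops shelling out to one-shot tools and starts a long-lived spdk_tgt that it drives over the /var/tmp/spdk.sock RPC socket; waitforlisten blocks until that socket answers. A simplified sketch of the same pattern (the real waitforlisten in autotest_common.sh adds a retry cap and checks that the pid is still alive):

./build/bin/spdk_tgt &
spdk_tgt_pid=$!
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1    # keep polling until the RPC socket is up
done
# ... issue RPCs such as bdev_xnvme_create and bdev_get_bdevs, as below ...
kill "$spdk_tgt_pid"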
00:12:37.332 23:33:25 blockdev_xnvme -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:37.332 23:33:25 blockdev_xnvme -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:37.332 23:33:25 blockdev_xnvme -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:37.332 23:33:25 blockdev_xnvme -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:37.332 23:33:25 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:37.332 23:33:25 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:12:37.332 [2024-09-28 23:33:25.365105] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:12:37.332 [2024-09-28 23:33:25.365263] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69657 ] 00:12:37.593 [2024-09-28 23:33:25.516803] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:37.593 [2024-09-28 23:33:25.670419] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:38.164 23:33:26 blockdev_xnvme -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:38.164 23:33:26 blockdev_xnvme -- common/autotest_common.sh@864 -- # return 0 00:12:38.164 23:33:26 blockdev_xnvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:12:38.164 23:33:26 blockdev_xnvme -- bdev/blockdev.sh@728 -- # setup_xnvme_conf 00:12:38.164 23:33:26 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:12:38.164 23:33:26 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:12:38.164 23:33:26 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:38.425 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:38.685 Waiting for block devices as requested 00:12:38.685 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:12:38.685 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:12:38.685 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:12:38.685 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:12:43.976 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:12:43.976 23:33:31 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:12:43.976 23:33:31 blockdev_xnvme -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:12:43.976 23:33:31 blockdev_xnvme -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:12:43.976 23:33:31 blockdev_xnvme -- common/autotest_common.sh@1656 -- # local nvme bdf 00:12:43.976 23:33:31 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:43.976 23:33:31 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:12:43.976 23:33:31 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:12:43.976 23:33:31 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:12:43.976 23:33:31 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:43.976 23:33:31 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:43.976 23:33:31 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 
00:12:43.976 23:33:31 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:12:43.976 23:33:31 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:12:43.976 23:33:31 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:43.976 23:33:31 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:43.976 23:33:31 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:12:43.976 23:33:31 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:12:43.976 23:33:31 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:12:43.976 23:33:31 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:43.976 23:33:31 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:43.976 23:33:31 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:12:43.976 23:33:31 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:12:43.976 23:33:31 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:12:43.976 23:33:31 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:43.976 23:33:31 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:43.976 23:33:31 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:12:43.976 23:33:31 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:12:43.976 23:33:31 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:12:43.976 23:33:31 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:43.976 23:33:31 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:43.976 23:33:31 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:12:43.976 23:33:31 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:12:43.976 23:33:31 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:12:43.976 23:33:31 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:43.976 23:33:31 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:43.976 23:33:31 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:12:43.976 23:33:31 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:12:43.976 23:33:31 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:12:43.976 23:33:31 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:43.976 23:33:31 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:43.976 23:33:31 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:12:43.976 23:33:31 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:43.976 23:33:31 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:43.976 23:33:31 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:43.976 23:33:31 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:12:43.976 23:33:31 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:43.976 23:33:31 blockdev_xnvme -- 
bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:43.976 23:33:31 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:43.976 23:33:31 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:12:43.976 23:33:31 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:43.976 23:33:31 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:43.976 23:33:31 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:43.976 23:33:31 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n2 ]] 00:12:43.976 23:33:31 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:43.976 23:33:31 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:43.976 23:33:31 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:43.976 23:33:31 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n3 ]] 00:12:43.976 23:33:31 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:43.976 23:33:31 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:43.976 23:33:31 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:43.976 23:33:31 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:12:43.976 23:33:31 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:43.976 23:33:31 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:43.976 23:33:31 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:12:43.976 23:33:31 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:12:43.976 23:33:31 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme2n2 nvme2n2 io_uring' 'bdev_xnvme_create /dev/nvme2n3 nvme2n3 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:12:43.976 23:33:31 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.976 23:33:31 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:43.976 nvme0n1 00:12:43.976 nvme1n1 00:12:43.976 nvme2n1 00:12:43.976 nvme2n2 00:12:43.976 nvme2n3 00:12:43.976 nvme3n1 00:12:43.976 23:33:31 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.976 23:33:31 blockdev_xnvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:12:43.976 23:33:31 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.977 23:33:31 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:43.977 23:33:32 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.977 23:33:32 blockdev_xnvme -- bdev/blockdev.sh@739 -- # cat 00:12:43.977 23:33:32 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:12:43.977 23:33:32 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.977 23:33:32 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:43.977 23:33:32 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.977 23:33:32 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:12:43.977 23:33:32 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.977 23:33:32 blockdev_xnvme -- 
common/autotest_common.sh@10 -- # set +x 00:12:43.977 23:33:32 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.977 23:33:32 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:12:43.977 23:33:32 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.977 23:33:32 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:43.977 23:33:32 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.977 23:33:32 blockdev_xnvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:12:43.977 23:33:32 blockdev_xnvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:12:43.977 23:33:32 blockdev_xnvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:12:43.977 23:33:32 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.977 23:33:32 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:43.977 23:33:32 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.977 23:33:32 blockdev_xnvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:12:43.977 23:33:32 blockdev_xnvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "4cb9a717-3e2b-48a9-bbd7-0429c2625ce0"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "4cb9a717-3e2b-48a9-bbd7-0429c2625ce0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "3f9479bb-4da9-402f-8150-9dd81aa3a603"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "3f9479bb-4da9-402f-8150-9dd81aa3a603",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "df61896d-372c-44a7-97b2-72e19b748aab"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "df61896d-372c-44a7-97b2-72e19b748aab",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' 
' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "e26de1f5-5c81-45c1-8fa2-3699384b9d77"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "e26de1f5-5c81-45c1-8fa2-3699384b9d77",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "97fae3ae-ca69-4584-8b44-8b24da97528b"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "97fae3ae-ca69-4584-8b44-8b24da97528b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "5c62c549-f95c-49a5-bc76-119d64a0645e"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "5c62c549-f95c-49a5-bc76-119d64a0645e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:12:43.977 23:33:32 blockdev_xnvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:12:43.977 23:33:32 blockdev_xnvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:12:43.977 23:33:32 blockdev_xnvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=nvme0n1 00:12:43.977 23:33:32 blockdev_xnvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:12:43.977 23:33:32 blockdev_xnvme -- bdev/blockdev.sh@753 -- # killprocess 69657 00:12:43.977 23:33:32 
blockdev_xnvme -- common/autotest_common.sh@950 -- # '[' -z 69657 ']' 00:12:43.977 23:33:32 blockdev_xnvme -- common/autotest_common.sh@954 -- # kill -0 69657 00:12:43.977 23:33:32 blockdev_xnvme -- common/autotest_common.sh@955 -- # uname 00:12:43.977 23:33:32 blockdev_xnvme -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:43.977 23:33:32 blockdev_xnvme -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69657 00:12:44.238 23:33:32 blockdev_xnvme -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:44.238 23:33:32 blockdev_xnvme -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:44.238 killing process with pid 69657 00:12:44.238 23:33:32 blockdev_xnvme -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69657' 00:12:44.238 23:33:32 blockdev_xnvme -- common/autotest_common.sh@969 -- # kill 69657 00:12:44.238 23:33:32 blockdev_xnvme -- common/autotest_common.sh@974 -- # wait 69657 00:12:45.622 23:33:33 blockdev_xnvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:12:45.622 23:33:33 blockdev_xnvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:12:45.622 23:33:33 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:12:45.622 23:33:33 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:45.622 23:33:33 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:45.622 ************************************ 00:12:45.622 START TEST bdev_hello_world 00:12:45.622 ************************************ 00:12:45.622 23:33:33 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:12:45.622 [2024-09-28 23:33:33.718352] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:12:45.622 [2024-09-28 23:33:33.718442] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70021 ] 00:12:45.883 [2024-09-28 23:33:33.855005] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:45.883 [2024-09-28 23:33:33.995018] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:46.143 [2024-09-28 23:33:34.276655] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:12:46.143 [2024-09-28 23:33:34.276693] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:12:46.143 [2024-09-28 23:33:34.276706] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:12:46.143 [2024-09-28 23:33:34.278157] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:12:46.143 [2024-09-28 23:33:34.278770] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:12:46.143 [2024-09-28 23:33:34.278798] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:12:46.143 [2024-09-28 23:33:34.279674] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:12:46.143 00:12:46.143 [2024-09-28 23:33:34.279714] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:12:47.079 00:12:47.080 real 0m1.244s 00:12:47.080 user 0m0.979s 00:12:47.080 sys 0m0.153s 00:12:47.080 23:33:34 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:47.080 23:33:34 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:12:47.080 ************************************ 00:12:47.080 END TEST bdev_hello_world 00:12:47.080 ************************************ 00:12:47.080 23:33:34 blockdev_xnvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:12:47.080 23:33:34 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:47.080 23:33:34 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:47.080 23:33:34 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:47.080 ************************************ 00:12:47.080 START TEST bdev_bounds 00:12:47.080 ************************************ 00:12:47.080 23:33:34 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:12:47.080 23:33:34 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=70052 00:12:47.080 23:33:34 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:12:47.080 Process bdevio pid: 70052 00:12:47.080 23:33:34 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 70052' 00:12:47.080 23:33:34 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 70052 00:12:47.080 23:33:34 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 70052 ']' 00:12:47.080 23:33:34 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:47.080 23:33:34 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:47.080 23:33:34 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:12:47.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:47.080 23:33:34 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:47.080 23:33:34 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:47.080 23:33:34 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:12:47.080 [2024-09-28 23:33:35.034966] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
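bdev_bounds exercises the same six bdevs through bdevio, started with -w so that the app comes up, sits idle, and only runs its CUnit suites once triggered over RPC (the harness runs them on three reactors, per the -c 0x7 mask in the EAL line below). The two-step pattern, sketched with the paths from the command in this log:

./test/bdev/bdevio/bdevio -w -s 0 --json ./test/bdev/bdev.json '' &
# once bdevio is listening on its RPC socket:
./test/bdev/bdevio/tests.py perform_tests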
00:12:47.080 [2024-09-28 23:33:35.035107] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70052 ] 00:12:47.080 [2024-09-28 23:33:35.188305] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:47.341 [2024-09-28 23:33:35.410154] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:12:47.341 [2024-09-28 23:33:35.410485] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:12:47.341 [2024-09-28 23:33:35.410565] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:47.911 23:33:35 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:47.911 23:33:35 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:12:47.911 23:33:35 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:12:47.911 I/O targets: 00:12:47.911 nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:12:47.911 nvme1n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:12:47.911 nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:47.911 nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:47.911 nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:47.911 nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:12:47.911 00:12:47.911 00:12:47.911 CUnit - A unit testing framework for C - Version 2.1-3 00:12:47.911 http://cunit.sourceforge.net/ 00:12:47.911 00:12:47.911 00:12:47.911 Suite: bdevio tests on: nvme3n1 00:12:47.911 Test: blockdev write read block ...passed 00:12:47.911 Test: blockdev write zeroes read block ...passed 00:12:47.911 Test: blockdev write zeroes read no split ...passed 00:12:47.911 Test: blockdev write zeroes read split ...passed 00:12:47.911 Test: blockdev write zeroes read split partial ...passed 00:12:47.911 Test: blockdev reset ...passed 00:12:47.911 Test: blockdev write read 8 blocks ...passed 00:12:47.911 Test: blockdev write read size > 128k ...passed 00:12:47.911 Test: blockdev write read invalid size ...passed 00:12:47.911 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:47.911 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:47.911 Test: blockdev write read max offset ...passed 00:12:47.911 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:47.911 Test: blockdev writev readv 8 blocks ...passed 00:12:47.911 Test: blockdev writev readv 30 x 1block ...passed 00:12:47.911 Test: blockdev writev readv block ...passed 00:12:48.169 Test: blockdev writev readv size > 128k ...passed 00:12:48.169 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:48.169 Test: blockdev comparev and writev ...passed 00:12:48.169 Test: blockdev nvme passthru rw ...passed 00:12:48.169 Test: blockdev nvme passthru vendor specific ...passed 00:12:48.169 Test: blockdev nvme admin passthru ...passed 00:12:48.169 Test: blockdev copy ...passed 00:12:48.169 Suite: bdevio tests on: nvme2n3 00:12:48.169 Test: blockdev write read block ...passed 00:12:48.169 Test: blockdev write zeroes read block ...passed 00:12:48.169 Test: blockdev write zeroes read no split ...passed 00:12:48.169 Test: blockdev write zeroes read split ...passed 00:12:48.169 Test: blockdev write zeroes read split partial ...passed 00:12:48.169 Test: blockdev reset ...passed 
00:12:48.169 Test: blockdev write read 8 blocks ...passed 00:12:48.169 Test: blockdev write read size > 128k ...passed 00:12:48.169 Test: blockdev write read invalid size ...passed 00:12:48.169 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:48.169 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:48.169 Test: blockdev write read max offset ...passed 00:12:48.169 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:48.169 Test: blockdev writev readv 8 blocks ...passed 00:12:48.169 Test: blockdev writev readv 30 x 1block ...passed 00:12:48.169 Test: blockdev writev readv block ...passed 00:12:48.169 Test: blockdev writev readv size > 128k ...passed 00:12:48.169 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:48.169 Test: blockdev comparev and writev ...passed 00:12:48.169 Test: blockdev nvme passthru rw ...passed 00:12:48.169 Test: blockdev nvme passthru vendor specific ...passed 00:12:48.169 Test: blockdev nvme admin passthru ...passed 00:12:48.169 Test: blockdev copy ...passed 00:12:48.169 Suite: bdevio tests on: nvme2n2 00:12:48.169 Test: blockdev write read block ...passed 00:12:48.169 Test: blockdev write zeroes read block ...passed 00:12:48.169 Test: blockdev write zeroes read no split ...passed 00:12:48.169 Test: blockdev write zeroes read split ...passed 00:12:48.169 Test: blockdev write zeroes read split partial ...passed 00:12:48.169 Test: blockdev reset ...passed 00:12:48.169 Test: blockdev write read 8 blocks ...passed 00:12:48.169 Test: blockdev write read size > 128k ...passed 00:12:48.169 Test: blockdev write read invalid size ...passed 00:12:48.169 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:48.169 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:48.169 Test: blockdev write read max offset ...passed 00:12:48.169 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:48.169 Test: blockdev writev readv 8 blocks ...passed 00:12:48.169 Test: blockdev writev readv 30 x 1block ...passed 00:12:48.169 Test: blockdev writev readv block ...passed 00:12:48.169 Test: blockdev writev readv size > 128k ...passed 00:12:48.169 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:48.169 Test: blockdev comparev and writev ...passed 00:12:48.169 Test: blockdev nvme passthru rw ...passed 00:12:48.169 Test: blockdev nvme passthru vendor specific ...passed 00:12:48.169 Test: blockdev nvme admin passthru ...passed 00:12:48.169 Test: blockdev copy ...passed 00:12:48.169 Suite: bdevio tests on: nvme2n1 00:12:48.169 Test: blockdev write read block ...passed 00:12:48.169 Test: blockdev write zeroes read block ...passed 00:12:48.169 Test: blockdev write zeroes read no split ...passed 00:12:48.169 Test: blockdev write zeroes read split ...passed 00:12:48.169 Test: blockdev write zeroes read split partial ...passed 00:12:48.169 Test: blockdev reset ...passed 00:12:48.169 Test: blockdev write read 8 blocks ...passed 00:12:48.169 Test: blockdev write read size > 128k ...passed 00:12:48.169 Test: blockdev write read invalid size ...passed 00:12:48.169 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:48.169 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:48.169 Test: blockdev write read max offset ...passed 00:12:48.169 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:48.169 Test: blockdev writev readv 8 blocks 
...passed 00:12:48.169 Test: blockdev writev readv 30 x 1block ...passed 00:12:48.169 Test: blockdev writev readv block ...passed 00:12:48.169 Test: blockdev writev readv size > 128k ...passed 00:12:48.169 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:48.169 Test: blockdev comparev and writev ...passed 00:12:48.169 Test: blockdev nvme passthru rw ...passed 00:12:48.169 Test: blockdev nvme passthru vendor specific ...passed 00:12:48.169 Test: blockdev nvme admin passthru ...passed 00:12:48.169 Test: blockdev copy ...passed 00:12:48.169 Suite: bdevio tests on: nvme1n1 00:12:48.169 Test: blockdev write read block ...passed 00:12:48.169 Test: blockdev write zeroes read block ...passed 00:12:48.169 Test: blockdev write zeroes read no split ...passed 00:12:48.428 Test: blockdev write zeroes read split ...passed 00:12:48.428 Test: blockdev write zeroes read split partial ...passed 00:12:48.428 Test: blockdev reset ...passed 00:12:48.428 Test: blockdev write read 8 blocks ...passed 00:12:48.428 Test: blockdev write read size > 128k ...passed 00:12:48.428 Test: blockdev write read invalid size ...passed 00:12:48.428 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:48.428 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:48.428 Test: blockdev write read max offset ...passed 00:12:48.428 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:48.428 Test: blockdev writev readv 8 blocks ...passed 00:12:48.428 Test: blockdev writev readv 30 x 1block ...passed 00:12:48.428 Test: blockdev writev readv block ...passed 00:12:48.428 Test: blockdev writev readv size > 128k ...passed 00:12:48.428 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:48.428 Test: blockdev comparev and writev ...passed 00:12:48.428 Test: blockdev nvme passthru rw ...passed 00:12:48.428 Test: blockdev nvme passthru vendor specific ...passed 00:12:48.428 Test: blockdev nvme admin passthru ...passed 00:12:48.428 Test: blockdev copy ...passed 00:12:48.428 Suite: bdevio tests on: nvme0n1 00:12:48.428 Test: blockdev write read block ...passed 00:12:48.428 Test: blockdev write zeroes read block ...passed 00:12:48.428 Test: blockdev write zeroes read no split ...passed 00:12:48.428 Test: blockdev write zeroes read split ...passed 00:12:48.428 Test: blockdev write zeroes read split partial ...passed 00:12:48.428 Test: blockdev reset ...passed 00:12:48.428 Test: blockdev write read 8 blocks ...passed 00:12:48.428 Test: blockdev write read size > 128k ...passed 00:12:48.428 Test: blockdev write read invalid size ...passed 00:12:48.428 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:48.428 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:48.428 Test: blockdev write read max offset ...passed 00:12:48.428 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:48.428 Test: blockdev writev readv 8 blocks ...passed 00:12:48.428 Test: blockdev writev readv 30 x 1block ...passed 00:12:48.428 Test: blockdev writev readv block ...passed 00:12:48.428 Test: blockdev writev readv size > 128k ...passed 00:12:48.428 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:48.428 Test: blockdev comparev and writev ...passed 00:12:48.428 Test: blockdev nvme passthru rw ...passed 00:12:48.428 Test: blockdev nvme passthru vendor specific ...passed 00:12:48.428 Test: blockdev nvme admin passthru ...passed 00:12:48.428 Test: blockdev copy ...passed 
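[annotation] The six bdevio suites above are driven in two steps, both visible earlier in this trace: bdevio is started in wait mode (-w) against the JSON config, then the tests are kicked over the default RPC socket. A minimal sketch of that pairing (backgrounding, the waitforlisten polling, and the trailing empty argument from the trace are elided):

  /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
  # ...wait until /var/tmp/spdk.sock accepts RPCs (waitforlisten), then:
  /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests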
00:12:48.428 00:12:48.428 Run Summary: Type Total Ran Passed Failed Inactive 00:12:48.428 suites 6 6 n/a 0 0 00:12:48.428 tests 138 138 138 0 0 00:12:48.428 asserts 780 780 780 0 n/a 00:12:48.428 00:12:48.428 Elapsed time = 1.172 seconds 00:12:48.428 0 00:12:48.428 23:33:36 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 70052 00:12:48.428 23:33:36 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 70052 ']' 00:12:48.428 23:33:36 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 70052 00:12:48.428 23:33:36 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:12:48.428 23:33:36 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:48.428 23:33:36 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70052 00:12:48.428 killing process with pid 70052 00:12:48.428 23:33:36 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:48.428 23:33:36 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:48.428 23:33:36 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70052' 00:12:48.428 23:33:36 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@969 -- # kill 70052 00:12:48.428 23:33:36 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@974 -- # wait 70052 00:12:49.366 23:33:37 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:12:49.366 00:12:49.366 real 0m2.365s 00:12:49.366 user 0m5.499s 00:12:49.366 sys 0m0.403s 00:12:49.366 23:33:37 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:49.366 23:33:37 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:12:49.366 ************************************ 00:12:49.366 END TEST bdev_bounds 00:12:49.366 ************************************ 00:12:49.366 23:33:37 blockdev_xnvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:12:49.366 23:33:37 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:12:49.366 23:33:37 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:49.366 23:33:37 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:49.366 ************************************ 00:12:49.366 START TEST bdev_nbd 00:12:49.366 ************************************ 00:12:49.366 23:33:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:12:49.366 23:33:37 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:12:49.366 23:33:37 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:12:49.366 23:33:37 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:49.366 23:33:37 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:49.366 23:33:37 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:12:49.366 23:33:37 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:12:49.366 23:33:37 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
00:12:49.366 23:33:37 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:12:49.366 23:33:37 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:12:49.366 23:33:37 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:12:49.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:12:49.366 23:33:37 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:12:49.366 23:33:37 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:49.366 23:33:37 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:12:49.366 23:33:37 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:12:49.366 23:33:37 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:12:49.366 23:33:37 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=70108 00:12:49.366 23:33:37 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:12:49.366 23:33:37 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 70108 /var/tmp/spdk-nbd.sock 00:12:49.366 23:33:37 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:12:49.366 23:33:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 70108 ']' 00:12:49.366 23:33:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:12:49.366 23:33:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:49.366 23:33:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:12:49.366 23:33:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:49.366 23:33:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:12:49.366 [2024-09-28 23:33:37.457342] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
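[annotation] What follows is nbd_rpc_start_stop_verify: each bdev is exported as a kernel /dev/nbdN device over the dedicated /var/tmp/spdk-nbd.sock socket, probed for readability, then torn down again. One iteration, condensed from the trace below (helper names are the ones in the trace; the retry loops inside waitfornbd/waitfornbd_exit are elided):

  rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock'
  $rpc nbd_start_disk nvme0n1          # SPDK picks the device, here /dev/nbd0
  grep -q -w nbd0 /proc/partitions     # waitfornbd: device is registered
  dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest \
     bs=4096 count=1 iflag=direct      # one O_DIRECT block must be readable
  test "$(stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest)" != 0
  $rpc nbd_stop_disk /dev/nbd0         # waitfornbd_exit then polls until
                                       # nbd0 leaves /proc/partitions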
00:12:49.366 [2024-09-28 23:33:37.457771] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:49.625 [2024-09-28 23:33:37.608723] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:49.625 [2024-09-28 23:33:37.763151] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:50.262 23:33:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:50.262 23:33:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:12:50.262 23:33:38 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:12:50.262 23:33:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:50.262 23:33:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:12:50.262 23:33:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:12:50.262 23:33:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:12:50.262 23:33:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:50.262 23:33:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:12:50.262 23:33:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:12:50.262 23:33:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:12:50.262 23:33:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:12:50.262 23:33:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:12:50.262 23:33:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:50.262 23:33:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:12:50.535 23:33:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:12:50.535 23:33:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:12:50.535 23:33:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:12:50.535 23:33:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:12:50.535 23:33:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:50.535 23:33:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:50.535 23:33:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:50.535 23:33:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:12:50.535 23:33:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:50.535 23:33:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:50.535 23:33:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:50.535 23:33:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:50.535 
1+0 records in 00:12:50.535 1+0 records out 00:12:50.535 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000751067 s, 5.5 MB/s 00:12:50.535 23:33:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:50.535 23:33:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:50.535 23:33:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:50.535 23:33:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:50.535 23:33:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:50.535 23:33:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:50.535 23:33:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:50.535 23:33:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:12:50.795 23:33:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:12:50.795 23:33:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:12:50.795 23:33:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:12:50.795 23:33:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:12:50.795 23:33:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:50.795 23:33:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:50.795 23:33:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:50.795 23:33:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:12:50.795 23:33:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:50.795 23:33:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:50.795 23:33:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:50.795 23:33:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:50.795 1+0 records in 00:12:50.795 1+0 records out 00:12:50.795 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000767049 s, 5.3 MB/s 00:12:50.795 23:33:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:50.795 23:33:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:50.795 23:33:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:50.795 23:33:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:50.795 23:33:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:50.795 23:33:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:50.795 23:33:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:50.796 23:33:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:12:51.054 23:33:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:12:51.054 23:33:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:12:51.054 23:33:39 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:12:51.054 23:33:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:12:51.054 23:33:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:51.054 23:33:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:51.054 23:33:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:51.054 23:33:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:12:51.054 23:33:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:51.054 23:33:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:51.054 23:33:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:51.054 23:33:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:51.054 1+0 records in 00:12:51.054 1+0 records out 00:12:51.054 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00097644 s, 4.2 MB/s 00:12:51.055 23:33:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:51.055 23:33:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:51.055 23:33:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:51.055 23:33:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:51.055 23:33:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:51.055 23:33:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:51.055 23:33:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:51.055 23:33:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 00:12:51.314 23:33:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:12:51.314 23:33:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:12:51.314 23:33:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:12:51.314 23:33:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:12:51.314 23:33:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:51.314 23:33:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:51.314 23:33:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:51.314 23:33:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:12:51.314 23:33:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:51.314 23:33:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:51.314 23:33:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:51.314 23:33:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:51.314 1+0 records in 00:12:51.314 1+0 records out 00:12:51.314 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00132024 s, 3.1 MB/s 00:12:51.314 23:33:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:51.314 23:33:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:51.314 23:33:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:51.314 23:33:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:51.314 23:33:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:51.314 23:33:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:51.314 23:33:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:51.315 23:33:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 00:12:51.575 23:33:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:12:51.575 23:33:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:12:51.575 23:33:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:12:51.575 23:33:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:12:51.575 23:33:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:51.575 23:33:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:51.575 23:33:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:51.575 23:33:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:12:51.575 23:33:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:51.575 23:33:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:51.575 23:33:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:51.575 23:33:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:51.575 1+0 records in 00:12:51.575 1+0 records out 00:12:51.575 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00109753 s, 3.7 MB/s 00:12:51.575 23:33:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:51.575 23:33:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:51.575 23:33:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:51.575 23:33:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:51.575 23:33:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:51.575 23:33:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:51.575 23:33:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:51.575 23:33:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:12:51.836 23:33:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:12:51.836 23:33:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:12:51.836 23:33:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:12:51.836 23:33:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:12:51.836 23:33:39 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:51.836 23:33:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:51.836 23:33:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:51.836 23:33:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:12:51.836 23:33:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:51.836 23:33:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:51.836 23:33:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:51.836 23:33:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:51.836 1+0 records in 00:12:51.836 1+0 records out 00:12:51.836 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00108223 s, 3.8 MB/s 00:12:51.836 23:33:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:51.836 23:33:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:51.836 23:33:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:51.836 23:33:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:51.837 23:33:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:51.837 23:33:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:51.837 23:33:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:51.837 23:33:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:52.096 23:33:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:12:52.096 { 00:12:52.096 "nbd_device": "/dev/nbd0", 00:12:52.096 "bdev_name": "nvme0n1" 00:12:52.096 }, 00:12:52.096 { 00:12:52.096 "nbd_device": "/dev/nbd1", 00:12:52.096 "bdev_name": "nvme1n1" 00:12:52.096 }, 00:12:52.096 { 00:12:52.096 "nbd_device": "/dev/nbd2", 00:12:52.096 "bdev_name": "nvme2n1" 00:12:52.096 }, 00:12:52.096 { 00:12:52.096 "nbd_device": "/dev/nbd3", 00:12:52.096 "bdev_name": "nvme2n2" 00:12:52.096 }, 00:12:52.096 { 00:12:52.096 "nbd_device": "/dev/nbd4", 00:12:52.096 "bdev_name": "nvme2n3" 00:12:52.096 }, 00:12:52.096 { 00:12:52.096 "nbd_device": "/dev/nbd5", 00:12:52.096 "bdev_name": "nvme3n1" 00:12:52.096 } 00:12:52.096 ]' 00:12:52.096 23:33:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:12:52.096 23:33:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:12:52.096 23:33:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:12:52.096 { 00:12:52.096 "nbd_device": "/dev/nbd0", 00:12:52.096 "bdev_name": "nvme0n1" 00:12:52.096 }, 00:12:52.096 { 00:12:52.096 "nbd_device": "/dev/nbd1", 00:12:52.096 "bdev_name": "nvme1n1" 00:12:52.096 }, 00:12:52.096 { 00:12:52.096 "nbd_device": "/dev/nbd2", 00:12:52.096 "bdev_name": "nvme2n1" 00:12:52.096 }, 00:12:52.096 { 00:12:52.096 "nbd_device": "/dev/nbd3", 00:12:52.096 "bdev_name": "nvme2n2" 00:12:52.096 }, 00:12:52.096 { 00:12:52.096 "nbd_device": "/dev/nbd4", 00:12:52.096 "bdev_name": "nvme2n3" 00:12:52.096 }, 00:12:52.096 { 00:12:52.096 "nbd_device": 
"/dev/nbd5", 00:12:52.096 "bdev_name": "nvme3n1" 00:12:52.096 } 00:12:52.096 ]' 00:12:52.096 23:33:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:12:52.096 23:33:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:52.096 23:33:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:12:52.096 23:33:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:52.096 23:33:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:12:52.096 23:33:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:52.096 23:33:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:52.357 23:33:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:52.357 23:33:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:52.357 23:33:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:52.357 23:33:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:52.357 23:33:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:52.357 23:33:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:52.357 23:33:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:52.357 23:33:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:52.357 23:33:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:52.357 23:33:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:52.618 23:33:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:52.618 23:33:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:52.618 23:33:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:52.618 23:33:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:52.618 23:33:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:52.618 23:33:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:52.618 23:33:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:52.618 23:33:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:52.618 23:33:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:52.618 23:33:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:12:52.618 23:33:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:12:52.618 23:33:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:12:52.618 23:33:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:12:52.618 23:33:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:52.618 23:33:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:52.618 23:33:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:12:52.618 23:33:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:52.618 23:33:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:52.618 23:33:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:52.618 23:33:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:12:52.877 23:33:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:12:52.877 23:33:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:12:52.877 23:33:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:12:52.877 23:33:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:52.877 23:33:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:52.877 23:33:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:12:52.878 23:33:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:52.878 23:33:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:52.878 23:33:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:52.878 23:33:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:12:53.138 23:33:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:12:53.138 23:33:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:12:53.138 23:33:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:12:53.138 23:33:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:53.138 23:33:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:53.138 23:33:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:12:53.138 23:33:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:53.138 23:33:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:53.138 23:33:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:53.138 23:33:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:12:53.398 23:33:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:12:53.398 23:33:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:12:53.398 23:33:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:12:53.398 23:33:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:53.398 23:33:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:53.398 23:33:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:12:53.398 23:33:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:53.398 23:33:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:53.398 23:33:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:53.398 23:33:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:53.398 23:33:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:53.659 23:33:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:53.659 23:33:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:53.659 23:33:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:53.659 23:33:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:53.659 23:33:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:12:53.659 23:33:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:53.659 23:33:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:12:53.659 23:33:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:12:53.659 23:33:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:12:53.659 23:33:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:12:53.659 23:33:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:12:53.659 23:33:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:12:53.659 23:33:41 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:12:53.659 23:33:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:53.659 23:33:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:12:53.659 23:33:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:12:53.659 23:33:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:53.659 23:33:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:12:53.659 23:33:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:12:53.659 23:33:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:53.659 23:33:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:12:53.659 23:33:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:53.659 23:33:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:53.659 23:33:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:53.659 23:33:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:12:53.659 23:33:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:53.659 23:33:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:53.659 23:33:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:12:53.659 /dev/nbd0 00:12:53.920 23:33:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:53.920 23:33:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:53.920 23:33:41 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:12:53.920 23:33:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:53.920 23:33:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:53.920 23:33:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:53.920 23:33:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:12:53.920 23:33:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:53.920 23:33:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:53.920 23:33:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:53.920 23:33:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:53.920 1+0 records in 00:12:53.920 1+0 records out 00:12:53.920 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00120846 s, 3.4 MB/s 00:12:53.920 23:33:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:53.920 23:33:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:53.920 23:33:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:53.920 23:33:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:53.920 23:33:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:53.920 23:33:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:53.920 23:33:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:53.920 23:33:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:12:53.920 /dev/nbd1 00:12:53.920 23:33:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:53.920 23:33:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:53.920 23:33:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:12:53.920 23:33:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:53.920 23:33:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:53.920 23:33:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:53.920 23:33:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:12:53.920 23:33:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:53.920 23:33:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:53.920 23:33:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:53.920 23:33:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:53.920 1+0 records in 00:12:53.920 1+0 records out 00:12:53.920 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00157219 s, 2.6 MB/s 00:12:53.920 23:33:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:53.920 23:33:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:53.920 23:33:42 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:53.920 23:33:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:53.920 23:33:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:53.920 23:33:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:53.920 23:33:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:53.920 23:33:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd10 00:12:54.182 /dev/nbd10 00:12:54.182 23:33:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:12:54.182 23:33:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:12:54.182 23:33:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:12:54.182 23:33:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:54.182 23:33:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:54.182 23:33:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:54.182 23:33:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:12:54.182 23:33:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:54.182 23:33:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:54.182 23:33:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:54.182 23:33:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:54.182 1+0 records in 00:12:54.182 1+0 records out 00:12:54.182 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00106955 s, 3.8 MB/s 00:12:54.182 23:33:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:54.182 23:33:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:54.182 23:33:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:54.182 23:33:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:54.182 23:33:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:54.182 23:33:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:54.182 23:33:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:54.182 23:33:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 /dev/nbd11 00:12:54.444 /dev/nbd11 00:12:54.444 23:33:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:12:54.444 23:33:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:12:54.444 23:33:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:12:54.444 23:33:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:54.444 23:33:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:54.444 23:33:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:54.444 23:33:42 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:12:54.444 23:33:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:54.444 23:33:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:54.444 23:33:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:54.444 23:33:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:54.444 1+0 records in 00:12:54.444 1+0 records out 00:12:54.444 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00390964 s, 1.0 MB/s 00:12:54.444 23:33:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:54.444 23:33:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:54.444 23:33:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:54.444 23:33:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:54.444 23:33:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:54.444 23:33:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:54.444 23:33:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:54.444 23:33:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 /dev/nbd12 00:12:54.705 /dev/nbd12 00:12:54.705 23:33:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:12:54.705 23:33:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:12:54.705 23:33:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:12:54.705 23:33:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:54.705 23:33:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:54.705 23:33:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:54.705 23:33:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 00:12:54.705 23:33:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:54.705 23:33:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:54.705 23:33:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:54.705 23:33:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:54.705 1+0 records in 00:12:54.705 1+0 records out 00:12:54.705 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000774843 s, 5.3 MB/s 00:12:54.705 23:33:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:54.705 23:33:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:54.705 23:33:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:54.705 23:33:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:54.705 23:33:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:54.705 23:33:42 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:54.705 23:33:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:54.705 23:33:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:12:54.966 /dev/nbd13 00:12:54.967 23:33:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:12:54.967 23:33:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:12:54.967 23:33:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:12:54.967 23:33:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:54.967 23:33:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:54.967 23:33:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:54.967 23:33:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:12:54.967 23:33:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:54.967 23:33:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:54.967 23:33:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:54.967 23:33:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:54.967 1+0 records in 00:12:54.967 1+0 records out 00:12:54.967 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00118272 s, 3.5 MB/s 00:12:54.967 23:33:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:54.967 23:33:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:54.967 23:33:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:54.967 23:33:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:54.967 23:33:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:54.967 23:33:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:54.967 23:33:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:54.967 23:33:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:54.967 23:33:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:54.967 23:33:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:55.228 23:33:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:12:55.228 { 00:12:55.228 "nbd_device": "/dev/nbd0", 00:12:55.228 "bdev_name": "nvme0n1" 00:12:55.228 }, 00:12:55.228 { 00:12:55.228 "nbd_device": "/dev/nbd1", 00:12:55.228 "bdev_name": "nvme1n1" 00:12:55.228 }, 00:12:55.228 { 00:12:55.228 "nbd_device": "/dev/nbd10", 00:12:55.228 "bdev_name": "nvme2n1" 00:12:55.228 }, 00:12:55.228 { 00:12:55.228 "nbd_device": "/dev/nbd11", 00:12:55.228 "bdev_name": "nvme2n2" 00:12:55.228 }, 00:12:55.228 { 00:12:55.228 "nbd_device": "/dev/nbd12", 00:12:55.228 "bdev_name": "nvme2n3" 00:12:55.228 }, 00:12:55.228 { 00:12:55.228 "nbd_device": "/dev/nbd13", 00:12:55.228 "bdev_name": "nvme3n1" 00:12:55.228 } 00:12:55.228 ]' 00:12:55.228 23:33:43 blockdev_xnvme.bdev_nbd 
-- bdev/nbd_common.sh@64 -- # echo '[ 00:12:55.228 { 00:12:55.228 "nbd_device": "/dev/nbd0", 00:12:55.228 "bdev_name": "nvme0n1" 00:12:55.228 }, 00:12:55.228 { 00:12:55.228 "nbd_device": "/dev/nbd1", 00:12:55.228 "bdev_name": "nvme1n1" 00:12:55.228 }, 00:12:55.228 { 00:12:55.228 "nbd_device": "/dev/nbd10", 00:12:55.228 "bdev_name": "nvme2n1" 00:12:55.228 }, 00:12:55.228 { 00:12:55.228 "nbd_device": "/dev/nbd11", 00:12:55.228 "bdev_name": "nvme2n2" 00:12:55.228 }, 00:12:55.228 { 00:12:55.228 "nbd_device": "/dev/nbd12", 00:12:55.228 "bdev_name": "nvme2n3" 00:12:55.228 }, 00:12:55.228 { 00:12:55.228 "nbd_device": "/dev/nbd13", 00:12:55.228 "bdev_name": "nvme3n1" 00:12:55.228 } 00:12:55.228 ]' 00:12:55.228 23:33:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:55.228 23:33:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:12:55.228 /dev/nbd1 00:12:55.228 /dev/nbd10 00:12:55.228 /dev/nbd11 00:12:55.228 /dev/nbd12 00:12:55.228 /dev/nbd13' 00:12:55.228 23:33:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:55.228 23:33:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:12:55.228 /dev/nbd1 00:12:55.228 /dev/nbd10 00:12:55.228 /dev/nbd11 00:12:55.228 /dev/nbd12 00:12:55.228 /dev/nbd13' 00:12:55.228 23:33:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:12:55.228 23:33:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:12:55.228 23:33:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:12:55.228 23:33:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:12:55.228 23:33:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:12:55.228 23:33:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:55.228 23:33:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:55.228 23:33:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:12:55.228 23:33:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:55.228 23:33:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:12:55.228 23:33:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:12:55.228 256+0 records in 00:12:55.228 256+0 records out 00:12:55.228 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00423081 s, 248 MB/s 00:12:55.228 23:33:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:55.228 23:33:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:12:55.490 256+0 records in 00:12:55.490 256+0 records out 00:12:55.490 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.2427 s, 4.3 MB/s 00:12:55.490 23:33:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:55.490 23:33:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:12:55.752 256+0 records in 00:12:55.752 256+0 records out 00:12:55.752 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.305454 s, 
3.4 MB/s 00:12:55.752 23:33:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:55.752 23:33:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:12:56.012 256+0 records in 00:12:56.012 256+0 records out 00:12:56.012 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.245006 s, 4.3 MB/s 00:12:56.012 23:33:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:56.012 23:33:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:12:56.274 256+0 records in 00:12:56.274 256+0 records out 00:12:56.274 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.242303 s, 4.3 MB/s 00:12:56.274 23:33:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:56.274 23:33:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:12:56.533 256+0 records in 00:12:56.533 256+0 records out 00:12:56.533 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.247381 s, 4.2 MB/s 00:12:56.533 23:33:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:56.534 23:33:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:12:56.795 256+0 records in 00:12:56.795 256+0 records out 00:12:56.795 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.258102 s, 4.1 MB/s 00:12:56.795 23:33:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:12:56.795 23:33:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:56.795 23:33:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:56.795 23:33:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:12:56.795 23:33:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:56.795 23:33:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:12:56.795 23:33:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:12:56.795 23:33:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:56.795 23:33:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:12:56.795 23:33:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:56.795 23:33:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:12:56.795 23:33:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:56.795 23:33:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:12:56.795 23:33:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:56.795 23:33:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:12:56.795 
23:33:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:56.795 23:33:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:12:56.795 23:33:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:56.795 23:33:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:12:56.795 23:33:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:56.795 23:33:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:12:56.795 23:33:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:56.795 23:33:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:56.795 23:33:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:56.795 23:33:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:12:56.795 23:33:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:56.795 23:33:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:57.057 23:33:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:57.057 23:33:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:57.057 23:33:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:57.057 23:33:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:57.057 23:33:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:57.057 23:33:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:57.057 23:33:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:57.057 23:33:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:57.057 23:33:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:57.057 23:33:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:57.318 23:33:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:57.318 23:33:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:57.318 23:33:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:57.318 23:33:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:57.318 23:33:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:57.318 23:33:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:57.318 23:33:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:57.318 23:33:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:57.318 23:33:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:57.318 23:33:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk 
/dev/nbd10 00:12:57.579 23:33:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:12:57.579 23:33:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:12:57.579 23:33:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:12:57.579 23:33:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:57.579 23:33:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:57.579 23:33:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:12:57.579 23:33:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:57.579 23:33:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:57.579 23:33:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:57.579 23:33:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:12:57.841 23:33:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:12:57.841 23:33:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:12:57.841 23:33:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:12:57.841 23:33:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:57.841 23:33:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:57.841 23:33:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:12:57.841 23:33:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:57.841 23:33:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:57.841 23:33:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:57.841 23:33:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:12:58.102 23:33:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:12:58.102 23:33:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:12:58.102 23:33:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:12:58.102 23:33:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:58.102 23:33:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:58.102 23:33:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:12:58.102 23:33:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:58.102 23:33:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:58.102 23:33:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:58.102 23:33:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:12:58.363 23:33:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:12:58.363 23:33:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:12:58.363 23:33:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:12:58.363 23:33:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:58.363 23:33:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:58.363 
23:33:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:12:58.363 23:33:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:58.363 23:33:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:58.363 23:33:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:58.363 23:33:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:58.363 23:33:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:58.363 23:33:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:58.363 23:33:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:58.363 23:33:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:58.625 23:33:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:58.625 23:33:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:12:58.625 23:33:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:58.625 23:33:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:12:58.625 23:33:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:12:58.625 23:33:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:12:58.625 23:33:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:12:58.625 23:33:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:12:58.625 23:33:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:12:58.625 23:33:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:12:58.625 23:33:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:58.625 23:33:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:12:58.625 23:33:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:12:58.625 malloc_lvol_verify 00:12:58.886 23:33:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:12:58.886 574397ea-13fe-44a4-a9f9-dec00a86ab78 00:12:58.886 23:33:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:12:59.147 04a7cf5b-9c06-466e-ab0d-b20c1d373cfe 00:12:59.147 23:33:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:12:59.409 /dev/nbd0 00:12:59.409 23:33:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:12:59.409 23:33:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:12:59.409 23:33:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:12:59.409 23:33:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:12:59.409 23:33:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:12:59.409 mke2fs 1.47.0 (5-Feb-2023) 00:12:59.409 Discarding device blocks: 0/4096 
done 00:12:59.409 Creating filesystem with 4096 1k blocks and 1024 inodes 00:12:59.409 00:12:59.409 Allocating group tables: 0/1 done 00:12:59.409 Writing inode tables: 0/1 done 00:12:59.409 Creating journal (1024 blocks): done 00:12:59.409 Writing superblocks and filesystem accounting information: 0/1 done 00:12:59.409 00:12:59.409 23:33:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:12:59.409 23:33:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:59.409 23:33:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:59.409 23:33:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:59.409 23:33:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:12:59.409 23:33:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:59.410 23:33:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:59.672 23:33:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:59.672 23:33:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:59.672 23:33:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:59.672 23:33:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:59.672 23:33:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:59.672 23:33:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:59.672 23:33:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:59.672 23:33:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:59.672 23:33:47 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 70108 00:12:59.672 23:33:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 70108 ']' 00:12:59.672 23:33:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 70108 00:12:59.672 23:33:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:12:59.672 23:33:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:59.672 23:33:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70108 00:12:59.672 23:33:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:59.672 killing process with pid 70108 00:12:59.672 23:33:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:59.672 23:33:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70108' 00:12:59.672 23:33:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@969 -- # kill 70108 00:12:59.672 23:33:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@974 -- # wait 70108 00:13:01.058 23:33:48 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:13:01.058 00:13:01.058 real 0m11.406s 00:13:01.058 user 0m15.175s 00:13:01.058 sys 0m3.751s 00:13:01.058 ************************************ 00:13:01.058 END TEST bdev_nbd 00:13:01.058 ************************************ 00:13:01.058 23:33:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:01.058 23:33:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 
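The bdev_nbd pass above reduces to a small RPC-driven cycle: export each bdev as a /dev/nbdX device, write one shared 1 MiB random pattern to every device, read it back with cmp, then detach. A condensed sketch, with the rpc.py path, socket, and device names copied from the log (the loops are illustrative, not the literal nbd_common.sh source):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    pattern=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
    nbds='/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13'

    dd if=/dev/urandom of="$pattern" bs=4096 count=256              # one 1 MiB pattern
    for nbd in $nbds; do
        dd if="$pattern" of="$nbd" bs=4096 count=256 oflag=direct   # write it out
    done
    for nbd in $nbds; do
        cmp -b -n 1M "$pattern" "$nbd"                              # byte-compare read-back
    done
    for nbd in $nbds; do
        "$rpc" -s "$sock" nbd_stop_disk "$nbd"                      # detach the nbd device
    done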
00:13:01.058 23:33:48 blockdev_xnvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:13:01.058 23:33:48 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = nvme ']' 00:13:01.058 23:33:48 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = gpt ']' 00:13:01.058 23:33:48 blockdev_xnvme -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:13:01.058 23:33:48 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:01.058 23:33:48 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:01.058 23:33:48 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:01.058 ************************************ 00:13:01.058 START TEST bdev_fio 00:13:01.058 ************************************ 00:13:01.058 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:13:01.058 23:33:48 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1125 -- # fio_test_suite '' 00:13:01.058 23:33:48 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:13:01.058 23:33:48 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:13:01.058 23:33:48 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:13:01.058 23:33:48 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:13:01.058 23:33:48 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:13:01.058 23:33:48 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:13:01.058 23:33:48 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:13:01.058 23:33:48 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:01.058 23:33:48 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:13:01.058 23:33:48 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:13:01.058 23:33:48 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:13:01.058 23:33:48 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:13:01.058 23:33:48 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:13:01.058 23:33:48 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:13:01.058 23:33:48 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:13:01.058 23:33:48 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:01.058 23:33:48 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:13:01.058 23:33:48 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:13:01.058 23:33:48 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:13:01.058 23:33:48 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:13:01.058 23:33:48 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:13:01.058 23:33:48 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:13:01.058 23:33:48 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:13:01.058 23:33:48 blockdev_xnvme.bdev_fio -- 
bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:01.058 23:33:48 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:13:01.058 23:33:48 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:13:01.058 23:33:48 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:01.058 23:33:48 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:13:01.058 23:33:48 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:13:01.058 23:33:48 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:01.058 23:33:48 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:13:01.058 23:33:48 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:13:01.058 23:33:48 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:01.058 23:33:48 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n2]' 00:13:01.058 23:33:48 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n2 00:13:01.058 23:33:48 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:01.058 23:33:48 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n3]' 00:13:01.058 23:33:48 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n3 00:13:01.058 23:33:48 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:01.058 23:33:48 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:13:01.058 23:33:48 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:13:01.058 23:33:48 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:13:01.058 23:33:48 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:01.058 23:33:48 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:13:01.058 23:33:48 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:01.058 23:33:48 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:13:01.058 ************************************ 00:13:01.058 START TEST bdev_fio_rw_verify 00:13:01.058 ************************************ 00:13:01.058 23:33:48 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:01.058 23:33:48 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 
--aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:01.058 23:33:48 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:13:01.058 23:33:48 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:01.058 23:33:48 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:13:01.058 23:33:48 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:01.058 23:33:48 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:13:01.058 23:33:48 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:13:01.058 23:33:48 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:13:01.058 23:33:48 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:01.058 23:33:48 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:13:01.058 23:33:48 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:13:01.058 23:33:48 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:01.058 23:33:48 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:01.058 23:33:48 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # break 00:13:01.058 23:33:48 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:01.058 23:33:48 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:01.058 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:01.058 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:01.058 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:01.058 job_nvme2n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:01.058 job_nvme2n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:01.058 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:01.058 fio-3.35 00:13:01.058 Starting 6 threads 00:13:13.297 00:13:13.297 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=70525: Sat Sep 28 23:33:59 2024 00:13:13.297 read: IOPS=11.7k, BW=45.9MiB/s (48.1MB/s)(459MiB/10002msec) 00:13:13.297 slat (usec): min=2, max=3153, avg= 7.35, stdev=17.29 00:13:13.297 clat (usec): min=109, max=9540, avg=1718.33, stdev=830.01 00:13:13.297 lat (usec): min=112, max=9559, avg=1725.67, stdev=830.59 
00:13:13.297 clat percentiles (usec): 00:13:13.297 | 50.000th=[ 1614], 99.000th=[ 4293], 99.900th=[ 5997], 99.990th=[ 7635], 00:13:13.297 | 99.999th=[ 9503] 00:13:13.297 write: IOPS=12.1k, BW=47.3MiB/s (49.6MB/s)(473MiB/10002msec); 0 zone resets 00:13:13.297 slat (usec): min=13, max=4549, avg=44.58, stdev=159.01 00:13:13.297 clat (usec): min=121, max=8625, avg=1950.73, stdev=893.57 00:13:13.297 lat (usec): min=138, max=8664, avg=1995.31, stdev=906.81 00:13:13.297 clat percentiles (usec): 00:13:13.297 | 50.000th=[ 1827], 99.000th=[ 4752], 99.900th=[ 6325], 99.990th=[ 7898], 00:13:13.297 | 99.999th=[ 8586] 00:13:13.297 bw ( KiB/s): min=40450, max=60115, per=100.00%, avg=48563.58, stdev=693.92, samples=114 00:13:13.297 iops : min=10111, max=15028, avg=12140.00, stdev=173.48, samples=114 00:13:13.297 lat (usec) : 250=0.51%, 500=2.49%, 750=4.40%, 1000=7.09% 00:13:13.297 lat (msec) : 2=49.59%, 4=33.70%, 10=2.22% 00:13:13.297 cpu : usr=47.09%, sys=30.69%, ctx=4795, majf=0, minf=12872 00:13:13.297 IO depths : 1=11.4%, 2=23.8%, 4=51.2%, 8=13.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:13.297 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:13.297 complete : 0=0.0%, 4=89.2%, 8=10.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:13.297 issued rwts: total=117471,121110,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:13.297 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:13.297 00:13:13.297 Run status group 0 (all jobs): 00:13:13.297 READ: bw=45.9MiB/s (48.1MB/s), 45.9MiB/s-45.9MiB/s (48.1MB/s-48.1MB/s), io=459MiB (481MB), run=10002-10002msec 00:13:13.297 WRITE: bw=47.3MiB/s (49.6MB/s), 47.3MiB/s-47.3MiB/s (49.6MB/s-49.6MB/s), io=473MiB (496MB), run=10002-10002msec 00:13:13.297 ----------------------------------------------------- 00:13:13.297 Suppressions used: 00:13:13.297 count bytes template 00:13:13.297 6 48 /usr/src/fio/parse.c 00:13:13.297 3562 341952 /usr/src/fio/iolog.c 00:13:13.297 1 8 libtcmalloc_minimal.so 00:13:13.297 1 904 libcrypto.so 00:13:13.297 ----------------------------------------------------- 00:13:13.297 00:13:13.297 00:13:13.297 real 0m12.056s 00:13:13.297 user 0m29.832s 00:13:13.297 sys 0m18.768s 00:13:13.297 23:34:01 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:13.297 ************************************ 00:13:13.297 END TEST bdev_fio_rw_verify 00:13:13.297 ************************************ 00:13:13.297 23:34:01 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:13:13.297 23:34:01 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:13:13.297 23:34:01 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:13.297 23:34:01 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:13:13.297 23:34:01 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:13.297 23:34:01 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:13:13.297 23:34:01 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:13:13.297 23:34:01 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:13:13.297 23:34:01 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:13:13.297 23:34:01 blockdev_xnvme.bdev_fio -- 
common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:13:13.297 23:34:01 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:13:13.297 23:34:01 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:13:13.297 23:34:01 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:13.297 23:34:01 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:13:13.297 23:34:01 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:13:13.297 23:34:01 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:13:13.297 23:34:01 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:13:13.298 23:34:01 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "4cb9a717-3e2b-48a9-bbd7-0429c2625ce0"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "4cb9a717-3e2b-48a9-bbd7-0429c2625ce0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "3f9479bb-4da9-402f-8150-9dd81aa3a603"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "3f9479bb-4da9-402f-8150-9dd81aa3a603",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "df61896d-372c-44a7-97b2-72e19b748aab"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "df61896d-372c-44a7-97b2-72e19b748aab",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' 
"seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "e26de1f5-5c81-45c1-8fa2-3699384b9d77"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "e26de1f5-5c81-45c1-8fa2-3699384b9d77",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "97fae3ae-ca69-4584-8b44-8b24da97528b"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "97fae3ae-ca69-4584-8b44-8b24da97528b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "5c62c549-f95c-49a5-bc76-119d64a0645e"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "5c62c549-f95c-49a5-bc76-119d64a0645e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:13:13.298 23:34:01 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:13:13.298 23:34:01 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:13:13.298 23:34:01 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:13.298 /home/vagrant/spdk_repo/spdk 00:13:13.298 23:34:01 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:13:13.298 23:34:01 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:13:13.298 23:34:01 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:13:13.298 ************************************ 00:13:13.298 END TEST 
bdev_fio 00:13:13.298 ************************************ 00:13:13.298 00:13:13.298 real 0m12.236s 00:13:13.298 user 0m29.903s 00:13:13.298 sys 0m18.853s 00:13:13.298 23:34:01 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:13.298 23:34:01 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:13:13.298 23:34:01 blockdev_xnvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:13.298 23:34:01 blockdev_xnvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:13.298 23:34:01 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:13:13.298 23:34:01 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:13.298 23:34:01 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:13.298 ************************************ 00:13:13.298 START TEST bdev_verify 00:13:13.298 ************************************ 00:13:13.298 23:34:01 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:13.298 [2024-09-28 23:34:01.246550] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:13:13.298 [2024-09-28 23:34:01.247243] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70696 ] 00:13:13.298 [2024-09-28 23:34:01.401970] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:13.559 [2024-09-28 23:34:01.622895] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:13:13.559 [2024-09-28 23:34:01.622988] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:14.129 Running I/O for 5 seconds... 
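The bdev_verify pass drives all six xnvme bdevs through SPDK's bdevperf example with a data-verification workload. The invocation is copied from the run_test line above; the flag gloss is our reading of the standard bdevperf options rather than anything stated in the log:

    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    "$bdevperf" --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3
    # -q 128    queue depth per job
    # -o 4096   4 KiB I/Os
    # -w verify write-then-read-back verification workload
    # -t 5      run time in seconds
    # -m 0x3    core mask: two reactors (cores 0 and 1)
    # -C        submit I/O to each bdev from every core, which is why every
    #           bdev reports one job per core mask (0x1 and 0x2) below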
00:13:19.065 23552.00 IOPS, 92.00 MiB/s 22864.00 IOPS, 89.31 MiB/s 22794.67 IOPS, 89.04 MiB/s 22600.00 IOPS, 88.28 MiB/s 22630.40 IOPS, 88.40 MiB/s 00:13:19.065 Latency(us) 00:13:19.065 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:19.065 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:19.065 Verification LBA range: start 0x0 length 0xa0000 00:13:19.065 nvme0n1 : 5.04 1827.31 7.14 0.00 0.00 69910.06 9880.81 75416.81 00:13:19.065 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:19.065 Verification LBA range: start 0xa0000 length 0xa0000 00:13:19.065 nvme0n1 : 5.04 1750.86 6.84 0.00 0.00 72972.86 8872.57 72593.72 00:13:19.066 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:19.066 Verification LBA range: start 0x0 length 0xbd0bd 00:13:19.066 nvme1n1 : 5.07 2299.27 8.98 0.00 0.00 55209.43 6956.90 56461.78 00:13:19.066 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:19.066 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:13:19.066 nvme1n1 : 5.07 2145.94 8.38 0.00 0.00 59296.22 5293.29 60494.77 00:13:19.066 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:19.066 Verification LBA range: start 0x0 length 0x80000 00:13:19.066 nvme2n1 : 5.05 1850.86 7.23 0.00 0.00 68633.90 8418.86 79853.10 00:13:19.066 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:19.066 Verification LBA range: start 0x80000 length 0x80000 00:13:19.066 nvme2n1 : 5.05 1800.23 7.03 0.00 0.00 70749.22 6553.60 72190.42 00:13:19.066 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:19.066 Verification LBA range: start 0x0 length 0x80000 00:13:19.066 nvme2n2 : 5.08 1840.36 7.19 0.00 0.00 68870.67 6225.92 81466.29 00:13:19.066 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:19.066 Verification LBA range: start 0x80000 length 0x80000 00:13:19.066 nvme2n2 : 5.05 1773.47 6.93 0.00 0.00 71652.72 6553.60 72190.42 00:13:19.066 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:19.066 Verification LBA range: start 0x0 length 0x80000 00:13:19.066 nvme2n3 : 5.08 1839.20 7.18 0.00 0.00 68786.61 6704.84 72593.72 00:13:19.066 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:19.066 Verification LBA range: start 0x80000 length 0x80000 00:13:19.066 nvme2n3 : 5.07 1768.08 6.91 0.00 0.00 71727.04 12300.60 64931.05 00:13:19.066 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:19.066 Verification LBA range: start 0x0 length 0x20000 00:13:19.066 nvme3n1 : 5.07 1816.90 7.10 0.00 0.00 69561.93 6604.01 68964.04 00:13:19.066 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:19.066 Verification LBA range: start 0x20000 length 0x20000 00:13:19.066 nvme3n1 : 5.07 1765.79 6.90 0.00 0.00 71689.22 4637.93 72190.42 00:13:19.066 =================================================================================================================== 00:13:19.066 Total : 22478.27 87.81 0.00 0.00 67804.98 4637.93 81466.29 00:13:20.011 00:13:20.011 real 0m6.908s 00:13:20.011 user 0m11.181s 00:13:20.011 sys 0m1.349s 00:13:20.011 23:34:08 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:20.011 ************************************ 00:13:20.011 END TEST bdev_verify 00:13:20.011 ************************************ 00:13:20.011 23:34:08 
blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:13:20.011 23:34:08 blockdev_xnvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:20.012 23:34:08 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:13:20.012 23:34:08 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:20.012 23:34:08 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:20.012 ************************************ 00:13:20.012 START TEST bdev_verify_big_io 00:13:20.012 ************************************ 00:13:20.012 23:34:08 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:20.273 [2024-09-28 23:34:08.221006] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:13:20.273 [2024-09-28 23:34:08.221147] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70797 ] 00:13:20.273 [2024-09-28 23:34:08.374539] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:20.535 [2024-09-28 23:34:08.600951] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:13:20.535 [2024-09-28 23:34:08.601042] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:21.107 Running I/O for 5 seconds... 00:13:27.263 1168.00 IOPS, 73.00 MiB/s 2410.50 IOPS, 150.66 MiB/s 2933.67 IOPS, 183.35 MiB/s 00:13:27.263 Latency(us) 00:13:27.263 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:27.263 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:27.263 Verification LBA range: start 0x0 length 0xa000 00:13:27.263 nvme0n1 : 5.98 112.33 7.02 0.00 0.00 1087525.25 7057.72 1626099.40 00:13:27.263 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:27.263 Verification LBA range: start 0xa000 length 0xa000 00:13:27.263 nvme0n1 : 5.83 109.87 6.87 0.00 0.00 1102480.62 258111.02 1064707.94 00:13:27.263 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:27.263 Verification LBA range: start 0x0 length 0xbd0b 00:13:27.263 nvme1n1 : 6.01 98.55 6.16 0.00 0.00 1224854.90 120182.94 2671449.01 00:13:27.263 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:27.263 Verification LBA range: start 0xbd0b length 0xbd0b 00:13:27.263 nvme1n1 : 5.83 107.00 6.69 0.00 0.00 1121634.58 11342.77 2477865.75 00:13:27.263 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:27.263 Verification LBA range: start 0x0 length 0x8000 00:13:27.263 nvme2n1 : 5.98 149.94 9.37 0.00 0.00 776773.54 140347.86 890483.00 00:13:27.263 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:27.263 Verification LBA range: start 0x8000 length 0x8000 00:13:27.263 nvme2n1 : 5.99 117.47 7.34 0.00 0.00 969642.07 85902.57 1167952.34 00:13:27.263 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:27.263 Verification LBA range: start 0x0 length 0x8000 00:13:27.263 nvme2n2 : 5.99 104.19 6.51 0.00 0.00 
1089648.46 88322.36 2697260.11 00:13:27.263 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:27.263 Verification LBA range: start 0x8000 length 0x8000 00:13:27.263 nvme2n2 : 5.98 155.13 9.70 0.00 0.00 716201.14 149220.43 871124.68 00:13:27.263 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:27.263 Verification LBA range: start 0x0 length 0x8000 00:13:27.263 nvme2n3 : 5.98 104.39 6.52 0.00 0.00 1045277.23 108083.99 2181038.08 00:13:27.263 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:27.263 Verification LBA range: start 0x8000 length 0x8000 00:13:27.263 nvme2n3 : 6.00 125.33 7.83 0.00 0.00 882073.97 12149.37 2748882.31 00:13:27.263 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:27.263 Verification LBA range: start 0x0 length 0x2000 00:13:27.263 nvme3n1 : 5.99 181.54 11.35 0.00 0.00 588854.51 8166.79 706578.90 00:13:27.263 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:27.263 Verification LBA range: start 0x2000 length 0x2000 00:13:27.263 nvme3n1 : 6.00 194.60 12.16 0.00 0.00 548947.51 3856.54 774333.05 00:13:27.263 =================================================================================================================== 00:13:27.263 Total : 1560.33 97.52 0.00 0.00 878654.22 3856.54 2748882.31 00:13:28.207 00:13:28.207 real 0m8.066s 00:13:28.207 user 0m14.569s 00:13:28.207 sys 0m0.475s 00:13:28.207 ************************************ 00:13:28.207 END TEST bdev_verify_big_io 00:13:28.207 ************************************ 00:13:28.207 23:34:16 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:28.207 23:34:16 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:13:28.207 23:34:16 blockdev_xnvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:28.207 23:34:16 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:13:28.207 23:34:16 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:28.207 23:34:16 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:28.207 ************************************ 00:13:28.207 START TEST bdev_write_zeroes 00:13:28.207 ************************************ 00:13:28.207 23:34:16 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:28.207 [2024-09-28 23:34:16.346989] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:13:28.207 [2024-09-28 23:34:16.347120] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70908 ] 00:13:28.468 [2024-09-28 23:34:16.500134] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:28.729 [2024-09-28 23:34:16.721224] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:28.991 Running I/O for 1 seconds... 
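bdev_write_zeroes reuses the same bdevperf harness; next to the verify runs, only the workload, duration, and core count change (and the preceding bdev_verify_big_io pass differed from plain verify only in -o 65536, i.e. 64 KiB I/Os). Arguments copied from the run_test line above, comments are our gloss:

    "$bdevperf" --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w write_zeroes -t 1
    # -w write_zeroes  issue write-zeroes I/O instead of verify I/O
    # -t 1             a one-second smoke run
    # no -C/-m: single core, matching the Core Mask 0x1 jobs below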
00:13:30.379 84416.00 IOPS, 329.75 MiB/s 00:13:30.379 Latency(us) 00:13:30.379 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:30.379 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:30.379 nvme0n1 : 1.03 13462.50 52.59 0.00 0.00 9498.07 5772.21 22887.19 00:13:30.379 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:30.379 nvme1n1 : 1.03 16295.49 63.65 0.00 0.00 7839.11 3856.54 26214.40 00:13:30.379 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:30.379 nvme2n1 : 1.03 13445.42 52.52 0.00 0.00 9492.97 5999.06 25306.98 00:13:30.379 Job: nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:30.379 nvme2n2 : 1.03 13430.32 52.46 0.00 0.00 9445.72 5419.32 27021.00 00:13:30.379 Job: nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:30.379 nvme2n3 : 1.03 13415.08 52.40 0.00 0.00 9450.63 5494.94 28835.84 00:13:30.379 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:30.379 nvme3n1 : 1.03 13399.95 52.34 0.00 0.00 9452.84 5646.18 30449.03 00:13:30.379 =================================================================================================================== 00:13:30.380 Total : 83448.75 325.97 0.00 0.00 9151.01 3856.54 30449.03 00:13:30.954 00:13:30.954 real 0m2.808s 00:13:30.954 user 0m2.107s 00:13:30.954 sys 0m0.520s 00:13:30.954 23:34:19 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:30.954 23:34:19 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:13:30.954 ************************************ 00:13:30.954 END TEST bdev_write_zeroes 00:13:30.954 ************************************ 00:13:31.215 23:34:19 blockdev_xnvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:31.215 23:34:19 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:13:31.215 23:34:19 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:31.215 23:34:19 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:31.215 ************************************ 00:13:31.215 START TEST bdev_json_nonenclosed 00:13:31.215 ************************************ 00:13:31.215 23:34:19 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:31.215 [2024-09-28 23:34:19.221841] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:13:31.215 [2024-09-28 23:34:19.222080] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70963 ] 00:13:31.215 [2024-09-28 23:34:19.367388] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:31.476 [2024-09-28 23:34:19.594908] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:31.476 [2024-09-28 23:34:19.595268] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:13:31.476 [2024-09-28 23:34:19.595411] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:13:31.476 [2024-09-28 23:34:19.595438] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:32.049 00:13:32.049 real 0m0.759s 00:13:32.049 user 0m0.543s 00:13:32.049 sys 0m0.108s 00:13:32.049 ************************************ 00:13:32.049 END TEST bdev_json_nonenclosed 00:13:32.049 ************************************ 00:13:32.049 23:34:19 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:32.049 23:34:19 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:13:32.049 23:34:19 blockdev_xnvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:32.049 23:34:19 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:13:32.049 23:34:19 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:32.049 23:34:19 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:32.049 ************************************ 00:13:32.049 START TEST bdev_json_nonarray 00:13:32.049 ************************************ 00:13:32.049 23:34:19 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:32.049 [2024-09-28 23:34:20.049540] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:13:32.049 [2024-09-28 23:34:20.049674] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70994 ] 00:13:32.049 [2024-09-28 23:34:20.201904] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:32.310 [2024-09-28 23:34:20.428216] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:32.310 [2024-09-28 23:34:20.428647] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
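Both errors above are the point of these tests: nonenclosed.json and nonarray.json are deliberately malformed configs, and each test passes only when bdevperf rejects the file and spdk_app_stop exits non-zero. A hedged sketch of the shapes involved; the valid form matches what save_config emits later in this log, while the two bad bodies are illustrative reconstructions from the error text, not the actual file contents:

# valid: a top-level object holding a "subsystems" array
#   { "subsystems": [ { "subsystem": "bdev", "config": [] } ] }
# nonenclosed.json: subsystems content without the outer {}
#   "subsystems": []        -> "Invalid JSON configuration: not enclosed in {}."
# nonarray.json: "subsystems" present but not an array
#   { "subsystems": {} }    -> "Invalid JSON configuration: 'subsystems' should be an array."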
00:13:32.310 [2024-09-28 23:34:20.428680] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:13:32.310 [2024-09-28 23:34:20.428692] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:32.880 00:13:32.880 real 0m0.760s 00:13:32.880 user 0m0.538s 00:13:32.880 sys 0m0.113s 00:13:32.880 23:34:20 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:32.880 ************************************ 00:13:32.881 END TEST bdev_json_nonarray 00:13:32.881 ************************************ 00:13:32.881 23:34:20 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:13:32.881 23:34:20 blockdev_xnvme -- bdev/blockdev.sh@786 -- # [[ xnvme == bdev ]] 00:13:32.881 23:34:20 blockdev_xnvme -- bdev/blockdev.sh@793 -- # [[ xnvme == gpt ]] 00:13:32.881 23:34:20 blockdev_xnvme -- bdev/blockdev.sh@797 -- # [[ xnvme == crypto_sw ]] 00:13:32.881 23:34:20 blockdev_xnvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:13:32.881 23:34:20 blockdev_xnvme -- bdev/blockdev.sh@810 -- # cleanup 00:13:32.881 23:34:20 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:13:32.881 23:34:20 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:32.881 23:34:20 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:13:32.881 23:34:20 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:13:32.881 23:34:20 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:13:32.881 23:34:20 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:13:32.881 23:34:20 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:33.141 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:35.687 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:13:35.687 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:13:35.687 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:13:35.947 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:13:35.947 00:13:35.947 real 0m58.894s 00:13:35.947 user 1m29.732s 00:13:35.947 sys 0m30.388s 00:13:35.947 23:34:24 blockdev_xnvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:35.947 ************************************ 00:13:35.947 23:34:24 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:35.947 END TEST blockdev_xnvme 00:13:35.947 ************************************ 00:13:35.947 23:34:24 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:13:35.947 23:34:24 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:35.947 23:34:24 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:35.947 23:34:24 -- common/autotest_common.sh@10 -- # set +x 00:13:35.947 ************************************ 00:13:35.947 START TEST ublk 00:13:35.947 ************************************ 00:13:35.947 23:34:24 ublk -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:13:36.207 * Looking for test storage... 
00:13:36.207 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:13:36.207 23:34:24 ublk -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:13:36.207 23:34:24 ublk -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:13:36.207 23:34:24 ublk -- common/autotest_common.sh@1681 -- # lcov --version 00:13:36.207 23:34:24 ublk -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:13:36.207 23:34:24 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:36.207 23:34:24 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:36.207 23:34:24 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:36.207 23:34:24 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:13:36.207 23:34:24 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:13:36.207 23:34:24 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:13:36.207 23:34:24 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:13:36.207 23:34:24 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:13:36.207 23:34:24 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:13:36.207 23:34:24 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:13:36.207 23:34:24 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:36.207 23:34:24 ublk -- scripts/common.sh@344 -- # case "$op" in 00:13:36.207 23:34:24 ublk -- scripts/common.sh@345 -- # : 1 00:13:36.207 23:34:24 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:36.207 23:34:24 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:36.207 23:34:24 ublk -- scripts/common.sh@365 -- # decimal 1 00:13:36.207 23:34:24 ublk -- scripts/common.sh@353 -- # local d=1 00:13:36.207 23:34:24 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:36.207 23:34:24 ublk -- scripts/common.sh@355 -- # echo 1 00:13:36.207 23:34:24 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:13:36.207 23:34:24 ublk -- scripts/common.sh@366 -- # decimal 2 00:13:36.207 23:34:24 ublk -- scripts/common.sh@353 -- # local d=2 00:13:36.207 23:34:24 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:36.207 23:34:24 ublk -- scripts/common.sh@355 -- # echo 2 00:13:36.207 23:34:24 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:13:36.207 23:34:24 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:36.207 23:34:24 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:36.207 23:34:24 ublk -- scripts/common.sh@368 -- # return 0 00:13:36.207 23:34:24 ublk -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:36.207 23:34:24 ublk -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:13:36.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:36.207 --rc genhtml_branch_coverage=1 00:13:36.207 --rc genhtml_function_coverage=1 00:13:36.207 --rc genhtml_legend=1 00:13:36.207 --rc geninfo_all_blocks=1 00:13:36.207 --rc geninfo_unexecuted_blocks=1 00:13:36.207 00:13:36.207 ' 00:13:36.207 23:34:24 ublk -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:13:36.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:36.207 --rc genhtml_branch_coverage=1 00:13:36.207 --rc genhtml_function_coverage=1 00:13:36.207 --rc genhtml_legend=1 00:13:36.207 --rc geninfo_all_blocks=1 00:13:36.207 --rc geninfo_unexecuted_blocks=1 00:13:36.207 00:13:36.207 ' 00:13:36.207 23:34:24 ublk -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:13:36.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:36.207 --rc genhtml_branch_coverage=1 00:13:36.207 --rc 
genhtml_function_coverage=1 00:13:36.207 --rc genhtml_legend=1 00:13:36.207 --rc geninfo_all_blocks=1 00:13:36.207 --rc geninfo_unexecuted_blocks=1 00:13:36.207 00:13:36.207 ' 00:13:36.207 23:34:24 ublk -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:13:36.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:36.207 --rc genhtml_branch_coverage=1 00:13:36.207 --rc genhtml_function_coverage=1 00:13:36.207 --rc genhtml_legend=1 00:13:36.207 --rc geninfo_all_blocks=1 00:13:36.207 --rc geninfo_unexecuted_blocks=1 00:13:36.207 00:13:36.207 ' 00:13:36.207 23:34:24 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:13:36.207 23:34:24 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:13:36.207 23:34:24 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:13:36.207 23:34:24 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:13:36.207 23:34:24 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:13:36.207 23:34:24 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:13:36.207 23:34:24 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:13:36.207 23:34:24 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:13:36.207 23:34:24 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:13:36.207 23:34:24 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:13:36.207 23:34:24 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:13:36.207 23:34:24 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:13:36.207 23:34:24 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:13:36.207 23:34:24 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:13:36.207 23:34:24 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:13:36.207 23:34:24 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:13:36.207 23:34:24 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:13:36.207 23:34:24 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:13:36.207 23:34:24 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:13:36.207 23:34:24 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:13:36.207 23:34:24 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:36.207 23:34:24 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:36.207 23:34:24 ublk -- common/autotest_common.sh@10 -- # set +x 00:13:36.207 ************************************ 00:13:36.207 START TEST test_save_ublk_config 00:13:36.207 ************************************ 00:13:36.207 23:34:24 ublk.test_save_ublk_config -- common/autotest_common.sh@1125 -- # test_save_config 00:13:36.207 23:34:24 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:13:36.207 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
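The modprobe ublk_drv step above is the prerequisite for this whole suite: the tests export SPDK bdevs to the kernel as /dev/ublkbN block devices, which requires the kernel-side ublk driver. A hedged pre-flight sketch; the /dev/ublk-control name is the driver's usual control node, an assumption rather than something shown in this trace:

# sketch: confirm the kernel ublk driver is available before running ublk.sh
sudo modprobe ublk_drv                                  # same module the script loads above
test -c /dev/ublk-control && echo 'ublk control node present'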
00:13:36.207 23:34:24 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=71284 00:13:36.207 23:34:24 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:13:36.207 23:34:24 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:13:36.207 23:34:24 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 71284 00:13:36.207 23:34:24 ublk.test_save_ublk_config -- common/autotest_common.sh@831 -- # '[' -z 71284 ']' 00:13:36.207 23:34:24 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:36.207 23:34:24 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:36.207 23:34:24 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:36.207 23:34:24 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:36.207 23:34:24 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:36.207 [2024-09-28 23:34:24.317768] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:13:36.207 [2024-09-28 23:34:24.318179] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71284 ] 00:13:36.468 [2024-09-28 23:34:24.473303] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:36.728 [2024-09-28 23:34:24.641816] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:36.990 23:34:25 ublk.test_save_ublk_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:36.990 23:34:25 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # return 0 00:13:36.990 23:34:25 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:13:36.990 23:34:25 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:13:36.990 23:34:25 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.990 23:34:25 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:36.990 [2024-09-28 23:34:25.146527] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:13:36.990 [2024-09-28 23:34:25.147151] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:37.251 malloc0 00:13:37.251 [2024-09-28 23:34:25.194609] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:13:37.251 [2024-09-28 23:34:25.194668] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:13:37.251 [2024-09-28 23:34:25.194676] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:37.251 [2024-09-28 23:34:25.194682] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:37.251 [2024-09-28 23:34:25.203580] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:37.251 [2024-09-28 23:34:25.203599] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:37.251 [2024-09-28 23:34:25.210533] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:37.251 [2024-09-28 23:34:25.210608] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd 
UBLK_CMD_START_DEV 00:13:37.251 [2024-09-28 23:34:25.227528] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:13:37.251 0 00:13:37.251 23:34:25 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.251 23:34:25 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:13:37.251 23:34:25 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.251 23:34:25 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:37.512 23:34:25 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.512 23:34:25 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:13:37.512 "subsystems": [ 00:13:37.512 { 00:13:37.512 "subsystem": "fsdev", 00:13:37.512 "config": [ 00:13:37.512 { 00:13:37.512 "method": "fsdev_set_opts", 00:13:37.512 "params": { 00:13:37.512 "fsdev_io_pool_size": 65535, 00:13:37.512 "fsdev_io_cache_size": 256 00:13:37.512 } 00:13:37.512 } 00:13:37.512 ] 00:13:37.512 }, 00:13:37.512 { 00:13:37.512 "subsystem": "keyring", 00:13:37.512 "config": [] 00:13:37.512 }, 00:13:37.512 { 00:13:37.512 "subsystem": "iobuf", 00:13:37.512 "config": [ 00:13:37.512 { 00:13:37.512 "method": "iobuf_set_options", 00:13:37.512 "params": { 00:13:37.512 "small_pool_count": 8192, 00:13:37.512 "large_pool_count": 1024, 00:13:37.512 "small_bufsize": 8192, 00:13:37.512 "large_bufsize": 135168 00:13:37.512 } 00:13:37.512 } 00:13:37.512 ] 00:13:37.512 }, 00:13:37.512 { 00:13:37.512 "subsystem": "sock", 00:13:37.512 "config": [ 00:13:37.512 { 00:13:37.512 "method": "sock_set_default_impl", 00:13:37.512 "params": { 00:13:37.512 "impl_name": "posix" 00:13:37.512 } 00:13:37.512 }, 00:13:37.512 { 00:13:37.512 "method": "sock_impl_set_options", 00:13:37.512 "params": { 00:13:37.512 "impl_name": "ssl", 00:13:37.512 "recv_buf_size": 4096, 00:13:37.512 "send_buf_size": 4096, 00:13:37.512 "enable_recv_pipe": true, 00:13:37.512 "enable_quickack": false, 00:13:37.512 "enable_placement_id": 0, 00:13:37.512 "enable_zerocopy_send_server": true, 00:13:37.512 "enable_zerocopy_send_client": false, 00:13:37.512 "zerocopy_threshold": 0, 00:13:37.512 "tls_version": 0, 00:13:37.512 "enable_ktls": false 00:13:37.512 } 00:13:37.512 }, 00:13:37.512 { 00:13:37.512 "method": "sock_impl_set_options", 00:13:37.512 "params": { 00:13:37.512 "impl_name": "posix", 00:13:37.512 "recv_buf_size": 2097152, 00:13:37.512 "send_buf_size": 2097152, 00:13:37.512 "enable_recv_pipe": true, 00:13:37.512 "enable_quickack": false, 00:13:37.512 "enable_placement_id": 0, 00:13:37.512 "enable_zerocopy_send_server": true, 00:13:37.512 "enable_zerocopy_send_client": false, 00:13:37.512 "zerocopy_threshold": 0, 00:13:37.512 "tls_version": 0, 00:13:37.512 "enable_ktls": false 00:13:37.512 } 00:13:37.512 } 00:13:37.512 ] 00:13:37.512 }, 00:13:37.512 { 00:13:37.512 "subsystem": "vmd", 00:13:37.512 "config": [] 00:13:37.512 }, 00:13:37.512 { 00:13:37.512 "subsystem": "accel", 00:13:37.512 "config": [ 00:13:37.513 { 00:13:37.513 "method": "accel_set_options", 00:13:37.513 "params": { 00:13:37.513 "small_cache_size": 128, 00:13:37.513 "large_cache_size": 16, 00:13:37.513 "task_count": 2048, 00:13:37.513 "sequence_count": 2048, 00:13:37.513 "buf_count": 2048 00:13:37.513 } 00:13:37.513 } 00:13:37.513 ] 00:13:37.513 }, 00:13:37.513 { 00:13:37.513 "subsystem": "bdev", 00:13:37.513 "config": [ 00:13:37.513 { 00:13:37.513 "method": "bdev_set_options", 00:13:37.513 "params": { 00:13:37.513 
"bdev_io_pool_size": 65535, 00:13:37.513 "bdev_io_cache_size": 256, 00:13:37.513 "bdev_auto_examine": true, 00:13:37.513 "iobuf_small_cache_size": 128, 00:13:37.513 "iobuf_large_cache_size": 16 00:13:37.513 } 00:13:37.513 }, 00:13:37.513 { 00:13:37.513 "method": "bdev_raid_set_options", 00:13:37.513 "params": { 00:13:37.513 "process_window_size_kb": 1024, 00:13:37.513 "process_max_bandwidth_mb_sec": 0 00:13:37.513 } 00:13:37.513 }, 00:13:37.513 { 00:13:37.513 "method": "bdev_iscsi_set_options", 00:13:37.513 "params": { 00:13:37.513 "timeout_sec": 30 00:13:37.513 } 00:13:37.513 }, 00:13:37.513 { 00:13:37.513 "method": "bdev_nvme_set_options", 00:13:37.513 "params": { 00:13:37.513 "action_on_timeout": "none", 00:13:37.513 "timeout_us": 0, 00:13:37.513 "timeout_admin_us": 0, 00:13:37.513 "keep_alive_timeout_ms": 10000, 00:13:37.513 "arbitration_burst": 0, 00:13:37.513 "low_priority_weight": 0, 00:13:37.513 "medium_priority_weight": 0, 00:13:37.513 "high_priority_weight": 0, 00:13:37.513 "nvme_adminq_poll_period_us": 10000, 00:13:37.513 "nvme_ioq_poll_period_us": 0, 00:13:37.513 "io_queue_requests": 0, 00:13:37.513 "delay_cmd_submit": true, 00:13:37.513 "transport_retry_count": 4, 00:13:37.513 "bdev_retry_count": 3, 00:13:37.513 "transport_ack_timeout": 0, 00:13:37.513 "ctrlr_loss_timeout_sec": 0, 00:13:37.513 "reconnect_delay_sec": 0, 00:13:37.513 "fast_io_fail_timeout_sec": 0, 00:13:37.513 "disable_auto_failback": false, 00:13:37.513 "generate_uuids": false, 00:13:37.513 "transport_tos": 0, 00:13:37.513 "nvme_error_stat": false, 00:13:37.513 "rdma_srq_size": 0, 00:13:37.513 "io_path_stat": false, 00:13:37.513 "allow_accel_sequence": false, 00:13:37.513 "rdma_max_cq_size": 0, 00:13:37.513 "rdma_cm_event_timeout_ms": 0, 00:13:37.513 "dhchap_digests": [ 00:13:37.513 "sha256", 00:13:37.513 "sha384", 00:13:37.513 "sha512" 00:13:37.513 ], 00:13:37.513 "dhchap_dhgroups": [ 00:13:37.513 "null", 00:13:37.513 "ffdhe2048", 00:13:37.513 "ffdhe3072", 00:13:37.513 "ffdhe4096", 00:13:37.513 "ffdhe6144", 00:13:37.513 "ffdhe8192" 00:13:37.513 ] 00:13:37.513 } 00:13:37.513 }, 00:13:37.513 { 00:13:37.513 "method": "bdev_nvme_set_hotplug", 00:13:37.513 "params": { 00:13:37.513 "period_us": 100000, 00:13:37.513 "enable": false 00:13:37.513 } 00:13:37.513 }, 00:13:37.513 { 00:13:37.513 "method": "bdev_malloc_create", 00:13:37.513 "params": { 00:13:37.513 "name": "malloc0", 00:13:37.513 "num_blocks": 8192, 00:13:37.513 "block_size": 4096, 00:13:37.513 "physical_block_size": 4096, 00:13:37.513 "uuid": "e02002b5-6698-47e7-8884-094692c28a2b", 00:13:37.513 "optimal_io_boundary": 0, 00:13:37.513 "md_size": 0, 00:13:37.513 "dif_type": 0, 00:13:37.513 "dif_is_head_of_md": false, 00:13:37.513 "dif_pi_format": 0 00:13:37.513 } 00:13:37.513 }, 00:13:37.513 { 00:13:37.513 "method": "bdev_wait_for_examine" 00:13:37.513 } 00:13:37.513 ] 00:13:37.513 }, 00:13:37.513 { 00:13:37.513 "subsystem": "scsi", 00:13:37.513 "config": null 00:13:37.513 }, 00:13:37.513 { 00:13:37.513 "subsystem": "scheduler", 00:13:37.513 "config": [ 00:13:37.513 { 00:13:37.513 "method": "framework_set_scheduler", 00:13:37.513 "params": { 00:13:37.513 "name": "static" 00:13:37.513 } 00:13:37.513 } 00:13:37.513 ] 00:13:37.513 }, 00:13:37.513 { 00:13:37.513 "subsystem": "vhost_scsi", 00:13:37.513 "config": [] 00:13:37.513 }, 00:13:37.513 { 00:13:37.513 "subsystem": "vhost_blk", 00:13:37.513 "config": [] 00:13:37.513 }, 00:13:37.513 { 00:13:37.513 "subsystem": "ublk", 00:13:37.513 "config": [ 00:13:37.513 { 00:13:37.513 "method": "ublk_create_target", 
00:13:37.513 "params": { 00:13:37.513 "cpumask": "1" 00:13:37.513 } 00:13:37.513 }, 00:13:37.513 { 00:13:37.513 "method": "ublk_start_disk", 00:13:37.513 "params": { 00:13:37.513 "bdev_name": "malloc0", 00:13:37.513 "ublk_id": 0, 00:13:37.513 "num_queues": 1, 00:13:37.513 "queue_depth": 128 00:13:37.513 } 00:13:37.513 } 00:13:37.513 ] 00:13:37.513 }, 00:13:37.513 { 00:13:37.513 "subsystem": "nbd", 00:13:37.513 "config": [] 00:13:37.513 }, 00:13:37.513 { 00:13:37.513 "subsystem": "nvmf", 00:13:37.513 "config": [ 00:13:37.513 { 00:13:37.513 "method": "nvmf_set_config", 00:13:37.513 "params": { 00:13:37.513 "discovery_filter": "match_any", 00:13:37.513 "admin_cmd_passthru": { 00:13:37.513 "identify_ctrlr": false 00:13:37.513 }, 00:13:37.513 "dhchap_digests": [ 00:13:37.513 "sha256", 00:13:37.513 "sha384", 00:13:37.513 "sha512" 00:13:37.513 ], 00:13:37.513 "dhchap_dhgroups": [ 00:13:37.513 "null", 00:13:37.513 "ffdhe2048", 00:13:37.513 "ffdhe3072", 00:13:37.513 "ffdhe4096", 00:13:37.513 "ffdhe6144", 00:13:37.513 "ffdhe8192" 00:13:37.513 ] 00:13:37.513 } 00:13:37.513 }, 00:13:37.513 { 00:13:37.513 "method": "nvmf_set_max_subsystems", 00:13:37.513 "params": { 00:13:37.513 "max_subsystems": 1024 00:13:37.513 } 00:13:37.513 }, 00:13:37.513 { 00:13:37.513 "method": "nvmf_set_crdt", 00:13:37.513 "params": { 00:13:37.513 "crdt1": 0, 00:13:37.513 "crdt2": 0, 00:13:37.513 "crdt3": 0 00:13:37.513 } 00:13:37.513 } 00:13:37.513 ] 00:13:37.513 }, 00:13:37.513 { 00:13:37.513 "subsystem": "iscsi", 00:13:37.513 "config": [ 00:13:37.513 { 00:13:37.513 "method": "iscsi_set_options", 00:13:37.513 "params": { 00:13:37.513 "node_base": "iqn.2016-06.io.spdk", 00:13:37.513 "max_sessions": 128, 00:13:37.513 "max_connections_per_session": 2, 00:13:37.513 "max_queue_depth": 64, 00:13:37.513 "default_time2wait": 2, 00:13:37.513 "default_time2retain": 20, 00:13:37.513 "first_burst_length": 8192, 00:13:37.513 "immediate_data": true, 00:13:37.513 "allow_duplicated_isid": false, 00:13:37.513 "error_recovery_level": 0, 00:13:37.513 "nop_timeout": 60, 00:13:37.513 "nop_in_interval": 30, 00:13:37.513 "disable_chap": false, 00:13:37.513 "require_chap": false, 00:13:37.513 "mutual_chap": false, 00:13:37.513 "chap_group": 0, 00:13:37.513 "max_large_datain_per_connection": 64, 00:13:37.513 "max_r2t_per_connection": 4, 00:13:37.513 "pdu_pool_size": 36864, 00:13:37.513 "immediate_data_pool_size": 16384, 00:13:37.513 "data_out_pool_size": 2048 00:13:37.513 } 00:13:37.513 } 00:13:37.513 ] 00:13:37.513 } 00:13:37.513 ] 00:13:37.513 }' 00:13:37.513 23:34:25 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 71284 00:13:37.513 23:34:25 ublk.test_save_ublk_config -- common/autotest_common.sh@950 -- # '[' -z 71284 ']' 00:13:37.513 23:34:25 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # kill -0 71284 00:13:37.513 23:34:25 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # uname 00:13:37.513 23:34:25 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:37.513 23:34:25 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71284 00:13:37.513 killing process with pid 71284 00:13:37.513 23:34:25 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:37.513 23:34:25 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:37.513 23:34:25 ublk.test_save_ublk_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71284' 00:13:37.513 
23:34:25 ublk.test_save_ublk_config -- common/autotest_common.sh@969 -- # kill 71284 00:13:37.513 23:34:25 ublk.test_save_ublk_config -- common/autotest_common.sh@974 -- # wait 71284 00:13:38.456 [2024-09-28 23:34:26.421925] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:13:38.456 [2024-09-28 23:34:26.461541] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:38.456 [2024-09-28 23:34:26.461640] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:13:38.456 [2024-09-28 23:34:26.470536] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:38.456 [2024-09-28 23:34:26.470578] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:13:38.456 [2024-09-28 23:34:26.470586] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:13:38.456 [2024-09-28 23:34:26.470605] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:13:38.456 [2024-09-28 23:34:26.470711] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:13:39.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:39.842 23:34:27 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=71333 00:13:39.842 23:34:27 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 71333 00:13:39.842 23:34:27 ublk.test_save_ublk_config -- common/autotest_common.sh@831 -- # '[' -z 71333 ']' 00:13:39.842 23:34:27 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:39.842 23:34:27 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:39.842 23:34:27 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
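The restart below is the actual save/restore check: the JSON captured with rpc_cmd save_config above is echoed back into a fresh spdk_tgt through bash process substitution, which is why the -c argument shows up as /dev/fd/63 in the trace. A minimal sketch of the same round-trip, assuming the stock rpc.py client shipped in the SPDK scripts directory:

# sketch: what ublk.sh's save/restore round-trip amounts to
./scripts/rpc.py save_config > saved.json             # capture live target state
./scripts/rpc.py save_config | jq '.subsystems[] | select(.subsystem == "ublk")'    # just the ublk entries from the dump above
./build/bin/spdk_tgt -L ublk -c <(cat saved.json)     # replay it; <(...) appears as /dev/fd/63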
00:13:39.842 23:34:27 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:13:39.842 23:34:27 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:39.842 23:34:27 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:13:39.842 "subsystems": [ 00:13:39.842 { 00:13:39.842 "subsystem": "fsdev", 00:13:39.842 "config": [ 00:13:39.842 { 00:13:39.842 "method": "fsdev_set_opts", 00:13:39.842 "params": { 00:13:39.842 "fsdev_io_pool_size": 65535, 00:13:39.842 "fsdev_io_cache_size": 256 00:13:39.842 } 00:13:39.842 } 00:13:39.842 ] 00:13:39.842 }, 00:13:39.842 { 00:13:39.842 "subsystem": "keyring", 00:13:39.842 "config": [] 00:13:39.842 }, 00:13:39.842 { 00:13:39.842 "subsystem": "iobuf", 00:13:39.842 "config": [ 00:13:39.842 { 00:13:39.842 "method": "iobuf_set_options", 00:13:39.842 "params": { 00:13:39.842 "small_pool_count": 8192, 00:13:39.842 "large_pool_count": 1024, 00:13:39.842 "small_bufsize": 8192, 00:13:39.842 "large_bufsize": 135168 00:13:39.842 } 00:13:39.842 } 00:13:39.842 ] 00:13:39.842 }, 00:13:39.842 { 00:13:39.842 "subsystem": "sock", 00:13:39.842 "config": [ 00:13:39.842 { 00:13:39.842 "method": "sock_set_default_impl", 00:13:39.842 "params": { 00:13:39.842 "impl_name": "posix" 00:13:39.842 } 00:13:39.842 }, 00:13:39.842 { 00:13:39.842 "method": "sock_impl_set_options", 00:13:39.842 "params": { 00:13:39.842 "impl_name": "ssl", 00:13:39.842 "recv_buf_size": 4096, 00:13:39.842 "send_buf_size": 4096, 00:13:39.842 "enable_recv_pipe": true, 00:13:39.842 "enable_quickack": false, 00:13:39.842 "enable_placement_id": 0, 00:13:39.842 "enable_zerocopy_send_server": true, 00:13:39.842 "enable_zerocopy_send_client": false, 00:13:39.842 "zerocopy_threshold": 0, 00:13:39.842 "tls_version": 0, 00:13:39.842 "enable_ktls": false 00:13:39.842 } 00:13:39.842 }, 00:13:39.842 { 00:13:39.842 "method": "sock_impl_set_options", 00:13:39.842 "params": { 00:13:39.842 "impl_name": "posix", 00:13:39.842 "recv_buf_size": 2097152, 00:13:39.842 "send_buf_size": 2097152, 00:13:39.842 "enable_recv_pipe": true, 00:13:39.842 "enable_quickack": false, 00:13:39.842 "enable_placement_id": 0, 00:13:39.842 "enable_zerocopy_send_server": true, 00:13:39.842 "enable_zerocopy_send_client": false, 00:13:39.842 "zerocopy_threshold": 0, 00:13:39.842 "tls_version": 0, 00:13:39.842 "enable_ktls": false 00:13:39.842 } 00:13:39.842 } 00:13:39.842 ] 00:13:39.842 }, 00:13:39.842 { 00:13:39.842 "subsystem": "vmd", 00:13:39.842 "config": [] 00:13:39.842 }, 00:13:39.842 { 00:13:39.842 "subsystem": "accel", 00:13:39.842 "config": [ 00:13:39.842 { 00:13:39.842 "method": "accel_set_options", 00:13:39.842 "params": { 00:13:39.842 "small_cache_size": 128, 00:13:39.842 "large_cache_size": 16, 00:13:39.842 "task_count": 2048, 00:13:39.842 "sequence_count": 2048, 00:13:39.842 "buf_count": 2048 00:13:39.842 } 00:13:39.842 } 00:13:39.842 ] 00:13:39.842 }, 00:13:39.842 { 00:13:39.842 "subsystem": "bdev", 00:13:39.842 "config": [ 00:13:39.842 { 00:13:39.842 "method": "bdev_set_options", 00:13:39.842 "params": { 00:13:39.842 "bdev_io_pool_size": 65535, 00:13:39.842 "bdev_io_cache_size": 256, 00:13:39.842 "bdev_auto_examine": true, 00:13:39.842 "iobuf_small_cache_size": 128, 00:13:39.842 "iobuf_large_cache_size": 16 00:13:39.842 } 00:13:39.842 }, 00:13:39.842 { 00:13:39.842 "method": "bdev_raid_set_options", 00:13:39.842 "params": { 00:13:39.842 "process_window_size_kb": 1024, 00:13:39.842 "process_max_bandwidth_mb_sec": 0 00:13:39.842 } 
00:13:39.842 }, 00:13:39.842 { 00:13:39.842 "method": "bdev_iscsi_set_options", 00:13:39.842 "params": { 00:13:39.842 "timeout_sec": 30 00:13:39.842 } 00:13:39.842 }, 00:13:39.842 { 00:13:39.842 "method": "bdev_nvme_set_options", 00:13:39.842 "params": { 00:13:39.842 "action_on_timeout": "none", 00:13:39.842 "timeout_us": 0, 00:13:39.842 "timeout_admin_us": 0, 00:13:39.842 "keep_alive_timeout_ms": 10000, 00:13:39.842 "arbitration_burst": 0, 00:13:39.842 "low_priority_weight": 0, 00:13:39.842 "medium_priority_weight": 0, 00:13:39.842 "high_priority_weight": 0, 00:13:39.842 "nvme_adminq_poll_period_us": 10000, 00:13:39.842 "nvme_ioq_poll_period_us": 0, 00:13:39.842 "io_queue_requests": 0, 00:13:39.842 "delay_cmd_submit": true, 00:13:39.842 "transport_retry_count": 4, 00:13:39.842 "bdev_retry_count": 3, 00:13:39.842 "transport_ack_timeout": 0, 00:13:39.842 "ctrlr_loss_timeout_sec": 0, 00:13:39.842 "reconnect_delay_sec": 0, 00:13:39.842 "fast_io_fail_timeout_sec": 0, 00:13:39.842 "disable_auto_failback": false, 00:13:39.842 "generate_uuids": false, 00:13:39.842 "transport_tos": 0, 00:13:39.842 "nvme_error_stat": false, 00:13:39.842 "rdma_srq_size": 0, 00:13:39.842 "io_path_stat": false, 00:13:39.842 "allow_accel_sequence": false, 00:13:39.842 "rdma_max_cq_size": 0, 00:13:39.842 "rdma_cm_event_timeout_ms": 0, 00:13:39.842 "dhchap_digests": [ 00:13:39.842 "sha256", 00:13:39.842 "sha384", 00:13:39.842 "sha512" 00:13:39.842 ], 00:13:39.842 "dhchap_dhgroups": [ 00:13:39.842 "null", 00:13:39.842 "ffdhe2048", 00:13:39.842 "ffdhe3072", 00:13:39.842 "ffdhe4096", 00:13:39.843 "ffdhe6144", 00:13:39.843 "ffdhe8192" 00:13:39.843 ] 00:13:39.843 } 00:13:39.843 }, 00:13:39.843 { 00:13:39.843 "method": "bdev_nvme_set_hotplug", 00:13:39.843 "params": { 00:13:39.843 "period_us": 100000, 00:13:39.843 "enable": false 00:13:39.843 } 00:13:39.843 }, 00:13:39.843 { 00:13:39.843 "method": "bdev_malloc_create", 00:13:39.843 "params": { 00:13:39.843 "name": "malloc0", 00:13:39.843 "num_blocks": 8192, 00:13:39.843 "block_size": 4096, 00:13:39.843 "physical_block_size": 4096, 00:13:39.843 "uuid": "e02002b5-6698-47e7-8884-094692c28a2b", 00:13:39.843 "optimal_io_boundary": 0, 00:13:39.843 "md_size": 0, 00:13:39.843 "dif_type": 0, 00:13:39.843 "dif_is_head_of_md": false, 00:13:39.843 "dif_pi_format": 0 00:13:39.843 } 00:13:39.843 }, 00:13:39.843 { 00:13:39.843 "method": "bdev_wait_for_examine" 00:13:39.843 } 00:13:39.843 ] 00:13:39.843 }, 00:13:39.843 { 00:13:39.843 "subsystem": "scsi", 00:13:39.843 "config": null 00:13:39.843 }, 00:13:39.843 { 00:13:39.843 "subsystem": "scheduler", 00:13:39.843 "config": [ 00:13:39.843 { 00:13:39.843 "method": "framework_set_scheduler", 00:13:39.843 "params": { 00:13:39.843 "name": "static" 00:13:39.843 } 00:13:39.843 } 00:13:39.843 ] 00:13:39.843 }, 00:13:39.843 { 00:13:39.843 "subsystem": "vhost_scsi", 00:13:39.843 "config": [] 00:13:39.843 }, 00:13:39.843 { 00:13:39.843 "subsystem": "vhost_blk", 00:13:39.843 "config": [] 00:13:39.843 }, 00:13:39.843 { 00:13:39.843 "subsystem": "ublk", 00:13:39.843 "config": [ 00:13:39.843 { 00:13:39.843 "method": "ublk_create_target", 00:13:39.843 "params": { 00:13:39.843 "cpumask": "1" 00:13:39.843 } 00:13:39.843 }, 00:13:39.843 { 00:13:39.843 "method": "ublk_start_disk", 00:13:39.843 "params": { 00:13:39.843 "bdev_name": "malloc0", 00:13:39.843 "ublk_id": 0, 00:13:39.843 "num_queues": 1, 00:13:39.843 "queue_depth": 128 00:13:39.843 } 00:13:39.843 } 00:13:39.843 ] 00:13:39.843 }, 00:13:39.843 { 00:13:39.843 "subsystem": "nbd", 00:13:39.843 "config": [] 
00:13:39.843 }, 00:13:39.843 { 00:13:39.843 "subsystem": "nvmf", 00:13:39.843 "config": [ 00:13:39.843 { 00:13:39.843 "method": "nvmf_set_config", 00:13:39.843 "params": { 00:13:39.843 "discovery_filter": "match_any", 00:13:39.843 "admin_cmd_passthru": { 00:13:39.843 "identify_ctrlr": false 00:13:39.843 }, 00:13:39.843 "dhchap_digests": [ 00:13:39.843 "sha256", 00:13:39.843 "sha384", 00:13:39.843 "sha512" 00:13:39.843 ], 00:13:39.843 "dhchap_dhgroups": [ 00:13:39.843 "null", 00:13:39.843 "ffdhe2048", 00:13:39.843 "ffdhe3072", 00:13:39.843 "ffdhe4096", 00:13:39.843 "ffdhe6144", 00:13:39.843 "ffdhe8192" 00:13:39.843 ] 00:13:39.843 } 00:13:39.843 }, 00:13:39.843 { 00:13:39.843 "method": "nvmf_set_max_subsystems", 00:13:39.843 "params": { 00:13:39.843 "max_subsystems": 1024 00:13:39.843 } 00:13:39.843 }, 00:13:39.843 { 00:13:39.843 "method": "nvmf_set_crdt", 00:13:39.843 "params": { 00:13:39.843 "crdt1": 0, 00:13:39.843 "crdt2": 0, 00:13:39.843 "crdt3": 0 00:13:39.843 } 00:13:39.843 } 00:13:39.843 ] 00:13:39.843 }, 00:13:39.843 { 00:13:39.843 "subsystem": "iscsi", 00:13:39.843 "config": [ 00:13:39.843 { 00:13:39.843 "method": "iscsi_set_options", 00:13:39.843 "params": { 00:13:39.843 "node_base": "iqn.2016-06.io.spdk", 00:13:39.843 "max_sessions": 128, 00:13:39.843 "max_connections_per_session": 2, 00:13:39.843 "max_queue_depth": 64, 00:13:39.843 "default_time2wait": 2, 00:13:39.843 "default_time2retain": 20, 00:13:39.843 "first_burst_length": 8192, 00:13:39.843 "immediate_data": true, 00:13:39.843 "allow_duplicated_isid": false, 00:13:39.843 "error_recovery_level": 0, 00:13:39.843 "nop_timeout": 60, 00:13:39.843 "nop_in_interval": 30, 00:13:39.843 "disable_chap": false, 00:13:39.843 "require_chap": false, 00:13:39.843 "mutual_chap": false, 00:13:39.843 "chap_group": 0, 00:13:39.843 "max_large_datain_per_connection": 64, 00:13:39.843 "max_r2t_per_connection": 4, 00:13:39.843 "pdu_pool_size": 36864, 00:13:39.843 "immediate_data_pool_size": 16384, 00:13:39.843 "data_out_pool_size": 2048 00:13:39.843 } 00:13:39.843 } 00:13:39.843 ] 00:13:39.843 } 00:13:39.843 ] 00:13:39.843 }' 23:34:27 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:39.843 [2024-09-28 23:34:27.890549] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization...
00:13:39.843 [2024-09-28 23:34:27.890850] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71333 ] 00:13:40.104 [2024-09-28 23:34:28.040265] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:40.104 [2024-09-28 23:34:28.220098] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:41.048 [2024-09-28 23:34:28.851523] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:13:41.048 [2024-09-28 23:34:28.852149] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:41.048 [2024-09-28 23:34:28.859612] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:13:41.048 [2024-09-28 23:34:28.859670] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:13:41.048 [2024-09-28 23:34:28.859676] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:41.048 [2024-09-28 23:34:28.859681] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:41.048 [2024-09-28 23:34:28.868577] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:41.048 [2024-09-28 23:34:28.868595] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:41.048 [2024-09-28 23:34:28.875528] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:41.048 [2024-09-28 23:34:28.875596] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:41.048 [2024-09-28 23:34:28.892533] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:13:41.048 23:34:28 ublk.test_save_ublk_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:41.048 23:34:28 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # return 0 00:13:41.048 23:34:28 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:13:41.048 23:34:28 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.048 23:34:28 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:41.048 23:34:28 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:13:41.048 23:34:28 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.048 23:34:28 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:13:41.048 23:34:28 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:13:41.048 23:34:28 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 71333 00:13:41.048 23:34:28 ublk.test_save_ublk_config -- common/autotest_common.sh@950 -- # '[' -z 71333 ']' 00:13:41.048 23:34:28 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # kill -0 71333 00:13:41.048 23:34:28 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # uname 00:13:41.048 23:34:28 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:41.048 23:34:28 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71333 00:13:41.048 killing process with pid 71333 00:13:41.048 23:34:28 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:41.048 
23:34:28 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:41.048 23:34:28 ublk.test_save_ublk_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71333' 00:13:41.048 23:34:28 ublk.test_save_ublk_config -- common/autotest_common.sh@969 -- # kill 71333 00:13:41.048 23:34:28 ublk.test_save_ublk_config -- common/autotest_common.sh@974 -- # wait 71333 00:13:41.991 [2024-09-28 23:34:29.978971] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:13:41.991 [2024-09-28 23:34:30.024612] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:41.991 [2024-09-28 23:34:30.024732] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:13:41.991 [2024-09-28 23:34:30.032546] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:41.991 [2024-09-28 23:34:30.032597] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:13:41.991 [2024-09-28 23:34:30.032604] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:13:41.991 [2024-09-28 23:34:30.032635] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:13:41.991 [2024-09-28 23:34:30.032779] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:13:43.379 23:34:31 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:13:43.379 00:13:43.379 real 0m7.090s 00:13:43.379 user 0m4.863s 00:13:43.379 sys 0m2.854s 00:13:43.379 23:34:31 ublk.test_save_ublk_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:43.379 23:34:31 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:43.379 ************************************ 00:13:43.379 END TEST test_save_ublk_config 00:13:43.379 ************************************ 00:13:43.379 23:34:31 ublk -- ublk/ublk.sh@139 -- # spdk_pid=71406 00:13:43.379 23:34:31 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:43.379 23:34:31 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:13:43.379 23:34:31 ublk -- ublk/ublk.sh@141 -- # waitforlisten 71406 00:13:43.379 23:34:31 ublk -- common/autotest_common.sh@831 -- # '[' -z 71406 ']' 00:13:43.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:43.379 23:34:31 ublk -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:43.379 23:34:31 ublk -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:43.379 23:34:31 ublk -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:43.379 23:34:31 ublk -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:43.379 23:34:31 ublk -- common/autotest_common.sh@10 -- # set +x 00:13:43.379 [2024-09-28 23:34:31.443910] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
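Note the scheduling change from the save_config runs: this target starts with -m 0x3, and the startup banner below reports two cores, with reactors coming up on core 0 and core 1. The mask is a plain hex bitmap of CPU cores:

# -m <mask>: bit N set means run a reactor on core N
#   0x1 -> core 0 only              (the earlier single-reactor targets)
#   0x3 -> 0b11 -> cores 0 and 1    ("Total cores available: 2" below)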
00:13:43.379 [2024-09-28 23:34:31.444754] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71406 ] 00:13:43.641 [2024-09-28 23:34:31.597118] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:43.641 [2024-09-28 23:34:31.736813] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:13:43.641 [2024-09-28 23:34:31.736894] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:44.213 23:34:32 ublk -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:44.213 23:34:32 ublk -- common/autotest_common.sh@864 -- # return 0 00:13:44.213 23:34:32 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:13:44.213 23:34:32 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:44.213 23:34:32 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:44.213 23:34:32 ublk -- common/autotest_common.sh@10 -- # set +x 00:13:44.213 ************************************ 00:13:44.213 START TEST test_create_ublk 00:13:44.213 ************************************ 00:13:44.213 23:34:32 ublk.test_create_ublk -- common/autotest_common.sh@1125 -- # test_create_ublk 00:13:44.213 23:34:32 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:13:44.213 23:34:32 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.213 23:34:32 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:44.213 [2024-09-28 23:34:32.282527] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:13:44.213 [2024-09-28 23:34:32.283695] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:44.213 23:34:32 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.213 23:34:32 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:13:44.213 23:34:32 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:13:44.213 23:34:32 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.213 23:34:32 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:44.475 23:34:32 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.475 23:34:32 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:13:44.475 23:34:32 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:13:44.475 23:34:32 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.475 23:34:32 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:44.475 [2024-09-28 23:34:32.449636] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:13:44.475 [2024-09-28 23:34:32.449931] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:13:44.475 [2024-09-28 23:34:32.449940] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:44.475 [2024-09-28 23:34:32.449945] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:44.475 [2024-09-28 23:34:32.457540] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:44.475 [2024-09-28 23:34:32.457558] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:44.475 
[2024-09-28 23:34:32.465539] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:44.475 [2024-09-28 23:34:32.466021] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:44.475 [2024-09-28 23:34:32.488532] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:13:44.475 23:34:32 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.475 23:34:32 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:13:44.475 23:34:32 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:13:44.475 23:34:32 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:13:44.475 23:34:32 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.475 23:34:32 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:44.475 23:34:32 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.475 23:34:32 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:13:44.475 { 00:13:44.475 "ublk_device": "/dev/ublkb0", 00:13:44.475 "id": 0, 00:13:44.475 "queue_depth": 512, 00:13:44.475 "num_queues": 4, 00:13:44.475 "bdev_name": "Malloc0" 00:13:44.475 } 00:13:44.475 ]' 00:13:44.476 23:34:32 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:13:44.476 23:34:32 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:13:44.476 23:34:32 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:13:44.476 23:34:32 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:13:44.476 23:34:32 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:13:44.476 23:34:32 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:13:44.476 23:34:32 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:13:44.476 23:34:32 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:13:44.476 23:34:32 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:13:44.737 23:34:32 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:13:44.737 23:34:32 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:13:44.737 23:34:32 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:13:44.737 23:34:32 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:13:44.737 23:34:32 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:13:44.737 23:34:32 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:13:44.737 23:34:32 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:13:44.737 23:34:32 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:13:44.737 23:34:32 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:13:44.737 23:34:32 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:13:44.737 23:34:32 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:13:44.737 23:34:32 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
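The fio_template assembled above expands into the command that runs next: sequential writes of a 0xcc pattern across the full 128 MiB ublk device for a fixed 10 seconds. Because --time_based keeps writing for the whole runtime, the verify pass never gets to run, which is exactly the "verification read phase will never start" notice fio prints below. An annotated copy of the flags, all taken verbatim from the trace:

fio --name=fio_test --filename=/dev/ublkb0 \
    --offset=0 --size=134217728 \
    --rw=write --direct=1 \
    --time_based --runtime=10 \
    --do_verify=1 --verify=pattern --verify_pattern=0xcc \
    --verify_state_save=0
# --size=134217728: the whole 128 MiB device (FILE_SIZE in ublk.sh above)
# --rw=write --direct=1: sequential O_DIRECT writes
# --time_based --runtime=10: write for a fixed 10 s regardless of size
# --do_verify/--verify_pattern=0xcc: read-back check of the written pattern
# --verify_state_save=0: do not persist verify state between runs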
00:13:44.737 23:34:32 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:13:44.737 fio: verification read phase will never start because write phase uses all of runtime 00:13:44.737 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:13:44.737 fio-3.35 00:13:44.737 Starting 1 process 00:13:54.768 00:13:54.768 fio_test: (groupid=0, jobs=1): err= 0: pid=71451: Sat Sep 28 23:34:42 2024 00:13:54.768 write: IOPS=19.7k, BW=77.1MiB/s (80.9MB/s)(772MiB/10001msec); 0 zone resets 00:13:54.768 clat (usec): min=33, max=4103, avg=49.79, stdev=81.97 00:13:54.768 lat (usec): min=34, max=4103, avg=50.27, stdev=81.99 00:13:54.768 clat percentiles (usec): 00:13:54.768 | 1.00th=[ 38], 5.00th=[ 40], 10.00th=[ 41], 20.00th=[ 43], 00:13:54.768 | 30.00th=[ 44], 40.00th=[ 45], 50.00th=[ 45], 60.00th=[ 47], 00:13:54.768 | 70.00th=[ 48], 80.00th=[ 50], 90.00th=[ 57], 95.00th=[ 63], 00:13:54.768 | 99.00th=[ 73], 99.50th=[ 83], 99.90th=[ 1336], 99.95th=[ 2474], 00:13:54.768 | 99.99th=[ 3359] 00:13:54.768 bw ( KiB/s): min=68360, max=89960, per=99.93%, avg=78944.68, stdev=4774.91, samples=19 00:13:54.768 iops : min=17090, max=22490, avg=19736.16, stdev=1193.76, samples=19 00:13:54.768 lat (usec) : 50=80.97%, 100=18.68%, 250=0.17%, 500=0.04%, 750=0.01% 00:13:54.768 lat (usec) : 1000=0.01% 00:13:54.768 lat (msec) : 2=0.04%, 4=0.07%, 10=0.01% 00:13:54.768 cpu : usr=3.82%, sys=16.64%, ctx=197496, majf=0, minf=796 00:13:54.768 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:54.768 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:54.768 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:54.768 issued rwts: total=0,197512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:54.768 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:54.768 00:13:54.768 Run status group 0 (all jobs): 00:13:54.768 WRITE: bw=77.1MiB/s (80.9MB/s), 77.1MiB/s-77.1MiB/s (80.9MB/s-80.9MB/s), io=772MiB (809MB), run=10001-10001msec 00:13:54.768 00:13:54.768 Disk stats (read/write): 00:13:54.768 ublkb0: ios=0/195320, merge=0/0, ticks=0/7939, in_queue=7940, util=99.10% 00:13:54.768 23:34:42 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:13:54.768 23:34:42 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.768 23:34:42 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:54.768 [2024-09-28 23:34:42.913051] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:13:55.028 [2024-09-28 23:34:42.948970] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:55.028 [2024-09-28 23:34:42.949865] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:13:55.028 [2024-09-28 23:34:42.956541] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:55.028 [2024-09-28 23:34:42.956765] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:13:55.028 [2024-09-28 23:34:42.956775] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:13:55.028 23:34:42 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.028 23:34:42 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd 
ublk_stop_disk 0 00:13:55.028 23:34:42 ublk.test_create_ublk -- common/autotest_common.sh@650 -- # local es=0 00:13:55.028 23:34:42 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:13:55.028 23:34:42 ublk.test_create_ublk -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:13:55.028 23:34:42 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:55.028 23:34:42 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:13:55.028 23:34:42 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:55.028 23:34:42 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # rpc_cmd ublk_stop_disk 0 00:13:55.028 23:34:42 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.028 23:34:42 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:55.028 [2024-09-28 23:34:42.972585] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:13:55.028 request: 00:13:55.028 { 00:13:55.028 "ublk_id": 0, 00:13:55.028 "method": "ublk_stop_disk", 00:13:55.028 "req_id": 1 00:13:55.028 } 00:13:55.028 Got JSON-RPC error response 00:13:55.028 response: 00:13:55.028 { 00:13:55.028 "code": -19, 00:13:55.028 "message": "No such device" 00:13:55.028 } 00:13:55.028 23:34:42 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:13:55.028 23:34:42 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # es=1 00:13:55.028 23:34:42 ublk.test_create_ublk -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:55.028 23:34:42 ublk.test_create_ublk -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:55.028 23:34:42 ublk.test_create_ublk -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:55.028 23:34:42 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:13:55.028 23:34:42 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.028 23:34:42 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:55.028 [2024-09-28 23:34:42.988580] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:13:55.028 [2024-09-28 23:34:42.990397] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:13:55.028 [2024-09-28 23:34:42.990428] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:13:55.028 23:34:42 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.028 23:34:42 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:13:55.028 23:34:42 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.028 23:34:42 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:55.288 23:34:43 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.288 23:34:43 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:13:55.288 23:34:43 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:13:55.288 23:34:43 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.288 23:34:43 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:55.288 23:34:43 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.288 23:34:43 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:13:55.288 23:34:43 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:13:55.288 23:34:43 
00:13:55.288 23:34:43 ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']'
00:13:55.288 23:34:43 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores
00:13:55.288 23:34:43 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:55.288 23:34:43 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x
00:13:55.288 23:34:43 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:55.288 23:34:43 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]'
00:13:55.288 23:34:43 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length
00:13:55.288 ************************************
00:13:55.288 END TEST test_create_ublk
00:13:55.288 ************************************
00:13:55.288 23:34:43 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']'
00:13:55.288
00:13:55.288 real 0m11.162s
00:13:55.288 user 0m0.684s
00:13:55.288 sys 0m1.743s
00:13:55.288 23:34:43 ublk.test_create_ublk -- common/autotest_common.sh@1126 -- # xtrace_disable
00:13:55.288 23:34:43 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x
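For reference, the test_create_ublk pass above boils down to a handful of rpc.py calls plus one fio verify run. A minimal sketch, assuming a running spdk_tgt and the SPDK repo root as the working directory; sizes, IDs and fio flags mirror the log:

# Sketch of the single-ublk lifecycle exercised above (illustrative, not the test script itself).
scripts/rpc.py ublk_create_target
scripts/rpc.py bdev_malloc_create -b Malloc0 128 4096   # 128 MiB malloc bdev, 4 KiB blocks
scripts/rpc.py ublk_start_disk Malloc0 0 -q 4 -d 512    # exposes /dev/ublkb0
fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 \
    --rw=write --direct=1 --time_based --runtime=10 \
    --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0
scripts/rpc.py ublk_stop_disk 0    # a second stop fails with -19 "No such device", as the log shows
scripts/rpc.py ublk_destroy_target
scripts/rpc.py bdev_malloc_delete Malloc0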
00:13:55.548 23:34:43 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk
00:13:55.548 23:34:43 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:13:55.548 23:34:43 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable
00:13:55.548 23:34:43 ublk -- common/autotest_common.sh@10 -- # set +x
00:13:55.548 ************************************
00:13:55.548 START TEST test_create_multi_ublk
00:13:55.548 ************************************
00:13:55.548 23:34:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@1125 -- # test_create_multi_ublk
00:13:55.548 23:34:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target
00:13:55.548 23:34:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:55.548 23:34:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:13:55.548 [2024-09-28 23:34:43.480522] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled
00:13:55.548 [2024-09-28 23:34:43.481634] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully
00:13:55.548 23:34:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:55.548 23:34:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target=
00:13:55.548 23:34:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3
00:13:55.548 23:34:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID)
00:13:55.548 23:34:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096
00:13:55.548 23:34:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:55.548 23:34:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:13:55.548 23:34:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:55.548 23:34:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0
00:13:55.548 23:34:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512
00:13:55.548 23:34:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:55.548 23:34:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:13:55.549 [2024-09-28 23:34:43.692628] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512
00:13:55.549 [2024-09-28 23:34:43.692922] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0
00:13:55.549 [2024-09-28 23:34:43.692928] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq
00:13:55.549 [2024-09-28 23:34:43.692936] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV
00:13:55.549 [2024-09-28 23:34:43.704562] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed
00:13:55.549 [2024-09-28 23:34:43.704583] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS
00:13:55.808 [2024-09-28 23:34:43.716531] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed
00:13:55.808 [2024-09-28 23:34:43.717019] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV
00:13:55.808 [2024-09-28 23:34:43.743536] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed
00:13:55.808 23:34:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:55.808 23:34:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0
00:13:55.808 23:34:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID)
00:13:55.808 23:34:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096
00:13:55.808 23:34:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:55.808 23:34:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:13:55.808 23:34:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:55.808 23:34:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1
00:13:55.808 23:34:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512
00:13:55.808 23:34:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:55.808 23:34:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:13:55.808 [2024-09-28 23:34:43.973623] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512
00:13:55.808 [2024-09-28 23:34:43.973913] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1
00:13:55.808 [2024-09-28 23:34:43.973922] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq
00:13:55.808 [2024-09-28 23:34:43.973933] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV
00:13:56.067 [2024-09-28 23:34:43.981536] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed
00:13:56.067 [2024-09-28 23:34:43.981548] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS
00:13:56.067 [2024-09-28 23:34:43.989535] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed
00:13:56.067 [2024-09-28 23:34:43.990019] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV
00:13:56.067 [2024-09-28 23:34:44.006535] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed
00:13:56.067 23:34:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:56.067 23:34:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1
00:13:56.067 23:34:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID)
00:13:56.067 23:34:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096
00:13:56.067 23:34:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:56.067 23:34:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:13:56.067 23:34:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:56.067 23:34:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2
00:13:56.067 23:34:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512
00:13:56.067 23:34:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:56.067 23:34:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:13:56.067 [2024-09-28 23:34:44.165627] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512
00:13:56.067 [2024-09-28 23:34:44.165923] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2
00:13:56.067 [2024-09-28 23:34:44.165935] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq
00:13:56.067 [2024-09-28 23:34:44.165941] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV
00:13:56.067 [2024-09-28 23:34:44.173537] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed
00:13:56.067 [2024-09-28 23:34:44.173557] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS
00:13:56.067 [2024-09-28 23:34:44.181528] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed
00:13:56.067 [2024-09-28 23:34:44.182027] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV
00:13:56.067 [2024-09-28 23:34:44.190552] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed
00:13:56.067 23:34:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:56.067 23:34:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2
00:13:56.067 23:34:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID)
00:13:56.067 23:34:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096
00:13:56.067 23:34:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:56.067 23:34:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:13:56.326 23:34:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:56.326 23:34:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3
00:13:56.326 23:34:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512
00:13:56.326 23:34:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:56.326 23:34:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:13:56.326 [2024-09-28 23:34:44.349625] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512
00:13:56.326 [2024-09-28 23:34:44.349906] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3
00:13:56.326 [2024-09-28 23:34:44.349919] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq
00:13:56.326 [2024-09-28 23:34:44.349924] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV
00:13:56.326 [2024-09-28 23:34:44.357550] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed
00:13:56.326 [2024-09-28 23:34:44.357567] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS
00:13:56.326 [2024-09-28 23:34:44.365537] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed
00:13:56.326 [2024-09-28 23:34:44.366019] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV
00:13:56.326 [2024-09-28 23:34:44.370191] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed
00:13:56.326 23:34:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:56.326 23:34:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3
00:13:56.326 23:34:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks
00:13:56.326 23:34:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:56.326 23:34:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:13:56.326 23:34:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:56.326 23:34:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[
00:13:56.326 {
00:13:56.326 "ublk_device": "/dev/ublkb0",
00:13:56.326 "id": 0,
00:13:56.326 "queue_depth": 512,
00:13:56.326 "num_queues": 4,
00:13:56.326 "bdev_name": "Malloc0"
00:13:56.326 },
00:13:56.326 {
00:13:56.326 "ublk_device": "/dev/ublkb1",
00:13:56.326 "id": 1,
00:13:56.326 "queue_depth": 512,
00:13:56.326 "num_queues": 4,
00:13:56.326 "bdev_name": "Malloc1"
00:13:56.326 },
00:13:56.326 {
00:13:56.326 "ublk_device": "/dev/ublkb2",
00:13:56.326 "id": 2,
00:13:56.326 "queue_depth": 512,
00:13:56.326 "num_queues": 4,
00:13:56.326 "bdev_name": "Malloc2"
00:13:56.326 },
00:13:56.326 {
00:13:56.326 "ublk_device": "/dev/ublkb3",
00:13:56.326 "id": 3,
00:13:56.326 "queue_depth": 512,
00:13:56.326 "num_queues": 4,
00:13:56.326 "bdev_name": "Malloc3"
00:13:56.326 }
00:13:56.326 ]'
00:13:56.326 23:34:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3
00:13:56.326 23:34:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID)
00:13:56.326 23:34:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device'
00:13:56.326 23:34:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]]
00:13:56.326 23:34:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id'
00:13:56.326 23:34:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]]
00:13:56.326 23:34:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth'
00:13:56.326 23:34:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]]
00:13:56.584 23:34:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues'
00:13:56.584 23:34:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]]
00:13:56.584 23:34:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name'
00:13:56.584 23:34:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]]
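Each entry of the ublk_get_disks array is checked field by field with jq. As a standalone sketch (same assumptions as earlier), the device-0 validation amounts to:

ublk_dev=$(scripts/rpc.py ublk_get_disks)
[[ $(jq -r '.[0].ublk_device' <<< "$ublk_dev") == /dev/ublkb0 ]]
[[ $(jq -r '.[0].id'          <<< "$ublk_dev") == 0 ]]
[[ $(jq -r '.[0].queue_depth' <<< "$ublk_dev") == 512 ]]
[[ $(jq -r '.[0].num_queues'  <<< "$ublk_dev") == 4 ]]
[[ $(jq -r '.[0].bdev_name'   <<< "$ublk_dev") == Malloc0 ]]

The same five checks repeat below for devices 1 through 3.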
00:13:56.584 23:34:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID)
00:13:56.584 23:34:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device'
00:13:56.584 23:34:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]]
00:13:56.584 23:34:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id'
00:13:56.584 23:34:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]]
00:13:56.584 23:34:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth'
00:13:56.584 23:34:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]]
00:13:56.584 23:34:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues'
00:13:56.584 23:34:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]]
00:13:56.584 23:34:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name'
00:13:56.584 23:34:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]]
00:13:56.584 23:34:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID)
00:13:56.584 23:34:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device'
00:13:56.584 23:34:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]]
00:13:56.584 23:34:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id'
00:13:56.843 23:34:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]]
00:13:56.843 23:34:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth'
00:13:56.843 23:34:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]]
00:13:56.843 23:34:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues'
00:13:56.843 23:34:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]]
00:13:56.843 23:34:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name'
00:13:56.843 23:34:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]]
00:13:56.843 23:34:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID)
00:13:56.843 23:34:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device'
00:13:56.843 23:34:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]]
00:13:56.843 23:34:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id'
00:13:56.843 23:34:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]]
00:13:56.843 23:34:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth'
00:13:56.843 23:34:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]]
00:13:56.843 23:34:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues'
00:13:56.843 23:34:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]]
00:13:56.843 23:34:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name'
00:13:57.101 23:34:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]]
00:13:57.101 23:34:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]]
00:13:57.101 23:34:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3
00:13:57.101 23:34:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID)
00:13:57.101 23:34:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0
00:13:57.101 23:34:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:57.101 23:34:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:13:57.101 [2024-09-28 23:34:45.033607] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV
00:13:57.101 [2024-09-28 23:34:45.073531] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed
00:13:57.101 [2024-09-28 23:34:45.074261] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV
00:13:57.101 [2024-09-28 23:34:45.081535] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed
00:13:57.101 [2024-09-28 23:34:45.081758] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq
00:13:57.101 [2024-09-28 23:34:45.081770] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped
00:13:57.101 23:34:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:57.101 23:34:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID)
00:13:57.101 23:34:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1
00:13:57.101 23:34:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:57.101 23:34:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:13:57.101 [2024-09-28 23:34:45.097578] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV
00:13:57.101 [2024-09-28 23:34:45.137532] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed
00:13:57.101 [2024-09-28 23:34:45.138207] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV
00:13:57.101 [2024-09-28 23:34:45.145532] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed
00:13:57.101 [2024-09-28 23:34:45.145745] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq
00:13:57.101 [2024-09-28 23:34:45.145757] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped
00:13:57.101 23:34:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:57.101 23:34:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID)
00:13:57.101 23:34:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2
00:13:57.101 23:34:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:57.101 23:34:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:13:57.101 [2024-09-28 23:34:45.161583] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV
00:13:57.101 [2024-09-28 23:34:45.201939] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed
00:13:57.101 [2024-09-28 23:34:45.202905] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV
00:13:57.101 [2024-09-28 23:34:45.209532] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed
00:13:57.101 [2024-09-28 23:34:45.209741] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq
00:13:57.101 [2024-09-28 23:34:45.209754] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped
00:13:57.101 23:34:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:57.101 23:34:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID)
00:13:57.101 23:34:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3
00:13:57.101 23:34:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:57.101 23:34:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:13:57.101 [2024-09-28 23:34:45.225588] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV
00:13:57.101 [2024-09-28 23:34:45.264923] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed
00:13:57.101 [2024-09-28 23:34:45.265866] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV
00:13:57.359 [2024-09-28 23:34:45.273532] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed
00:13:57.359 [2024-09-28 23:34:45.273740] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq
00:13:57.359 [2024-09-28 23:34:45.273753] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped
00:13:57.359 23:34:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:57.359 23:34:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target
00:13:57.359 [2024-09-28 23:34:45.465579] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown
00:13:57.359 [2024-09-28 23:34:45.467387] ublk.c: 766:_ublk_fini_done: *DEBUG*:
00:13:57.359 [2024-09-28 23:34:45.467415] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed
00:13:57.359 23:34:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3
00:13:57.359 23:34:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID)
00:13:57.359 23:34:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0
00:13:57.359 23:34:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:57.359 23:34:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:13:57.924 23:34:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:57.924 23:34:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID)
00:13:57.924 23:34:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1
00:13:57.924 23:34:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:57.924 23:34:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:13:58.182 23:34:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:58.182 23:34:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID)
00:13:58.182 23:34:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2
00:13:58.182 23:34:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:58.182 23:34:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:13:58.439 23:34:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:58.439 23:34:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID)
00:13:58.439 23:34:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3
00:13:58.439 23:34:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:58.439 23:34:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:13:58.439 23:34:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:58.439 23:34:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices
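The teardown above mirrors the setup loop: stop every disk, destroy the target, then delete the malloc bdevs. Condensed, under the same assumptions as the earlier sketch:

for i in $(seq 0 3); do scripts/rpc.py ublk_stop_disk "$i"; done
scripts/rpc.py -t 120 ublk_destroy_target   # destroy can block while queues drain, hence the long timeout
for i in $(seq 0 3); do scripts/rpc.py bdev_malloc_delete "Malloc$i"; done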
00:13:58.439 23:34:46 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs
00:13:58.439 23:34:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:58.439 23:34:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:13:58.439 23:34:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:58.439 23:34:46 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]'
00:13:58.439 23:34:46 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length
00:13:58.696 23:34:46 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']'
00:13:58.696 23:34:46 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores
00:13:58.696 23:34:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:58.696 23:34:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:13:58.696 23:34:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:58.696 23:34:46 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]'
00:13:58.696 23:34:46 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length
00:13:58.696 23:34:46 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']'
00:13:58.696
00:13:58.696 real 0m3.192s
00:13:58.696 user 0m0.810s
00:13:58.696 sys 0m0.148s
00:13:58.696 23:34:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@1126 -- # xtrace_disable
00:13:58.696 23:34:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:13:58.696 ************************************
00:13:58.696 END TEST test_create_multi_ublk
00:13:58.696 ************************************
00:13:58.696 23:34:46 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT
00:13:58.696 23:34:46 ublk -- ublk/ublk.sh@147 -- # cleanup
00:13:58.696 23:34:46 ublk -- ublk/ublk.sh@130 -- # killprocess 71406
00:13:58.696 23:34:46 ublk -- common/autotest_common.sh@950 -- # '[' -z 71406 ']'
00:13:58.696 23:34:46 ublk -- common/autotest_common.sh@954 -- # kill -0 71406
00:13:58.696 23:34:46 ublk -- common/autotest_common.sh@955 -- # uname
00:13:58.696 23:34:46 ublk -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:13:58.696 23:34:46 ublk -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71406
00:13:58.696 23:34:46 ublk -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:13:58.696 23:34:46 ublk -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:13:58.696 killing process with pid 71406
00:13:58.696 23:34:46 ublk -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71406'
00:13:58.696 23:34:46 ublk -- common/autotest_common.sh@969 -- # kill 71406
00:13:58.696 23:34:46 ublk -- common/autotest_common.sh@974 -- # wait 71406
00:13:59.261 [2024-09-28 23:34:47.240961] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown
00:13:59.261 [2024-09-28 23:34:47.241007] ublk.c: 766:_ublk_fini_done: *DEBUG*:
00:13:59.829
00:13:59.829 real 0m23.892s
00:13:59.829 user 0m34.503s
00:13:59.829 sys 0m9.561s
00:13:59.829 23:34:47 ublk -- common/autotest_common.sh@1126 -- # xtrace_disable
00:13:59.829 23:34:47 ublk -- common/autotest_common.sh@10 -- # set +x
00:13:59.829 ************************************
00:13:59.829 END TEST ublk
00:13:59.829 ************************************
00:13:59.829 23:34:47 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh
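check_leftover_devices (from test/lvol/common.sh) just asserts that the teardown left nothing behind; in essence it is:

leftover_bdevs=$(scripts/rpc.py bdev_get_bdevs)
[ "$(jq length <<< "$leftover_bdevs")" == 0 ]    # no bdevs survive the test
leftover_lvs=$(scripts/rpc.py bdev_lvol_get_lvstores)
[ "$(jq length <<< "$leftover_lvs")" == 0 ]      # and no lvstores either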
00:13:59.829 23:34:47 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:13:59.829 23:34:47 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:13:59.829 23:34:47 -- common/autotest_common.sh@10 -- # set +x
00:14:00.087 ************************************
00:14:00.087 START TEST ublk_recovery
00:14:00.087 ************************************
00:14:00.087 23:34:48 ublk_recovery -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh
00:14:00.087 * Looking for test storage...
00:14:00.087 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk
00:14:00.087 23:34:48 ublk_recovery -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:14:00.087 23:34:48 ublk_recovery -- common/autotest_common.sh@1681 -- # lcov --version
00:14:00.087 23:34:48 ublk_recovery -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:14:00.087 23:34:48 ublk_recovery -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:14:00.087 23:34:48 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:14:00.087 23:34:48 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l
00:14:00.087 23:34:48 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l
00:14:00.087 23:34:48 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-:
00:14:00.087 23:34:48 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1
00:14:00.087 23:34:48 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-:
00:14:00.087 23:34:48 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2
00:14:00.087 23:34:48 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<'
00:14:00.087 23:34:48 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2
00:14:00.087 23:34:48 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1
00:14:00.087 23:34:48 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:14:00.087 23:34:48 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in
00:14:00.087 23:34:48 ublk_recovery -- scripts/common.sh@345 -- # : 1
00:14:00.087 23:34:48 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 ))
00:14:00.087 23:34:48 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:14:00.087 23:34:48 ublk_recovery -- scripts/common.sh@365 -- # decimal 1
00:14:00.087 23:34:48 ublk_recovery -- scripts/common.sh@353 -- # local d=1
00:14:00.087 23:34:48 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:14:00.087 23:34:48 ublk_recovery -- scripts/common.sh@355 -- # echo 1
00:14:00.087 23:34:48 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1
00:14:00.087 23:34:48 ublk_recovery -- scripts/common.sh@366 -- # decimal 2
00:14:00.087 23:34:48 ublk_recovery -- scripts/common.sh@353 -- # local d=2
00:14:00.087 23:34:48 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:14:00.087 23:34:48 ublk_recovery -- scripts/common.sh@355 -- # echo 2
00:14:00.087 23:34:48 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2
00:14:00.087 23:34:48 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:14:00.087 23:34:48 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:14:00.087 23:34:48 ublk_recovery -- scripts/common.sh@368 -- # return 0
00:14:00.087 23:34:48 ublk_recovery -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:14:00.087 23:34:48 ublk_recovery -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:14:00.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:00.087 --rc genhtml_branch_coverage=1
00:14:00.087 --rc genhtml_function_coverage=1
00:14:00.087 --rc genhtml_legend=1
00:14:00.087 --rc geninfo_all_blocks=1
00:14:00.087 --rc geninfo_unexecuted_blocks=1
00:14:00.087
00:14:00.087 '
00:14:00.087 23:34:48 ublk_recovery -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:14:00.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:00.087 --rc genhtml_branch_coverage=1
00:14:00.087 --rc genhtml_function_coverage=1
00:14:00.087 --rc genhtml_legend=1
00:14:00.087 --rc geninfo_all_blocks=1
00:14:00.087 --rc geninfo_unexecuted_blocks=1
00:14:00.087
00:14:00.087 '
00:14:00.087 23:34:48 ublk_recovery -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov
00:14:00.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:00.087 --rc genhtml_branch_coverage=1
00:14:00.087 --rc genhtml_function_coverage=1
00:14:00.087 --rc genhtml_legend=1
00:14:00.087 --rc geninfo_all_blocks=1
00:14:00.087 --rc geninfo_unexecuted_blocks=1
00:14:00.087
00:14:00.087 '
00:14:00.087 23:34:48 ublk_recovery -- common/autotest_common.sh@1695 -- # LCOV='lcov
00:14:00.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:00.088 --rc genhtml_branch_coverage=1
00:14:00.088 --rc genhtml_function_coverage=1
00:14:00.088 --rc genhtml_legend=1
00:14:00.088 --rc geninfo_all_blocks=1
00:14:00.088 --rc geninfo_unexecuted_blocks=1
00:14:00.088
00:14:00.088 '
00:14:00.088 23:34:48 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh
00:14:00.088 23:34:48 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128
00:14:00.088 23:34:48 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512
00:14:00.088 23:34:48 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400
00:14:00.088 23:34:48 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096
00:14:00.088 23:34:48 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4
00:14:00.088 23:34:48 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304
00:14:00.088 23:34:48 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124
00:14:00.088 23:34:48 ublk_recovery -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424
00:14:00.088 23:34:48 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv
00:14:00.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:14:00.088 23:34:48 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=71799
00:14:00.088 23:34:48 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:14:00.088 23:34:48 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 71799
00:14:00.088 23:34:48 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk
00:14:00.088 23:34:48 ublk_recovery -- common/autotest_common.sh@831 -- # '[' -z 71799 ']'
00:14:00.088 23:34:48 ublk_recovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:14:00.088 23:34:48 ublk_recovery -- common/autotest_common.sh@836 -- # local max_retries=100
00:14:00.088 23:34:48 ublk_recovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:14:00.088 23:34:48 ublk_recovery -- common/autotest_common.sh@840 -- # xtrace_disable
00:14:00.088 23:34:48 ublk_recovery -- common/autotest_common.sh@10 -- # set +x
00:14:00.088 [2024-09-28 23:34:48.238702] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization...
00:14:00.088 [2024-09-28 23:34:48.238825] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71799 ]
00:14:00.345 [2024-09-28 23:34:48.383377] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2
00:14:00.603 [2024-09-28 23:34:48.525899] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:14:00.603 [2024-09-28 23:34:48.525994] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:14:01.168 23:34:49 ublk_recovery -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:14:01.168 23:34:49 ublk_recovery -- common/autotest_common.sh@864 -- # return 0
00:14:01.168 23:34:49 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target
00:14:01.168 23:34:49 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:01.168 23:34:49 ublk_recovery -- common/autotest_common.sh@10 -- # set +x
00:14:01.168 [2024-09-28 23:34:49.072525] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled
00:14:01.168 [2024-09-28 23:34:49.073716] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully
00:14:01.168 23:34:49 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:01.168 23:34:49 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096
00:14:01.168 23:34:49 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:01.168 23:34:49 ublk_recovery -- common/autotest_common.sh@10 -- # set +x
00:14:01.168 malloc0
00:14:01.168 23:34:49 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:01.168 23:34:49 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128
00:14:01.168 23:34:49 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:01.168 23:34:49 ublk_recovery -- common/autotest_common.sh@10 -- # set +x
00:14:01.168 [2024-09-28 23:34:49.152630] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 2 queue_depth 128
00:14:01.168 [2024-09-28 23:34:49.152708] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1
00:14:01.168 [2024-09-28 23:34:49.152717] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq
00:14:01.168 [2024-09-28 23:34:49.152722] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV
00:14:01.168 [2024-09-28 23:34:49.161600] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed
00:14:01.168 [2024-09-28 23:34:49.161619] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS
00:14:01.168 [2024-09-28 23:34:49.168530] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed
00:14:01.168 [2024-09-28 23:34:49.168638] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV
00:14:01.168 [2024-09-28 23:34:49.191531] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed
00:14:01.168 1
00:14:01.168 23:34:49 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:01.168 23:34:49 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1
00:14:02.104 23:34:50 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=71834
00:14:02.104 23:34:50 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5
00:14:02.104 23:34:50 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60
00:14:02.365 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:14:02.365 fio-3.35
00:14:02.365 Starting 1 process
00:14:07.654 23:34:55 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 71799
00:14:07.654 23:34:55 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5
00:14:12.972 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 71799 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk
00:14:12.972 23:35:00 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=71945
00:14:12.972 23:35:00 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk
00:14:12.972 23:35:00 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:14:12.972 23:35:00 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 71945
00:14:12.972 23:35:00 ublk_recovery -- common/autotest_common.sh@831 -- # '[' -z 71945 ']'
00:14:12.972 23:35:00 ublk_recovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:14:12.972 23:35:00 ublk_recovery -- common/autotest_common.sh@836 -- # local max_retries=100
00:14:12.972 23:35:00 ublk_recovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:14:12.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:14:12.973 23:35:00 ublk_recovery -- common/autotest_common.sh@840 -- # xtrace_disable
00:14:12.973 23:35:00 ublk_recovery -- common/autotest_common.sh@10 -- # set +x
00:14:12.973 [2024-09-28 23:35:00.303557] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization...
00:14:12.973 [2024-09-28 23:35:00.303976] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71945 ]
00:14:12.973 [2024-09-28 23:35:00.453072] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2
00:14:12.973 [2024-09-28 23:35:00.691993] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:14:12.973 [2024-09-28 23:35:00.692079] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:14:13.238 23:35:01 ublk_recovery -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:14:13.238 23:35:01 ublk_recovery -- common/autotest_common.sh@864 -- # return 0
00:14:13.238 23:35:01 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target
00:14:13.238 23:35:01 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:13.238 23:35:01 ublk_recovery -- common/autotest_common.sh@10 -- # set +x
00:14:13.238 [2024-09-28 23:35:01.301528] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled
00:14:13.238 [2024-09-28 23:35:01.303062] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully
00:14:13.238 23:35:01 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:13.238 23:35:01 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096
00:14:13.238 23:35:01 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:13.238 23:35:01 ublk_recovery -- common/autotest_common.sh@10 -- # set +x
00:14:13.238 malloc0
00:14:13.238 23:35:01 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:13.238 23:35:01 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1
00:14:13.238 23:35:01 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:13.238 23:35:01 ublk_recovery -- common/autotest_common.sh@10 -- # set +x
00:14:13.499 [2024-09-28 23:35:01.405652] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0
00:14:13.499 [2024-09-28 23:35:01.405692] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq
00:14:13.499 [2024-09-28 23:35:01.405701] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO
00:14:13.499 [2024-09-28 23:35:01.413567] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed
00:14:13.499 [2024-09-28 23:35:01.413589] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 2
00:14:13.499 [2024-09-28 23:35:01.413598] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda
00:14:13.499 [2024-09-28 23:35:01.413673] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY
00:14:13.499 1
00:14:13.499 23:35:01 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:13.499 23:35:01 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 71834
00:14:13.499 [2024-09-28 23:35:01.421532] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed
00:14:13.499 [2024-09-28 23:35:01.424132] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY
00:14:13.499 [2024-09-28 23:35:01.429711] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed
00:14:13.499 [2024-09-28 23:35:01.429731] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully
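The recovery scenario just completed can be read as the following sequence: run fio against the ublk device, kill the target mid-I/O, restart it, and re-attach the existing kernel device with ublk_recover_disk. A condensed sketch, with SPDK_BIN_DIR and the fio job taken from the log and pid handling illustrative:

"$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk & spdk_pid=$!
scripts/rpc.py ublk_create_target
scripts/rpc.py bdev_malloc_create -b malloc0 64 4096
scripts/rpc.py ublk_start_disk malloc0 1 -q 2 -d 128    # /dev/ublkb1
fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 \
    --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 & fio_pid=$!
sleep 5
kill -9 "$spdk_pid"                                     # crash the target mid-I/O
"$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk & spdk_pid=$!   # bring up a fresh target
scripts/rpc.py ublk_create_target
scripts/rpc.py bdev_malloc_create -b malloc0 64 4096
scripts/rpc.py ublk_recover_disk malloc0 1              # re-attach the surviving /dev/ublkb1
wait "$fio_pid"                                         # fio rides out the crash and finishes cleanly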
00:15:09.760
00:15:09.760 fio_test: (groupid=0, jobs=1): err= 0: pid=71837: Sat Sep 28 23:35:50 2024
00:15:09.760 read: IOPS=28.4k, BW=111MiB/s (116MB/s)(6656MiB/60001msec)
00:15:09.760 slat (nsec): min=920, max=973006, avg=4833.37, stdev=1624.32
00:15:09.760 clat (usec): min=879, max=6232.3k, avg=2208.01, stdev=37879.81
00:15:09.760 lat (usec): min=881, max=6232.3k, avg=2212.84, stdev=37879.81
00:15:09.760 clat percentiles (usec):
00:15:09.760 | 1.00th=[ 1631], 5.00th=[ 1745], 10.00th=[ 1778], 20.00th=[ 1795],
00:15:09.760 | 30.00th=[ 1811], 40.00th=[ 1827], 50.00th=[ 1844], 60.00th=[ 1860],
00:15:09.760 | 70.00th=[ 1876], 80.00th=[ 1893], 90.00th=[ 1991], 95.00th=[ 2835],
00:15:09.760 | 99.00th=[ 4752], 99.50th=[ 5407], 99.90th=[ 6652], 99.95th=[ 7177],
00:15:09.760 | 99.99th=[12780]
00:15:09.760 bw ( KiB/s): min=48152, max=132968, per=100.00%, avg=126288.30, stdev=12874.57, samples=107
00:15:09.760 iops : min=12038, max=33242, avg=31572.07, stdev=3218.64, samples=107
00:15:09.760 write: IOPS=28.4k, BW=111MiB/s (116MB/s)(6651MiB/60001msec); 0 zone resets
00:15:09.760 slat (nsec): min=925, max=259830, avg=4866.89, stdev=1506.98
00:15:09.760 clat (usec): min=904, max=6232.2k, avg=2290.50, stdev=38492.10
00:15:09.760 lat (usec): min=911, max=6232.2k, avg=2295.37, stdev=38492.10
00:15:09.760 clat percentiles (usec):
00:15:09.760 | 1.00th=[ 1663], 5.00th=[ 1827], 10.00th=[ 1860], 20.00th=[ 1893],
00:15:09.760 | 30.00th=[ 1909], 40.00th=[ 1926], 50.00th=[ 1926], 60.00th=[ 1942],
00:15:09.760 | 70.00th=[ 1958], 80.00th=[ 1991], 90.00th=[ 2057], 95.00th=[ 2835],
00:15:09.760 | 99.00th=[ 4752], 99.50th=[ 5538], 99.90th=[ 6587], 99.95th=[ 7242],
00:15:09.760 | 99.99th=[12649]
00:15:09.760 bw ( KiB/s): min=48504, max=132072, per=100.00%, avg=126196.86, stdev=12869.65, samples=107
00:15:09.760 iops : min=12126, max=33018, avg=31549.21, stdev=3217.41, samples=107
00:15:09.760 lat (usec) : 1000=0.01%
00:15:09.760 lat (msec) : 2=87.29%, 4=10.40%, 10=2.30%, 20=0.01%, >=2000=0.01%
00:15:09.760 cpu : usr=6.26%, sys=28.29%, ctx=115518, majf=0, minf=13
00:15:09.760 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
00:15:09.760 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:15:09.760 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:15:09.760 issued rwts: total=1703911,1702549,0,0 short=0,0,0,0 dropped=0,0,0,0
00:15:09.760 latency : target=0, window=0, percentile=100.00%, depth=128
00:15:09.760
00:15:09.760 Run status group 0 (all jobs):
00:15:09.760 READ: bw=111MiB/s (116MB/s), 111MiB/s-111MiB/s (116MB/s-116MB/s), io=6656MiB (6979MB), run=60001-60001msec
00:15:09.760 WRITE: bw=111MiB/s (116MB/s), 111MiB/s-111MiB/s (116MB/s-116MB/s), io=6651MiB (6974MB), run=60001-60001msec
00:15:09.760
00:15:09.760 Disk stats (read/write):
00:15:09.760 ublkb1: ios=1700491/1699097, merge=0/0, ticks=3670457/3672827, in_queue=7343285, util=99.90%
00:15:09.760 23:35:50 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1
00:15:09.760 23:35:50 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:09.760 23:35:50 ublk_recovery -- common/autotest_common.sh@10 -- # set +x
00:15:09.760 [2024-09-28 23:35:50.452070] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV
00:15:09.760 [2024-09-28 23:35:50.491545] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed
00:15:09.760 [2024-09-28 23:35:50.491767] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV
00:15:09.760 [2024-09-28 23:35:50.499537] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed
00:15:09.760 [2024-09-28 23:35:50.499620] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq
00:15:09.760 [2024-09-28 23:35:50.499627] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped
00:15:09.760 23:35:50 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:09.760 23:35:50 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target
00:15:09.760 23:35:50 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:09.760 23:35:50 ublk_recovery -- common/autotest_common.sh@10 -- # set +x
00:15:09.760 [2024-09-28 23:35:50.515599] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown
00:15:09.760 [2024-09-28 23:35:50.517402] ublk.c: 766:_ublk_fini_done: *DEBUG*:
00:15:09.760 [2024-09-28 23:35:50.517429] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed
00:15:09.760 23:35:50 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:09.760 23:35:50 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT
00:15:09.760 23:35:50 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup
00:15:09.760 23:35:50 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 71945
00:15:09.760 23:35:50 ublk_recovery -- common/autotest_common.sh@950 -- # '[' -z 71945 ']'
00:15:09.760 23:35:50 ublk_recovery -- common/autotest_common.sh@954 -- # kill -0 71945
00:15:09.760 23:35:50 ublk_recovery -- common/autotest_common.sh@955 -- # uname
00:15:09.760 23:35:50 ublk_recovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:15:09.760 23:35:50 ublk_recovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71945
00:15:09.760 killing process with pid 71945
00:15:09.760 23:35:50 ublk_recovery -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:15:09.760 23:35:50 ublk_recovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:15:09.760 23:35:50 ublk_recovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71945'
00:15:09.760 23:35:50 ublk_recovery -- common/autotest_common.sh@969 -- # kill 71945
00:15:09.760 23:35:50 ublk_recovery -- common/autotest_common.sh@974 -- # wait 71945
00:15:09.760 [2024-09-28 23:35:51.579097] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown
00:15:09.760 [2024-09-28 23:35:51.579141] ublk.c: 766:_ublk_fini_done: *DEBUG*:
00:15:09.760 ************************************
00:15:09.760 END TEST ublk_recovery
00:15:09.760 ************************************
00:15:09.760
00:15:09.760 real 1m4.346s
00:15:09.760 user 1m42.045s
00:15:09.760 sys 0m36.380s
00:15:09.760 23:35:52 ublk_recovery -- common/autotest_common.sh@1126 -- # xtrace_disable
00:15:09.760 23:35:52 ublk_recovery -- common/autotest_common.sh@10 -- # set +x
00:15:09.760 23:35:52 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']'
00:15:09.760 23:35:52 -- spdk/autotest.sh@256 -- # timing_exit lib
00:15:09.760 23:35:52 -- common/autotest_common.sh@730 -- # xtrace_disable
00:15:09.760 23:35:52 -- common/autotest_common.sh@10 -- # set +x
00:15:09.760 23:35:52 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']'
00:15:09.760 23:35:52 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']'
00:15:09.760 23:35:52 -- spdk/autotest.sh@272 -- # '[' 0 -eq 1 ']'
00:15:09.761 23:35:52 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']'
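killprocess, whose xtrace appears above, is autotest_common.sh's guarded kill; approximately (a sketch of the visible logic, not the exact helper):

killprocess() {
  local pid=$1
  [ -n "$pid" ] || return 1
  kill -0 "$pid" || return 1                         # is the process still alive?
  local process_name
  if [ "$(uname)" = Linux ]; then
    process_name=$(ps --no-headers -o comm= "$pid")  # e.g. reactor_0 for spdk_tgt
  fi
  [ "$process_name" = sudo ] && return 1             # never signal a sudo wrapper directly
  echo "killing process with pid $pid"
  kill "$pid"
  wait "$pid"
}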
00:15:09.761 23:35:52 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']'
00:15:09.761 23:35:52 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']'
00:15:09.761 23:35:52 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']'
00:15:09.761 23:35:52 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']'
00:15:09.761 23:35:52 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']'
00:15:09.761 23:35:52 -- spdk/autotest.sh@338 -- # '[' 1 -eq 1 ']'
00:15:09.761 23:35:52 -- spdk/autotest.sh@339 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh
00:15:09.761 23:35:52 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:15:09.761 23:35:52 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:15:09.761 23:35:52 -- common/autotest_common.sh@10 -- # set +x
00:15:09.761 ************************************
00:15:09.761 START TEST ftl
00:15:09.761 ************************************
00:15:09.761 23:35:52 ftl -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh
00:15:09.761 * Looking for test storage...
00:15:09.761 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl
00:15:09.761 23:35:52 ftl -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:15:09.761 23:35:52 ftl -- common/autotest_common.sh@1681 -- # lcov --version
00:15:09.761 23:35:52 ftl -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:15:09.761 23:35:52 ftl -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:15:09.761 23:35:52 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:15:09.761 23:35:52 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l
00:15:09.761 23:35:52 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l
00:15:09.761 23:35:52 ftl -- scripts/common.sh@336 -- # IFS=.-:
00:15:09.761 23:35:52 ftl -- scripts/common.sh@336 -- # read -ra ver1
00:15:09.761 23:35:52 ftl -- scripts/common.sh@337 -- # IFS=.-:
00:15:09.761 23:35:52 ftl -- scripts/common.sh@337 -- # read -ra ver2
00:15:09.761 23:35:52 ftl -- scripts/common.sh@338 -- # local 'op=<'
00:15:09.761 23:35:52 ftl -- scripts/common.sh@340 -- # ver1_l=2
00:15:09.761 23:35:52 ftl -- scripts/common.sh@341 -- # ver2_l=1
00:15:09.761 23:35:52 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:15:09.761 23:35:52 ftl -- scripts/common.sh@344 -- # case "$op" in
00:15:09.761 23:35:52 ftl -- scripts/common.sh@345 -- # : 1
00:15:09.761 23:35:52 ftl -- scripts/common.sh@364 -- # (( v = 0 ))
00:15:09.761 23:35:52 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:15:09.761 23:35:52 ftl -- scripts/common.sh@365 -- # decimal 1
00:15:09.761 23:35:52 ftl -- scripts/common.sh@353 -- # local d=1
00:15:09.761 23:35:52 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:15:09.761 23:35:52 ftl -- scripts/common.sh@355 -- # echo 1
00:15:09.761 23:35:52 ftl -- scripts/common.sh@365 -- # ver1[v]=1
00:15:09.761 23:35:52 ftl -- scripts/common.sh@366 -- # decimal 2
00:15:09.761 23:35:52 ftl -- scripts/common.sh@353 -- # local d=2
00:15:09.761 23:35:52 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:15:09.761 23:35:52 ftl -- scripts/common.sh@355 -- # echo 2
00:15:09.761 23:35:52 ftl -- scripts/common.sh@366 -- # ver2[v]=2
00:15:09.761 23:35:52 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:15:09.761 23:35:52 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:15:09.761 23:35:52 ftl -- scripts/common.sh@368 -- # return 0
00:15:09.761 23:35:52 ftl -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:15:09.761 23:35:52 ftl -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:15:09.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:09.761 --rc genhtml_branch_coverage=1
00:15:09.761 --rc genhtml_function_coverage=1
00:15:09.761 --rc genhtml_legend=1
00:15:09.761 --rc geninfo_all_blocks=1
00:15:09.761 --rc geninfo_unexecuted_blocks=1
00:15:09.761
00:15:09.761 '
00:15:09.761 23:35:52 ftl -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:15:09.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:09.761 --rc genhtml_branch_coverage=1
00:15:09.761 --rc genhtml_function_coverage=1
00:15:09.761 --rc genhtml_legend=1
00:15:09.761 --rc geninfo_all_blocks=1
00:15:09.761 --rc geninfo_unexecuted_blocks=1
00:15:09.761
00:15:09.761 '
00:15:09.761 23:35:52 ftl -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov
00:15:09.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:09.761 --rc genhtml_branch_coverage=1
00:15:09.761 --rc genhtml_function_coverage=1
00:15:09.761 --rc genhtml_legend=1
00:15:09.761 --rc geninfo_all_blocks=1
00:15:09.761 --rc geninfo_unexecuted_blocks=1
00:15:09.761
00:15:09.761 '
00:15:09.761 23:35:52 ftl -- common/autotest_common.sh@1695 -- # LCOV='lcov
00:15:09.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:09.761 --rc genhtml_branch_coverage=1
00:15:09.761 --rc genhtml_function_coverage=1
00:15:09.761 --rc genhtml_legend=1
00:15:09.761 --rc geninfo_all_blocks=1
00:15:09.761 --rc geninfo_unexecuted_blocks=1
00:15:09.761
00:15:09.761 '
00:15:09.761 23:35:52 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh
00:15:09.761 23:35:52 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh
00:15:09.761 23:35:52 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl
00:15:09.761 23:35:52 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl
00:15:09.761 23:35:52 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../..
00:15:09.761 23:35:52 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk
00:15:09.761 23:35:52 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:15:09.761 23:35:52 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]'
00:15:09.761 23:35:52 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]'
00:15:09.761 23:35:52 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:15:09.761 23:35:52 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:15:09.761 23:35:52 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]'
00:15:09.761 23:35:52 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]'
00:15:09.761 23:35:52 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:15:09.761 23:35:52 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:15:09.761 23:35:52 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid=
00:15:09.761 23:35:52 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid=
00:15:09.761 23:35:52 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:15:09.761 23:35:52 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:15:09.761 23:35:52 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]'
00:15:09.761 23:35:52 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]'
00:15:09.761 23:35:52 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:15:09.761 23:35:52 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:15:09.761 23:35:52 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:15:09.761 23:35:52 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:15:09.761 23:35:52 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid=
00:15:09.761 23:35:52 ftl -- ftl/common.sh@23 -- # spdk_ini_pid=
00:15:09.761 23:35:52 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:15:09.761 23:35:52 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:15:09.761 23:35:52 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:15:09.761 23:35:52 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT
00:15:09.761 23:35:52 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED=
00:15:09.761 23:35:52 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED=
00:15:09.761 23:35:52 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE=
00:15:09.761 23:35:52 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver
0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver
23:35:53 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=72750
23:35:53 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc
23:35:53 ftl -- ftl/ftl.sh@38 -- # waitforlisten 72750
23:35:53 ftl -- common/autotest_common.sh@831 -- # '[' -z 72750 ']'
23:35:53 ftl --
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:09.761 23:35:53 ftl -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:09.761 23:35:53 ftl -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:09.761 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:09.761 23:35:53 ftl -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:09.761 23:35:53 ftl -- common/autotest_common.sh@10 -- # set +x 00:15:09.761 [2024-09-28 23:35:53.123820] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:15:09.761 [2024-09-28 23:35:53.124105] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72750 ] 00:15:09.761 [2024-09-28 23:35:53.265989] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:09.761 [2024-09-28 23:35:53.442264] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:09.761 23:35:53 ftl -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:09.761 23:35:53 ftl -- common/autotest_common.sh@864 -- # return 0 00:15:09.761 23:35:53 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:15:09.761 23:35:54 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:15:09.761 23:35:54 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:15:09.761 23:35:54 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:15:09.761 23:35:55 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:15:09.761 23:35:55 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:15:09.761 23:35:55 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:15:09.761 23:35:55 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:15:09.761 23:35:55 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:15:09.762 23:35:55 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:15:09.762 23:35:55 ftl -- ftl/ftl.sh@50 -- # break 00:15:09.762 23:35:55 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:15:09.762 23:35:55 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:15:09.762 23:35:55 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:15:09.762 23:35:55 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:15:09.762 23:35:55 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:15:09.762 23:35:55 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:15:09.762 23:35:55 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:15:09.762 23:35:55 ftl -- ftl/ftl.sh@63 -- # break 00:15:09.762 23:35:55 ftl -- ftl/ftl.sh@66 -- # killprocess 72750 00:15:09.762 23:35:55 ftl -- common/autotest_common.sh@950 -- # '[' -z 72750 ']' 00:15:09.762 23:35:55 ftl -- common/autotest_common.sh@954 -- # kill -0 72750 00:15:09.762 23:35:55 ftl -- common/autotest_common.sh@955 -- # uname 00:15:09.762 23:35:55 ftl -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:09.762 23:35:55 ftl -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72750 00:15:09.762 killing process with pid 72750 00:15:09.762 23:35:55 ftl -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:09.762 23:35:55 ftl -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:09.762 23:35:55 ftl -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72750' 00:15:09.762 23:35:55 ftl -- common/autotest_common.sh@969 -- # kill 72750 00:15:09.762 23:35:55 ftl -- common/autotest_common.sh@974 -- # wait 72750 00:15:09.762 23:35:56 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:15:09.762 23:35:56 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:15:09.762 23:35:56 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:15:09.762 23:35:56 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:09.762 23:35:56 ftl -- common/autotest_common.sh@10 -- # set +x 00:15:09.762 ************************************ 00:15:09.762 START TEST ftl_fio_basic 00:15:09.762 ************************************ 00:15:09.762 23:35:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:15:09.762 * Looking for test storage... 00:15:09.762 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:15:09.762 23:35:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:09.762 23:35:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1681 -- # lcov --version 00:15:09.762 23:35:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:09.762 23:35:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:09.762 23:35:57 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:09.762 23:35:57 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:09.762 23:35:57 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:09.762 23:35:57 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:15:09.762 23:35:57 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:15:09.762 23:35:57 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:15:09.762 23:35:57 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:15:09.762 23:35:57 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:15:09.762 23:35:57 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:15:09.762 23:35:57 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:15:09.762 23:35:57 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:09.762 23:35:57 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:15:09.762 23:35:57 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:15:09.762 23:35:57 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:09.762 23:35:57 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:09.762 23:35:57 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:15:09.762 23:35:57 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:15:09.762 23:35:57 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:09.762 23:35:57 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:15:09.762 23:35:57 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:15:09.762 23:35:57 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:15:09.762 23:35:57 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:15:09.762 23:35:57 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:09.762 23:35:57 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:15:09.762 23:35:57 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:15:09.762 23:35:57 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:09.762 23:35:57 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:09.762 23:35:57 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:15:09.762 23:35:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:09.762 23:35:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:09.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:09.762 --rc genhtml_branch_coverage=1 00:15:09.762 --rc genhtml_function_coverage=1 00:15:09.762 --rc genhtml_legend=1 00:15:09.762 --rc geninfo_all_blocks=1 00:15:09.762 --rc geninfo_unexecuted_blocks=1 00:15:09.762 00:15:09.762 ' 00:15:09.762 23:35:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:09.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:09.762 --rc genhtml_branch_coverage=1 00:15:09.762 --rc genhtml_function_coverage=1 00:15:09.762 --rc genhtml_legend=1 00:15:09.762 --rc geninfo_all_blocks=1 00:15:09.762 --rc geninfo_unexecuted_blocks=1 00:15:09.762 00:15:09.762 ' 00:15:09.762 23:35:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:09.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:09.762 --rc genhtml_branch_coverage=1 00:15:09.762 --rc genhtml_function_coverage=1 00:15:09.762 --rc genhtml_legend=1 00:15:09.762 --rc geninfo_all_blocks=1 00:15:09.762 --rc geninfo_unexecuted_blocks=1 00:15:09.762 00:15:09.762 ' 00:15:09.762 23:35:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:09.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:09.762 --rc genhtml_branch_coverage=1 00:15:09.762 --rc genhtml_function_coverage=1 00:15:09.762 --rc genhtml_legend=1 00:15:09.762 --rc geninfo_all_blocks=1 00:15:09.762 --rc geninfo_unexecuted_blocks=1 00:15:09.762 00:15:09.762 ' 00:15:09.762 23:35:57 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:15:09.762 23:35:57 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:15:09.762 23:35:57 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:15:09.762 23:35:57 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:15:09.762 23:35:57 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:15:09.762 23:35:57 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:09.762 23:35:57 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:09.762 23:35:57 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:15:09.762 23:35:57 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:15:09.762 23:35:57 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:09.762 23:35:57 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:09.762 23:35:57 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:15:09.762 23:35:57 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:15:09.762 23:35:57 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:09.762 23:35:57 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:09.762 23:35:57 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:15:09.762 23:35:57 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:15:09.762 23:35:57 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:09.762 23:35:57 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:09.762 23:35:57 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:15:09.762 23:35:57 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:15:09.762 23:35:57 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:09.762 23:35:57 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:09.762 23:35:57 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:09.762 23:35:57 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:09.762 23:35:57 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:15:09.762 23:35:57 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:15:09.762 23:35:57 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:09.762 23:35:57 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:09.762 23:35:57 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:15:09.762 23:35:57 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:15:09.762 23:35:57 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:15:09.762 23:35:57 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:15:09.762 23:35:57 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:09.762 23:35:57 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:15:09.762 23:35:57 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:15:09.762 23:35:57 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 
randw-verify-depth128' 00:15:09.762 23:35:57 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:15:09.762 23:35:57 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:15:09.762 23:35:57 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:15:09.762 23:35:57 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:15:09.762 23:35:57 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:15:09.762 23:35:57 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:15:09.762 23:35:57 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:09.763 23:35:57 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:09.763 23:35:57 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:15:09.763 23:35:57 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=72882 00:15:09.763 23:35:57 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:15:09.763 23:35:57 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 72882 00:15:09.763 23:35:57 ftl.ftl_fio_basic -- common/autotest_common.sh@831 -- # '[' -z 72882 ']' 00:15:09.763 23:35:57 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:09.763 23:35:57 ftl.ftl_fio_basic -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:09.763 23:35:57 ftl.ftl_fio_basic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:09.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:09.763 23:35:57 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:09.763 23:35:57 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:09.763 [2024-09-28 23:35:57.184052] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
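waitforlisten above blocks fio.sh until the freshly launched spdk_tgt (pid 72882) answers RPC on /var/tmp/spdk.sock; the trace shows its max_retries=100 and rpc_addr locals. A reduced sketch of that polling loop, using only rpc.py calls visible in this run (the real helper in common/autotest_common.sh does more than this):

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Poll the target's RPC socket until it answers, mirroring the
    # max_retries=100 / /var/tmp/spdk.sock values from the trace.
    wait_for_rpc() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        for (( i = 0; i < 100; i++ )); do
            kill -0 "$pid" 2>/dev/null || return 1   # target died during startup
            "$rpc_py" -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
            sleep 0.5
        done
        return 1
    }

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 &
    wait_for_rpc $! || exit 1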
00:15:09.763 [2024-09-28 23:35:57.184364] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72882 ] 00:15:09.763 [2024-09-28 23:35:57.335931] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:09.763 [2024-09-28 23:35:57.474441] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:15:09.763 [2024-09-28 23:35:57.474681] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:09.763 [2024-09-28 23:35:57.474642] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:15:10.020 23:35:57 ftl.ftl_fio_basic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:10.020 23:35:57 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # return 0 00:15:10.020 23:35:57 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:15:10.020 23:35:57 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:15:10.021 23:35:57 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:15:10.021 23:35:57 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:15:10.021 23:35:57 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:15:10.021 23:35:57 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:15:10.278 23:35:58 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:15:10.278 23:35:58 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:15:10.278 23:35:58 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:15:10.278 23:35:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:15:10.278 23:35:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:15:10.278 23:35:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:15:10.278 23:35:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:15:10.278 23:35:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:15:10.278 23:35:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:15:10.278 { 00:15:10.278 "name": "nvme0n1", 00:15:10.278 "aliases": [ 00:15:10.278 "195158ce-d055-4c32-ab92-73bbeb897106" 00:15:10.278 ], 00:15:10.278 "product_name": "NVMe disk", 00:15:10.278 "block_size": 4096, 00:15:10.278 "num_blocks": 1310720, 00:15:10.278 "uuid": "195158ce-d055-4c32-ab92-73bbeb897106", 00:15:10.278 "numa_id": -1, 00:15:10.278 "assigned_rate_limits": { 00:15:10.278 "rw_ios_per_sec": 0, 00:15:10.278 "rw_mbytes_per_sec": 0, 00:15:10.278 "r_mbytes_per_sec": 0, 00:15:10.278 "w_mbytes_per_sec": 0 00:15:10.278 }, 00:15:10.278 "claimed": false, 00:15:10.278 "zoned": false, 00:15:10.278 "supported_io_types": { 00:15:10.278 "read": true, 00:15:10.278 "write": true, 00:15:10.278 "unmap": true, 00:15:10.278 "flush": true, 00:15:10.279 "reset": true, 00:15:10.279 "nvme_admin": true, 00:15:10.279 "nvme_io": true, 00:15:10.279 "nvme_io_md": false, 00:15:10.279 "write_zeroes": true, 00:15:10.279 "zcopy": false, 00:15:10.279 "get_zone_info": false, 00:15:10.279 "zone_management": false, 00:15:10.279 "zone_append": false, 00:15:10.279 "compare": true, 00:15:10.279 "compare_and_write": false, 00:15:10.279 "abort": true, 00:15:10.279 
"seek_hole": false, 00:15:10.279 "seek_data": false, 00:15:10.279 "copy": true, 00:15:10.279 "nvme_iov_md": false 00:15:10.279 }, 00:15:10.279 "driver_specific": { 00:15:10.279 "nvme": [ 00:15:10.279 { 00:15:10.279 "pci_address": "0000:00:11.0", 00:15:10.279 "trid": { 00:15:10.279 "trtype": "PCIe", 00:15:10.279 "traddr": "0000:00:11.0" 00:15:10.279 }, 00:15:10.279 "ctrlr_data": { 00:15:10.279 "cntlid": 0, 00:15:10.279 "vendor_id": "0x1b36", 00:15:10.279 "model_number": "QEMU NVMe Ctrl", 00:15:10.279 "serial_number": "12341", 00:15:10.279 "firmware_revision": "8.0.0", 00:15:10.279 "subnqn": "nqn.2019-08.org.qemu:12341", 00:15:10.279 "oacs": { 00:15:10.279 "security": 0, 00:15:10.279 "format": 1, 00:15:10.279 "firmware": 0, 00:15:10.279 "ns_manage": 1 00:15:10.279 }, 00:15:10.279 "multi_ctrlr": false, 00:15:10.279 "ana_reporting": false 00:15:10.279 }, 00:15:10.279 "vs": { 00:15:10.279 "nvme_version": "1.4" 00:15:10.279 }, 00:15:10.279 "ns_data": { 00:15:10.279 "id": 1, 00:15:10.279 "can_share": false 00:15:10.279 } 00:15:10.279 } 00:15:10.279 ], 00:15:10.279 "mp_policy": "active_passive" 00:15:10.279 } 00:15:10.279 } 00:15:10.279 ]' 00:15:10.279 23:35:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:15:10.538 23:35:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:15:10.538 23:35:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:15:10.538 23:35:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=1310720 00:15:10.538 23:35:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:15:10.538 23:35:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 5120 00:15:10.538 23:35:58 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:15:10.538 23:35:58 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:15:10.538 23:35:58 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:15:10.538 23:35:58 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:15:10.538 23:35:58 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:15:10.538 23:35:58 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:15:10.538 23:35:58 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:15:10.798 23:35:58 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=35cac374-e891-4857-a608-1ca5cec34c37 00:15:10.798 23:35:58 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 35cac374-e891-4857-a608-1ca5cec34c37 00:15:11.059 23:35:59 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=a29b946e-20f5-4fe1-bb21-aa81a961f09d 00:15:11.059 23:35:59 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 a29b946e-20f5-4fe1-bb21-aa81a961f09d 00:15:11.059 23:35:59 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:15:11.059 23:35:59 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:15:11.059 23:35:59 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=a29b946e-20f5-4fe1-bb21-aa81a961f09d 00:15:11.059 23:35:59 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:15:11.059 23:35:59 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size a29b946e-20f5-4fe1-bb21-aa81a961f09d 00:15:11.059 23:35:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=a29b946e-20f5-4fe1-bb21-aa81a961f09d 
00:15:11.059 23:35:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:15:11.059 23:35:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:15:11.059 23:35:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:15:11.059 23:35:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a29b946e-20f5-4fe1-bb21-aa81a961f09d 00:15:11.320 23:35:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:15:11.320 { 00:15:11.320 "name": "a29b946e-20f5-4fe1-bb21-aa81a961f09d", 00:15:11.320 "aliases": [ 00:15:11.320 "lvs/nvme0n1p0" 00:15:11.320 ], 00:15:11.320 "product_name": "Logical Volume", 00:15:11.320 "block_size": 4096, 00:15:11.320 "num_blocks": 26476544, 00:15:11.320 "uuid": "a29b946e-20f5-4fe1-bb21-aa81a961f09d", 00:15:11.320 "assigned_rate_limits": { 00:15:11.320 "rw_ios_per_sec": 0, 00:15:11.320 "rw_mbytes_per_sec": 0, 00:15:11.320 "r_mbytes_per_sec": 0, 00:15:11.320 "w_mbytes_per_sec": 0 00:15:11.320 }, 00:15:11.320 "claimed": false, 00:15:11.320 "zoned": false, 00:15:11.320 "supported_io_types": { 00:15:11.320 "read": true, 00:15:11.320 "write": true, 00:15:11.320 "unmap": true, 00:15:11.320 "flush": false, 00:15:11.320 "reset": true, 00:15:11.320 "nvme_admin": false, 00:15:11.320 "nvme_io": false, 00:15:11.320 "nvme_io_md": false, 00:15:11.320 "write_zeroes": true, 00:15:11.320 "zcopy": false, 00:15:11.320 "get_zone_info": false, 00:15:11.320 "zone_management": false, 00:15:11.320 "zone_append": false, 00:15:11.320 "compare": false, 00:15:11.320 "compare_and_write": false, 00:15:11.320 "abort": false, 00:15:11.320 "seek_hole": true, 00:15:11.320 "seek_data": true, 00:15:11.320 "copy": false, 00:15:11.320 "nvme_iov_md": false 00:15:11.320 }, 00:15:11.320 "driver_specific": { 00:15:11.320 "lvol": { 00:15:11.320 "lvol_store_uuid": "35cac374-e891-4857-a608-1ca5cec34c37", 00:15:11.321 "base_bdev": "nvme0n1", 00:15:11.321 "thin_provision": true, 00:15:11.321 "num_allocated_clusters": 0, 00:15:11.321 "snapshot": false, 00:15:11.321 "clone": false, 00:15:11.321 "esnap_clone": false 00:15:11.321 } 00:15:11.321 } 00:15:11.321 } 00:15:11.321 ]' 00:15:11.321 23:35:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:15:11.321 23:35:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:15:11.321 23:35:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:15:11.321 23:35:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:15:11.321 23:35:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:15:11.321 23:35:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:15:11.321 23:35:59 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:15:11.321 23:35:59 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:15:11.321 23:35:59 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:15:11.581 23:35:59 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:15:11.581 23:35:59 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:15:11.581 23:35:59 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size a29b946e-20f5-4fe1-bb21-aa81a961f09d 00:15:11.581 23:35:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=a29b946e-20f5-4fe1-bb21-aa81a961f09d 00:15:11.581 23:35:59 
ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:15:11.581 23:35:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:15:11.581 23:35:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:15:11.581 23:35:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a29b946e-20f5-4fe1-bb21-aa81a961f09d 00:15:11.841 23:35:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:15:11.841 { 00:15:11.841 "name": "a29b946e-20f5-4fe1-bb21-aa81a961f09d", 00:15:11.841 "aliases": [ 00:15:11.841 "lvs/nvme0n1p0" 00:15:11.841 ], 00:15:11.841 "product_name": "Logical Volume", 00:15:11.841 "block_size": 4096, 00:15:11.841 "num_blocks": 26476544, 00:15:11.841 "uuid": "a29b946e-20f5-4fe1-bb21-aa81a961f09d", 00:15:11.841 "assigned_rate_limits": { 00:15:11.841 "rw_ios_per_sec": 0, 00:15:11.841 "rw_mbytes_per_sec": 0, 00:15:11.841 "r_mbytes_per_sec": 0, 00:15:11.841 "w_mbytes_per_sec": 0 00:15:11.841 }, 00:15:11.841 "claimed": false, 00:15:11.841 "zoned": false, 00:15:11.841 "supported_io_types": { 00:15:11.841 "read": true, 00:15:11.841 "write": true, 00:15:11.841 "unmap": true, 00:15:11.841 "flush": false, 00:15:11.841 "reset": true, 00:15:11.841 "nvme_admin": false, 00:15:11.841 "nvme_io": false, 00:15:11.841 "nvme_io_md": false, 00:15:11.841 "write_zeroes": true, 00:15:11.841 "zcopy": false, 00:15:11.841 "get_zone_info": false, 00:15:11.841 "zone_management": false, 00:15:11.841 "zone_append": false, 00:15:11.841 "compare": false, 00:15:11.841 "compare_and_write": false, 00:15:11.841 "abort": false, 00:15:11.841 "seek_hole": true, 00:15:11.841 "seek_data": true, 00:15:11.841 "copy": false, 00:15:11.841 "nvme_iov_md": false 00:15:11.841 }, 00:15:11.841 "driver_specific": { 00:15:11.841 "lvol": { 00:15:11.841 "lvol_store_uuid": "35cac374-e891-4857-a608-1ca5cec34c37", 00:15:11.841 "base_bdev": "nvme0n1", 00:15:11.841 "thin_provision": true, 00:15:11.841 "num_allocated_clusters": 0, 00:15:11.841 "snapshot": false, 00:15:11.841 "clone": false, 00:15:11.841 "esnap_clone": false 00:15:11.841 } 00:15:11.841 } 00:15:11.841 } 00:15:11.841 ]' 00:15:11.841 23:35:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:15:11.841 23:35:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:15:11.841 23:35:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:15:11.841 23:35:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:15:11.841 23:35:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:15:11.842 23:35:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:15:11.842 23:35:59 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:15:11.842 23:35:59 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:15:12.101 23:36:00 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:15:12.101 23:36:00 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:15:12.101 23:36:00 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:15:12.101 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:15:12.101 23:36:00 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size a29b946e-20f5-4fe1-bb21-aa81a961f09d 00:15:12.101 23:36:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local 
bdev_name=a29b946e-20f5-4fe1-bb21-aa81a961f09d 00:15:12.101 23:36:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:15:12.101 23:36:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:15:12.101 23:36:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:15:12.101 23:36:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a29b946e-20f5-4fe1-bb21-aa81a961f09d 00:15:12.101 23:36:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:15:12.101 { 00:15:12.101 "name": "a29b946e-20f5-4fe1-bb21-aa81a961f09d", 00:15:12.101 "aliases": [ 00:15:12.101 "lvs/nvme0n1p0" 00:15:12.101 ], 00:15:12.101 "product_name": "Logical Volume", 00:15:12.101 "block_size": 4096, 00:15:12.101 "num_blocks": 26476544, 00:15:12.101 "uuid": "a29b946e-20f5-4fe1-bb21-aa81a961f09d", 00:15:12.101 "assigned_rate_limits": { 00:15:12.101 "rw_ios_per_sec": 0, 00:15:12.101 "rw_mbytes_per_sec": 0, 00:15:12.101 "r_mbytes_per_sec": 0, 00:15:12.101 "w_mbytes_per_sec": 0 00:15:12.101 }, 00:15:12.101 "claimed": false, 00:15:12.101 "zoned": false, 00:15:12.101 "supported_io_types": { 00:15:12.101 "read": true, 00:15:12.101 "write": true, 00:15:12.101 "unmap": true, 00:15:12.101 "flush": false, 00:15:12.101 "reset": true, 00:15:12.101 "nvme_admin": false, 00:15:12.101 "nvme_io": false, 00:15:12.101 "nvme_io_md": false, 00:15:12.101 "write_zeroes": true, 00:15:12.101 "zcopy": false, 00:15:12.101 "get_zone_info": false, 00:15:12.101 "zone_management": false, 00:15:12.101 "zone_append": false, 00:15:12.101 "compare": false, 00:15:12.101 "compare_and_write": false, 00:15:12.101 "abort": false, 00:15:12.101 "seek_hole": true, 00:15:12.101 "seek_data": true, 00:15:12.101 "copy": false, 00:15:12.101 "nvme_iov_md": false 00:15:12.101 }, 00:15:12.101 "driver_specific": { 00:15:12.101 "lvol": { 00:15:12.101 "lvol_store_uuid": "35cac374-e891-4857-a608-1ca5cec34c37", 00:15:12.101 "base_bdev": "nvme0n1", 00:15:12.101 "thin_provision": true, 00:15:12.101 "num_allocated_clusters": 0, 00:15:12.101 "snapshot": false, 00:15:12.101 "clone": false, 00:15:12.101 "esnap_clone": false 00:15:12.101 } 00:15:12.101 } 00:15:12.101 } 00:15:12.101 ]' 00:15:12.101 23:36:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:15:12.361 23:36:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:15:12.361 23:36:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:15:12.361 23:36:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:15:12.361 23:36:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:15:12.361 23:36:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:15:12.361 23:36:00 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:15:12.361 23:36:00 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:15:12.361 23:36:00 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d a29b946e-20f5-4fe1-bb21-aa81a961f09d -c nvc0n1p0 --l2p_dram_limit 60 00:15:12.361 [2024-09-28 23:36:00.504678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:12.361 [2024-09-28 23:36:00.504725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:15:12.361 [2024-09-28 23:36:00.504744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:15:12.361 
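The "[: -eq: unary operator expected" printed a few lines up is worth flagging: fio.sh line 52 ran '[' -eq 1 ']', meaning whichever variable that line tests expanded to an empty string, so test(1) saw -eq as its first operand instead of a number. The run continues past it, as the trace shows, because the failed [ merely returns non-zero. A reproducer plus the usual guards (flag is a stand-in name, not the script's variable):

    unset flag
    [ $flag -eq 1 ]           # [: -eq: unary operator expected  (this run's failure mode)
    [ "${flag:-0}" -eq 1 ]    # guard 1: default the expansion so an operand always exists
    [[ $flag -eq 1 ]]         # guard 2: inside [[ ]] an empty expansion evaluates as 0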
[2024-09-28 23:36:00.504761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:12.361 [2024-09-28 23:36:00.504820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:12.361 [2024-09-28 23:36:00.504831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:12.361 [2024-09-28 23:36:00.504843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:15:12.361 [2024-09-28 23:36:00.504850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:12.361 [2024-09-28 23:36:00.504890] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:15:12.361 [2024-09-28 23:36:00.505631] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:15:12.361 [2024-09-28 23:36:00.505659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:12.361 [2024-09-28 23:36:00.505671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:12.361 [2024-09-28 23:36:00.505686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.780 ms 00:15:12.361 [2024-09-28 23:36:00.505698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:12.361 [2024-09-28 23:36:00.505747] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 6b530286-c748-446d-97f2-aae22a10637f 00:15:12.361 [2024-09-28 23:36:00.506950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:12.361 [2024-09-28 23:36:00.507079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:15:12.361 [2024-09-28 23:36:00.507102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:15:12.361 [2024-09-28 23:36:00.507117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:12.361 [2024-09-28 23:36:00.512263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:12.361 [2024-09-28 23:36:00.512297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:12.361 [2024-09-28 23:36:00.512307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.021 ms 00:15:12.361 [2024-09-28 23:36:00.512318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:12.361 [2024-09-28 23:36:00.512412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:12.361 [2024-09-28 23:36:00.512424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:12.361 [2024-09-28 23:36:00.512433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:15:12.361 [2024-09-28 23:36:00.512445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:12.361 [2024-09-28 23:36:00.512502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:12.361 [2024-09-28 23:36:00.512538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:15:12.361 [2024-09-28 23:36:00.512549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:15:12.361 [2024-09-28 23:36:00.512558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:12.361 [2024-09-28 23:36:00.512587] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:15:12.361 [2024-09-28 23:36:00.516133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:12.361 [2024-09-28 
23:36:00.516164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:12.361 [2024-09-28 23:36:00.516175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.552 ms 00:15:12.361 [2024-09-28 23:36:00.516183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:12.361 [2024-09-28 23:36:00.516218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:12.361 [2024-09-28 23:36:00.516226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:15:12.361 [2024-09-28 23:36:00.516236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:15:12.361 [2024-09-28 23:36:00.516243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:12.361 [2024-09-28 23:36:00.516281] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:15:12.361 [2024-09-28 23:36:00.516427] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:15:12.362 [2024-09-28 23:36:00.516442] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:15:12.362 [2024-09-28 23:36:00.516453] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:15:12.362 [2024-09-28 23:36:00.516464] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:15:12.362 [2024-09-28 23:36:00.516475] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:15:12.362 [2024-09-28 23:36:00.516484] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:15:12.362 [2024-09-28 23:36:00.516492] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:15:12.362 [2024-09-28 23:36:00.516502] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:15:12.362 [2024-09-28 23:36:00.516521] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:15:12.362 [2024-09-28 23:36:00.516531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:12.362 [2024-09-28 23:36:00.516539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:15:12.362 [2024-09-28 23:36:00.516550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.250 ms 00:15:12.362 [2024-09-28 23:36:00.516558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:12.362 [2024-09-28 23:36:00.516652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:12.362 [2024-09-28 23:36:00.516661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:15:12.362 [2024-09-28 23:36:00.516672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:15:12.362 [2024-09-28 23:36:00.516679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:12.362 [2024-09-28 23:36:00.516839] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:15:12.362 [2024-09-28 23:36:00.516857] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:15:12.362 [2024-09-28 23:36:00.516871] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:12.362 [2024-09-28 23:36:00.516883] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:12.362 [2024-09-28 23:36:00.516898] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region l2p 00:15:12.362 [2024-09-28 23:36:00.516910] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:15:12.362 [2024-09-28 23:36:00.516924] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:15:12.362 [2024-09-28 23:36:00.516942] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:15:12.362 [2024-09-28 23:36:00.516956] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:15:12.362 [2024-09-28 23:36:00.516966] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:12.362 [2024-09-28 23:36:00.516978] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:15:12.362 [2024-09-28 23:36:00.516987] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:15:12.362 [2024-09-28 23:36:00.516998] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:12.362 [2024-09-28 23:36:00.517007] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:15:12.362 [2024-09-28 23:36:00.517018] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:15:12.362 [2024-09-28 23:36:00.517027] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:12.362 [2024-09-28 23:36:00.517041] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:15:12.362 [2024-09-28 23:36:00.517053] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:15:12.362 [2024-09-28 23:36:00.517069] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:12.362 [2024-09-28 23:36:00.517085] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:15:12.362 [2024-09-28 23:36:00.517100] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:15:12.362 [2024-09-28 23:36:00.517110] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:12.362 [2024-09-28 23:36:00.517122] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:15:12.362 [2024-09-28 23:36:00.517133] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:15:12.362 [2024-09-28 23:36:00.517146] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:12.362 [2024-09-28 23:36:00.517156] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:15:12.362 [2024-09-28 23:36:00.517169] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:15:12.362 [2024-09-28 23:36:00.517179] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:12.362 [2024-09-28 23:36:00.517192] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:15:12.362 [2024-09-28 23:36:00.517207] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:15:12.362 [2024-09-28 23:36:00.517221] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:12.362 [2024-09-28 23:36:00.517238] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:15:12.362 [2024-09-28 23:36:00.517254] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:15:12.362 [2024-09-28 23:36:00.517266] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:12.362 [2024-09-28 23:36:00.517279] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:15:12.362 [2024-09-28 23:36:00.517290] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:15:12.362 [2024-09-28 23:36:00.517303] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:12.362 [2024-09-28 23:36:00.517314] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:15:12.362 [2024-09-28 23:36:00.517327] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:15:12.362 [2024-09-28 23:36:00.517353] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:12.362 [2024-09-28 23:36:00.517368] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:15:12.362 [2024-09-28 23:36:00.517380] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:15:12.362 [2024-09-28 23:36:00.517399] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:12.362 [2024-09-28 23:36:00.517409] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:15:12.362 [2024-09-28 23:36:00.517434] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:15:12.362 [2024-09-28 23:36:00.517451] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:12.362 [2024-09-28 23:36:00.517468] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:12.362 [2024-09-28 23:36:00.517485] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:15:12.362 [2024-09-28 23:36:00.517500] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:15:12.362 [2024-09-28 23:36:00.517517] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:15:12.362 [2024-09-28 23:36:00.517531] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:15:12.362 [2024-09-28 23:36:00.517542] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:15:12.362 [2024-09-28 23:36:00.517556] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:15:12.362 [2024-09-28 23:36:00.517572] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:15:12.362 [2024-09-28 23:36:00.517589] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:12.362 [2024-09-28 23:36:00.517598] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:15:12.362 [2024-09-28 23:36:00.517607] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:15:12.362 [2024-09-28 23:36:00.517615] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:15:12.362 [2024-09-28 23:36:00.517624] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:15:12.362 [2024-09-28 23:36:00.517631] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:15:12.362 [2024-09-28 23:36:00.517639] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:15:12.362 [2024-09-28 23:36:00.517651] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:15:12.362 [2024-09-28 23:36:00.517659] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 
blk_offs:0x7120 blk_sz:0x40 00:15:12.362 [2024-09-28 23:36:00.517667] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:15:12.362 [2024-09-28 23:36:00.517679] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:15:12.362 [2024-09-28 23:36:00.517686] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:15:12.362 [2024-09-28 23:36:00.517695] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:15:12.362 [2024-09-28 23:36:00.517702] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:15:12.363 [2024-09-28 23:36:00.517711] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:15:12.363 [2024-09-28 23:36:00.517717] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:15:12.363 [2024-09-28 23:36:00.517727] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:12.363 [2024-09-28 23:36:00.517735] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:15:12.363 [2024-09-28 23:36:00.517744] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:15:12.363 [2024-09-28 23:36:00.517751] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:15:12.363 [2024-09-28 23:36:00.517761] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:15:12.363 [2024-09-28 23:36:00.517769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:12.363 [2024-09-28 23:36:00.517778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:15:12.363 [2024-09-28 23:36:00.517785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.024 ms 00:15:12.363 [2024-09-28 23:36:00.517794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:12.363 [2024-09-28 23:36:00.517859] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
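Two quick consistency checks on the layout dump above, using only the parameters it reports:

    # "L2P entries: 20971520" at "L2P address size: 4" bytes each:
    echo $(( 20971520 * 4 / 1024 / 1024 ))      # 80  -> matches "Region l2p ... blocks: 80.00 MiB"
    # The same entry count at the 4 KiB block size is the logical space the L2P addresses:
    echo $(( 20971520 * 4096 / 1024 / 1024 ))   # 81920 MiB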
00:15:12.363 [2024-09-28 23:36:00.517873] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:15:14.897 [2024-09-28 23:36:02.785616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:14.897 [2024-09-28 23:36:02.785676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:15:14.897 [2024-09-28 23:36:02.785691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2267.752 ms 00:15:14.897 [2024-09-28 23:36:02.785701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:14.897 [2024-09-28 23:36:02.823328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:14.897 [2024-09-28 23:36:02.823582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:14.897 [2024-09-28 23:36:02.823610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.417 ms 00:15:14.897 [2024-09-28 23:36:02.823625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:14.898 [2024-09-28 23:36:02.823824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:14.898 [2024-09-28 23:36:02.823847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:15:14.898 [2024-09-28 23:36:02.823860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:15:14.898 [2024-09-28 23:36:02.823876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:14.898 [2024-09-28 23:36:02.854733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:14.898 [2024-09-28 23:36:02.854769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:14.898 [2024-09-28 23:36:02.854780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.784 ms 00:15:14.898 [2024-09-28 23:36:02.854789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:14.898 [2024-09-28 23:36:02.854822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:14.898 [2024-09-28 23:36:02.854833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:14.898 [2024-09-28 23:36:02.854843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:15:14.898 [2024-09-28 23:36:02.854852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:14.898 [2024-09-28 23:36:02.855170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:14.898 [2024-09-28 23:36:02.855187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:14.898 [2024-09-28 23:36:02.855196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.268 ms 00:15:14.898 [2024-09-28 23:36:02.855205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:14.898 [2024-09-28 23:36:02.855326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:14.898 [2024-09-28 23:36:02.855337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:14.898 [2024-09-28 23:36:02.855345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:15:14.898 [2024-09-28 23:36:02.855355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:14.898 [2024-09-28 23:36:02.869371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:14.898 [2024-09-28 23:36:02.869402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:14.898 [2024-09-28 
23:36:02.869412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.987 ms 00:15:14.898 [2024-09-28 23:36:02.869423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:14.898 [2024-09-28 23:36:02.880660] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:15:14.898 [2024-09-28 23:36:02.894414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:14.898 [2024-09-28 23:36:02.894443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:15:14.898 [2024-09-28 23:36:02.894456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.906 ms 00:15:14.898 [2024-09-28 23:36:02.894466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:14.898 [2024-09-28 23:36:02.946253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:14.898 [2024-09-28 23:36:02.946290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:15:14.898 [2024-09-28 23:36:02.946304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.749 ms 00:15:14.898 [2024-09-28 23:36:02.946314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:14.898 [2024-09-28 23:36:02.946493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:14.898 [2024-09-28 23:36:02.946503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:15:14.898 [2024-09-28 23:36:02.946533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.137 ms 00:15:14.898 [2024-09-28 23:36:02.946542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:14.898 [2024-09-28 23:36:02.969390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:14.898 [2024-09-28 23:36:02.969420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:15:14.898 [2024-09-28 23:36:02.969433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.797 ms 00:15:14.898 [2024-09-28 23:36:02.969441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:14.898 [2024-09-28 23:36:02.991869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:14.898 [2024-09-28 23:36:02.991899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:15:14.898 [2024-09-28 23:36:02.991912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.388 ms 00:15:14.898 [2024-09-28 23:36:02.991919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:14.898 [2024-09-28 23:36:02.992479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:14.898 [2024-09-28 23:36:02.992500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:15:14.898 [2024-09-28 23:36:02.992524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.521 ms 00:15:14.898 [2024-09-28 23:36:02.992532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:14.898 [2024-09-28 23:36:03.062037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:14.898 [2024-09-28 23:36:03.062081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:15:14.898 [2024-09-28 23:36:03.062099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 69.461 ms 00:15:14.898 [2024-09-28 23:36:03.062107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:15.159 [2024-09-28 
23:36:03.086032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:15.159 [2024-09-28 23:36:03.086068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:15:15.159 [2024-09-28 23:36:03.086084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.829 ms 00:15:15.159 [2024-09-28 23:36:03.086092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:15.159 [2024-09-28 23:36:03.109239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:15.159 [2024-09-28 23:36:03.109274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:15:15.159 [2024-09-28 23:36:03.109286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.103 ms 00:15:15.159 [2024-09-28 23:36:03.109294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:15.159 [2024-09-28 23:36:03.132103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:15.159 [2024-09-28 23:36:03.132240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:15:15.160 [2024-09-28 23:36:03.132259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.769 ms 00:15:15.160 [2024-09-28 23:36:03.132266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:15.160 [2024-09-28 23:36:03.132305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:15.160 [2024-09-28 23:36:03.132314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:15:15.160 [2024-09-28 23:36:03.132326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:15:15.160 [2024-09-28 23:36:03.132334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:15.160 [2024-09-28 23:36:03.132436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:15.160 [2024-09-28 23:36:03.132446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:15:15.160 [2024-09-28 23:36:03.132456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:15:15.160 [2024-09-28 23:36:03.132465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:15.160 [2024-09-28 23:36:03.133716] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2628.613 ms, result 0 00:15:15.160 { 00:15:15.160 "name": "ftl0", 00:15:15.160 "uuid": "6b530286-c748-446d-97f2-aae22a10637f" 00:15:15.160 } 00:15:15.160 23:36:03 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:15:15.160 23:36:03 ftl.ftl_fio_basic -- common/autotest_common.sh@899 -- # local bdev_name=ftl0 00:15:15.160 23:36:03 ftl.ftl_fio_basic -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:15.160 23:36:03 ftl.ftl_fio_basic -- common/autotest_common.sh@901 -- # local i 00:15:15.160 23:36:03 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:15.160 23:36:03 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:15.160 23:36:03 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:15.420 23:36:03 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:15:15.420 [ 00:15:15.420 { 00:15:15.420 "name": "ftl0", 00:15:15.420 "aliases": [ 00:15:15.420 "6b530286-c748-446d-97f2-aae22a10637f" 00:15:15.420 ], 00:15:15.420 "product_name": "FTL 
disk", 00:15:15.420 "block_size": 4096, 00:15:15.420 "num_blocks": 20971520, 00:15:15.421 "uuid": "6b530286-c748-446d-97f2-aae22a10637f", 00:15:15.421 "assigned_rate_limits": { 00:15:15.421 "rw_ios_per_sec": 0, 00:15:15.421 "rw_mbytes_per_sec": 0, 00:15:15.421 "r_mbytes_per_sec": 0, 00:15:15.421 "w_mbytes_per_sec": 0 00:15:15.421 }, 00:15:15.421 "claimed": false, 00:15:15.421 "zoned": false, 00:15:15.421 "supported_io_types": { 00:15:15.421 "read": true, 00:15:15.421 "write": true, 00:15:15.421 "unmap": true, 00:15:15.421 "flush": true, 00:15:15.421 "reset": false, 00:15:15.421 "nvme_admin": false, 00:15:15.421 "nvme_io": false, 00:15:15.421 "nvme_io_md": false, 00:15:15.421 "write_zeroes": true, 00:15:15.421 "zcopy": false, 00:15:15.421 "get_zone_info": false, 00:15:15.421 "zone_management": false, 00:15:15.421 "zone_append": false, 00:15:15.421 "compare": false, 00:15:15.421 "compare_and_write": false, 00:15:15.421 "abort": false, 00:15:15.421 "seek_hole": false, 00:15:15.421 "seek_data": false, 00:15:15.421 "copy": false, 00:15:15.421 "nvme_iov_md": false 00:15:15.421 }, 00:15:15.421 "driver_specific": { 00:15:15.421 "ftl": { 00:15:15.421 "base_bdev": "a29b946e-20f5-4fe1-bb21-aa81a961f09d", 00:15:15.421 "cache": "nvc0n1p0" 00:15:15.421 } 00:15:15.421 } 00:15:15.421 } 00:15:15.421 ] 00:15:15.421 23:36:03 ftl.ftl_fio_basic -- common/autotest_common.sh@907 -- # return 0 00:15:15.421 23:36:03 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:15:15.421 23:36:03 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:15:15.679 23:36:03 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:15:15.679 23:36:03 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:15:15.938 [2024-09-28 23:36:03.946131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:15.938 [2024-09-28 23:36:03.946176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:15:15.938 [2024-09-28 23:36:03.946195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:15:15.938 [2024-09-28 23:36:03.946205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:15.938 [2024-09-28 23:36:03.946237] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:15:15.938 [2024-09-28 23:36:03.948858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:15.938 [2024-09-28 23:36:03.948888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:15:15.938 [2024-09-28 23:36:03.948900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.603 ms 00:15:15.938 [2024-09-28 23:36:03.948908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:15.938 [2024-09-28 23:36:03.949311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:15.938 [2024-09-28 23:36:03.949326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:15:15.938 [2024-09-28 23:36:03.949336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.372 ms 00:15:15.938 [2024-09-28 23:36:03.949344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:15.938 [2024-09-28 23:36:03.952585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:15.938 [2024-09-28 23:36:03.952607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:15:15.938 
[2024-09-28 23:36:03.952620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.219 ms 00:15:15.938 [2024-09-28 23:36:03.952629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:15.938 [2024-09-28 23:36:03.958772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:15.938 [2024-09-28 23:36:03.958801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:15:15.938 [2024-09-28 23:36:03.958812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.116 ms 00:15:15.938 [2024-09-28 23:36:03.958820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:15.938 [2024-09-28 23:36:03.982755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:15.938 [2024-09-28 23:36:03.982788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:15:15.938 [2024-09-28 23:36:03.982801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.863 ms 00:15:15.938 [2024-09-28 23:36:03.982809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:15.938 [2024-09-28 23:36:03.997645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:15.938 [2024-09-28 23:36:03.997779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:15:15.938 [2024-09-28 23:36:03.997799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.793 ms 00:15:15.938 [2024-09-28 23:36:03.997808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:15.938 [2024-09-28 23:36:03.997976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:15.938 [2024-09-28 23:36:03.997988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:15:15.938 [2024-09-28 23:36:03.997998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.130 ms 00:15:15.939 [2024-09-28 23:36:03.998007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:15.939 [2024-09-28 23:36:04.020949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:15.939 [2024-09-28 23:36:04.021062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:15:15.939 [2024-09-28 23:36:04.021081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.920 ms 00:15:15.939 [2024-09-28 23:36:04.021088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:15.939 [2024-09-28 23:36:04.043682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:15.939 [2024-09-28 23:36:04.043789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:15:15.939 [2024-09-28 23:36:04.043807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.556 ms 00:15:15.939 [2024-09-28 23:36:04.043814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:15.939 [2024-09-28 23:36:04.066470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:15.939 [2024-09-28 23:36:04.066499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:15:15.939 [2024-09-28 23:36:04.066526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.617 ms 00:15:15.939 [2024-09-28 23:36:04.066533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:15.939 [2024-09-28 23:36:04.088964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:15.939 [2024-09-28 23:36:04.088994] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:15:15.939 [2024-09-28 23:36:04.089006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.329 ms 00:15:15.939 [2024-09-28 23:36:04.089013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:15.939 [2024-09-28 23:36:04.089053] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:15:15.939 [2024-09-28 23:36:04.089068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:15:15.939 [2024-09-28 23:36:04.089080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:15:15.939 [2024-09-28 23:36:04.089088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:15:15.939 [2024-09-28 23:36:04.089097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:15:15.939 [2024-09-28 23:36:04.089105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:15:15.939 [2024-09-28 23:36:04.089114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:15:15.939 [2024-09-28 23:36:04.089122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:15:15.939 [2024-09-28 23:36:04.089133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:15:15.939 [2024-09-28 23:36:04.089140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:15:15.939 [2024-09-28 23:36:04.089149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:15:15.939 [2024-09-28 23:36:04.089156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:15:15.939 [2024-09-28 23:36:04.089165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:15:15.939 [2024-09-28 23:36:04.089173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:15:15.939 [2024-09-28 23:36:04.089182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:15:15.939 [2024-09-28 23:36:04.089189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:15:15.939 [2024-09-28 23:36:04.089198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:15:15.939 [2024-09-28 23:36:04.089205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:15:15.939 [2024-09-28 23:36:04.089214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:15:15.939 [2024-09-28 23:36:04.089221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:15:15.939 [2024-09-28 23:36:04.089232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:15:15.939 [2024-09-28 23:36:04.089239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:15:15.939 [2024-09-28 23:36:04.089248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:15:15.939 
[2024-09-28 23:36:04.089255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:15:15.939 [2024-09-28 23:36:04.089266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:15:15.939 [2024-09-28 23:36:04.089273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:15:15.939 [2024-09-28 23:36:04.089282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:15:15.939 [2024-09-28 23:36:04.089290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:15:15.939 [2024-09-28 23:36:04.089300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:15:15.939 [2024-09-28 23:36:04.089307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:15:15.939 [2024-09-28 23:36:04.089316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:15:15.939 [2024-09-28 23:36:04.089323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:15:15.939 [2024-09-28 23:36:04.089332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:15:15.939 [2024-09-28 23:36:04.089342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:15:15.939 [2024-09-28 23:36:04.089351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:15:15.939 [2024-09-28 23:36:04.089359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:15:15.939 [2024-09-28 23:36:04.089368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:15:15.939 [2024-09-28 23:36:04.089375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:15:15.939 [2024-09-28 23:36:04.089383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:15:15.939 [2024-09-28 23:36:04.089391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:15:15.939 [2024-09-28 23:36:04.089401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:15:15.939 [2024-09-28 23:36:04.089408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:15:15.939 [2024-09-28 23:36:04.089417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:15:15.939 [2024-09-28 23:36:04.089424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:15:15.939 [2024-09-28 23:36:04.089433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:15:15.939 [2024-09-28 23:36:04.089440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:15:15.939 [2024-09-28 23:36:04.089450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:15:15.939 [2024-09-28 23:36:04.089457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:15:15.939 [2024-09-28 23:36:04.089466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:15:15.939 [2024-09-28 23:36:04.089474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:15:15.939 [2024-09-28 23:36:04.089482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:15:15.939 [2024-09-28 23:36:04.089489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:15:15.939 [2024-09-28 23:36:04.089499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:15:15.939 [2024-09-28 23:36:04.089506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:15:15.939 [2024-09-28 23:36:04.089531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:15:15.939 [2024-09-28 23:36:04.089539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:15:15.939 [2024-09-28 23:36:04.089549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:15:15.939 [2024-09-28 23:36:04.089557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:15:15.939 [2024-09-28 23:36:04.089566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:15:15.939 [2024-09-28 23:36:04.089573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:15:15.939 [2024-09-28 23:36:04.089583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:15:15.939 [2024-09-28 23:36:04.089591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:15:15.939 [2024-09-28 23:36:04.089599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:15:15.939 [2024-09-28 23:36:04.089606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:15:15.939 [2024-09-28 23:36:04.089615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:15:15.939 [2024-09-28 23:36:04.089625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:15:15.939 [2024-09-28 23:36:04.089633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:15:15.939 [2024-09-28 23:36:04.089641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:15:15.939 [2024-09-28 23:36:04.089650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:15:15.939 [2024-09-28 23:36:04.089678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:15:15.939 [2024-09-28 23:36:04.089687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:15:15.939 [2024-09-28 23:36:04.089694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:15:15.939 [2024-09-28 23:36:04.089706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:15:15.939 [2024-09-28 23:36:04.089714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:15:15.939 [2024-09-28 23:36:04.089722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:15:15.939 [2024-09-28 23:36:04.089730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:15:15.939 [2024-09-28 23:36:04.089739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:15:15.940 [2024-09-28 23:36:04.089749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:15:15.940 [2024-09-28 23:36:04.089758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:15:15.940 [2024-09-28 23:36:04.089765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:15:15.940 [2024-09-28 23:36:04.089774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:15:15.940 [2024-09-28 23:36:04.089782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:15:15.940 [2024-09-28 23:36:04.089791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:15:15.940 [2024-09-28 23:36:04.089798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:15:15.940 [2024-09-28 23:36:04.089806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:15:15.940 [2024-09-28 23:36:04.089814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:15:15.940 [2024-09-28 23:36:04.089822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:15:15.940 [2024-09-28 23:36:04.089830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:15:15.940 [2024-09-28 23:36:04.089852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:15:15.940 [2024-09-28 23:36:04.089860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:15:15.940 [2024-09-28 23:36:04.089869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:15:15.940 [2024-09-28 23:36:04.089876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:15:15.940 [2024-09-28 23:36:04.089885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:15:15.940 [2024-09-28 23:36:04.089892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:15:15.940 [2024-09-28 23:36:04.089901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:15:15.940 [2024-09-28 23:36:04.089908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:15:15.940 [2024-09-28 23:36:04.089917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:15:15.940 [2024-09-28 23:36:04.089927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:15:15.940 [2024-09-28 23:36:04.089938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:15:15.940 [2024-09-28 23:36:04.089945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:15:15.940 [2024-09-28 23:36:04.089954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:15:15.940 [2024-09-28 23:36:04.089970] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:15:15.940 [2024-09-28 23:36:04.089978] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6b530286-c748-446d-97f2-aae22a10637f 00:15:15.940 [2024-09-28 23:36:04.089986] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:15:15.940 [2024-09-28 23:36:04.089996] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:15:15.940 [2024-09-28 23:36:04.090003] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:15:15.940 [2024-09-28 23:36:04.090012] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:15:15.940 [2024-09-28 23:36:04.090019] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:15:15.940 [2024-09-28 23:36:04.090027] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:15:15.940 [2024-09-28 23:36:04.090034] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:15:15.940 [2024-09-28 23:36:04.090042] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:15:15.940 [2024-09-28 23:36:04.090049] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:15:15.940 [2024-09-28 23:36:04.090057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:15.940 [2024-09-28 23:36:04.090064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:15:15.940 [2024-09-28 23:36:04.090073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.005 ms 00:15:15.940 [2024-09-28 23:36:04.090082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:15.940 [2024-09-28 23:36:04.102491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:15.940 [2024-09-28 23:36:04.102540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:15:15.940 [2024-09-28 23:36:04.102553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.373 ms 00:15:15.940 [2024-09-28 23:36:04.102560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:15.940 [2024-09-28 23:36:04.102900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:15.940 [2024-09-28 23:36:04.102919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:15:15.940 [2024-09-28 23:36:04.102929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.315 ms 00:15:15.940 [2024-09-28 23:36:04.102936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:16.198 [2024-09-28 23:36:04.146408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:16.198 [2024-09-28 23:36:04.146441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:16.198 [2024-09-28 23:36:04.146453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:16.198 [2024-09-28 23:36:04.146461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
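Note on the statistics block above: it reports 'WAF: inf' because this freshly created device performed 960 internal (metadata) writes against 0 user writes, and write amplification is conventionally total writes over user writes; the formula here is the standard WAF definition, assumed rather than quoted from ftl_debug.c. A one-liner reproducing the arithmetic:

  # WAF = total writes / user writes; zero user writes => "inf", as dumped above.
  awk 'BEGIN { total = 960; user = 0; print (user ? total / user : "inf") }'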
00:15:16.198 [2024-09-28 23:36:04.146537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:16.198 [2024-09-28 23:36:04.146548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:16.198 [2024-09-28 23:36:04.146557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:16.198 [2024-09-28 23:36:04.146564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:16.198 [2024-09-28 23:36:04.146651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:16.198 [2024-09-28 23:36:04.146661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:16.198 [2024-09-28 23:36:04.146670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:16.198 [2024-09-28 23:36:04.146678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:16.198 [2024-09-28 23:36:04.146704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:16.198 [2024-09-28 23:36:04.146712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:16.198 [2024-09-28 23:36:04.146723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:16.198 [2024-09-28 23:36:04.146730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:16.198 [2024-09-28 23:36:04.227964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:16.198 [2024-09-28 23:36:04.228001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:16.198 [2024-09-28 23:36:04.228014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:16.198 [2024-09-28 23:36:04.228023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:16.198 [2024-09-28 23:36:04.290675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:16.198 [2024-09-28 23:36:04.290709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:16.198 [2024-09-28 23:36:04.290723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:16.198 [2024-09-28 23:36:04.290731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:16.198 [2024-09-28 23:36:04.290803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:16.198 [2024-09-28 23:36:04.290812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:16.198 [2024-09-28 23:36:04.290822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:16.198 [2024-09-28 23:36:04.290829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:16.199 [2024-09-28 23:36:04.290907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:16.199 [2024-09-28 23:36:04.290917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:16.199 [2024-09-28 23:36:04.290926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:16.199 [2024-09-28 23:36:04.290933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:16.199 [2024-09-28 23:36:04.291034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:16.199 [2024-09-28 23:36:04.291044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:16.199 [2024-09-28 23:36:04.291053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:16.199 [2024-09-28 
23:36:04.291060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:16.199 [2024-09-28 23:36:04.291105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:16.199 [2024-09-28 23:36:04.291114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:15:16.199 [2024-09-28 23:36:04.291123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:16.199 [2024-09-28 23:36:04.291130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:16.199 [2024-09-28 23:36:04.291176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:16.199 [2024-09-28 23:36:04.291184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:16.199 [2024-09-28 23:36:04.291193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:16.199 [2024-09-28 23:36:04.291200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:16.199 [2024-09-28 23:36:04.291247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:16.199 [2024-09-28 23:36:04.291256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:16.199 [2024-09-28 23:36:04.291265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:16.199 [2024-09-28 23:36:04.291272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:16.199 [2024-09-28 23:36:04.291412] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 345.266 ms, result 0 00:15:16.199 true 00:15:16.199 23:36:04 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 72882 00:15:16.199 23:36:04 ftl.ftl_fio_basic -- common/autotest_common.sh@950 -- # '[' -z 72882 ']' 00:15:16.199 23:36:04 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # kill -0 72882 00:15:16.199 23:36:04 ftl.ftl_fio_basic -- common/autotest_common.sh@955 -- # uname 00:15:16.199 23:36:04 ftl.ftl_fio_basic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:16.199 23:36:04 ftl.ftl_fio_basic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72882 00:15:16.199 killing process with pid 72882 00:15:16.199 23:36:04 ftl.ftl_fio_basic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:16.199 23:36:04 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:16.199 23:36:04 ftl.ftl_fio_basic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72882' 00:15:16.199 23:36:04 ftl.ftl_fio_basic -- common/autotest_common.sh@969 -- # kill 72882 00:15:16.199 23:36:04 ftl.ftl_fio_basic -- common/autotest_common.sh@974 -- # wait 72882 00:15:22.809 23:36:10 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:15:22.809 23:36:10 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:15:22.809 23:36:10 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:15:22.809 23:36:10 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:22.809 23:36:10 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:22.809 23:36:10 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:15:22.809 23:36:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:15:22.809 23:36:10 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:15:22.809 23:36:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:22.809 23:36:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:15:22.809 23:36:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:22.809 23:36:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:15:22.809 23:36:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:15:22.809 23:36:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:15:22.809 23:36:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:22.809 23:36:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:15:22.809 23:36:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:15:22.809 23:36:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:22.809 23:36:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:22.809 23:36:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:15:22.809 23:36:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:22.809 23:36:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:15:22.809 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:15:22.809 fio-3.35 00:15:22.809 Starting 1 thread 00:15:26.998 00:15:26.998 test: (groupid=0, jobs=1): err= 0: pid=73056: Sat Sep 28 23:36:15 2024 00:15:26.998 read: IOPS=1171, BW=77.8MiB/s (81.5MB/s)(255MiB/3273msec) 00:15:26.998 slat (nsec): min=2874, max=95564, avg=4185.58, stdev=2417.85 00:15:26.998 clat (usec): min=254, max=1158, avg=387.52, stdev=125.95 00:15:26.998 lat (usec): min=257, max=1162, avg=391.70, stdev=126.46 00:15:26.998 clat percentiles (usec): 00:15:26.998 | 1.00th=[ 277], 5.00th=[ 306], 10.00th=[ 310], 20.00th=[ 318], 00:15:26.998 | 30.00th=[ 322], 40.00th=[ 322], 50.00th=[ 326], 60.00th=[ 334], 00:15:26.999 | 70.00th=[ 392], 80.00th=[ 461], 90.00th=[ 529], 95.00th=[ 652], 00:15:26.999 | 99.00th=[ 865], 99.50th=[ 930], 99.90th=[ 1057], 99.95th=[ 1123], 00:15:26.999 | 99.99th=[ 1156] 00:15:26.999 write: IOPS=1179, BW=78.3MiB/s (82.1MB/s)(256MiB/3270msec); 0 zone resets 00:15:26.999 slat (nsec): min=13599, max=50498, avg=18406.02, stdev=3777.53 00:15:26.999 clat (usec): min=282, max=1256, avg=427.74, stdev=138.85 00:15:26.999 lat (usec): min=305, max=1274, avg=446.14, stdev=139.59 00:15:26.999 clat percentiles (usec): 00:15:26.999 | 1.00th=[ 318], 5.00th=[ 330], 10.00th=[ 334], 20.00th=[ 343], 00:15:26.999 | 30.00th=[ 347], 40.00th=[ 351], 50.00th=[ 355], 60.00th=[ 379], 00:15:26.999 | 70.00th=[ 441], 80.00th=[ 537], 90.00th=[ 594], 95.00th=[ 734], 00:15:26.999 | 99.00th=[ 930], 99.50th=[ 955], 99.90th=[ 1123], 99.95th=[ 1172], 00:15:26.999 | 99.99th=[ 1254] 00:15:26.999 bw ( KiB/s): min=59024, max=95744, per=98.79%, avg=79212.50, stdev=15133.36, samples=6 00:15:26.999 iops : min= 868, max= 1408, avg=1164.83, stdev=222.50, samples=6 00:15:26.999 lat (usec) : 500=80.69%, 750=14.89%, 1000=4.10% 
00:15:26.999 lat (msec) : 2=0.33% 00:15:26.999 cpu : usr=99.27%, sys=0.09%, ctx=5, majf=0, minf=1169 00:15:26.999 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:26.999 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:26.999 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:26.999 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:26.999 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:26.999 00:15:26.999 Run status group 0 (all jobs): 00:15:26.999 READ: bw=77.8MiB/s (81.5MB/s), 77.8MiB/s-77.8MiB/s (81.5MB/s-81.5MB/s), io=255MiB (267MB), run=3273-3273msec 00:15:26.999 WRITE: bw=78.3MiB/s (82.1MB/s), 78.3MiB/s-78.3MiB/s (82.1MB/s-82.1MB/s), io=256MiB (269MB), run=3270-3270msec 00:15:28.443 ----------------------------------------------------- 00:15:28.443 Suppressions used: 00:15:28.443 count bytes template 00:15:28.443 1 5 /usr/src/fio/parse.c 00:15:28.443 1 8 libtcmalloc_minimal.so 00:15:28.443 1 904 libcrypto.so 00:15:28.443 ----------------------------------------------------- 00:15:28.443 00:15:28.443 23:36:16 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:15:28.443 23:36:16 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:28.443 23:36:16 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:28.702 23:36:16 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:15:28.702 23:36:16 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:15:28.702 23:36:16 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:28.702 23:36:16 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:28.702 23:36:16 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:15:28.702 23:36:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:15:28.702 23:36:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:15:28.702 23:36:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:28.702 23:36:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:15:28.702 23:36:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:28.702 23:36:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:15:28.702 23:36:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:15:28.702 23:36:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:15:28.702 23:36:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:28.702 23:36:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:15:28.702 23:36:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:15:28.702 23:36:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:28.702 23:36:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:28.702 23:36:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:15:28.702 23:36:16 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:28.702 23:36:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:15:28.702 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:15:28.702 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:15:28.702 fio-3.35 00:15:28.702 Starting 2 threads 00:15:55.253 00:15:55.253 first_half: (groupid=0, jobs=1): err= 0: pid=73153: Sat Sep 28 23:36:39 2024 00:15:55.253 read: IOPS=2983, BW=11.7MiB/s (12.2MB/s)(255MiB/21865msec) 00:15:55.253 slat (nsec): min=2987, max=18719, avg=3801.04, stdev=713.48 00:15:55.253 clat (usec): min=556, max=389020, avg=33896.93, stdev=17777.28 00:15:55.253 lat (usec): min=559, max=389024, avg=33900.73, stdev=17777.32 00:15:55.253 clat percentiles (msec): 00:15:55.253 | 1.00th=[ 7], 5.00th=[ 28], 10.00th=[ 29], 20.00th=[ 30], 00:15:55.253 | 30.00th=[ 30], 40.00th=[ 30], 50.00th=[ 31], 60.00th=[ 31], 00:15:55.253 | 70.00th=[ 33], 80.00th=[ 35], 90.00th=[ 39], 95.00th=[ 46], 00:15:55.253 | 99.00th=[ 126], 99.50th=[ 146], 99.90th=[ 186], 99.95th=[ 326], 00:15:55.253 | 99.99th=[ 380] 00:15:55.253 write: IOPS=3887, BW=15.2MiB/s (15.9MB/s)(256MiB/16860msec); 0 zone resets 00:15:55.253 slat (usec): min=3, max=3800, avg= 5.44, stdev=15.50 00:15:55.253 clat (usec): min=350, max=72806, avg=8936.58, stdev=14370.28 00:15:55.253 lat (usec): min=356, max=72811, avg=8942.02, stdev=14370.32 00:15:55.253 clat percentiles (usec): 00:15:55.253 | 1.00th=[ 652], 5.00th=[ 783], 10.00th=[ 947], 20.00th=[ 1188], 00:15:55.253 | 30.00th=[ 2278], 40.00th=[ 3392], 50.00th=[ 4621], 60.00th=[ 5276], 00:15:55.253 | 70.00th=[ 6128], 80.00th=[10945], 90.00th=[16712], 95.00th=[56886], 00:15:55.253 | 99.00th=[64226], 99.50th=[66323], 99.90th=[68682], 99.95th=[69731], 00:15:55.253 | 99.99th=[71828] 00:15:55.253 bw ( KiB/s): min= 1720, max=44864, per=89.83%, avg=24966.10, stdev=15152.72, samples=21 00:15:55.253 iops : min= 430, max=11216, avg=6241.52, stdev=3788.18, samples=21 00:15:55.253 lat (usec) : 500=0.04%, 750=1.89%, 1000=4.30% 00:15:55.253 lat (msec) : 2=8.21%, 4=8.13%, 10=17.54%, 20=6.97%, 50=47.49% 00:15:55.253 lat (msec) : 100=4.53%, 250=0.87%, 500=0.04% 00:15:55.253 cpu : usr=99.41%, sys=0.14%, ctx=48, majf=0, minf=5593 00:15:55.253 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:15:55.253 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:55.253 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:55.253 issued rwts: total=65241,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:55.253 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:55.253 second_half: (groupid=0, jobs=1): err= 0: pid=73154: Sat Sep 28 23:36:39 2024 00:15:55.253 read: IOPS=2962, BW=11.6MiB/s (12.1MB/s)(255MiB/22029msec) 00:15:55.253 slat (nsec): min=2972, max=37566, avg=3908.36, stdev=893.05 00:15:55.253 clat (usec): min=663, max=396712, avg=33232.51, stdev=19238.79 00:15:55.253 lat (usec): min=667, max=396716, avg=33236.42, stdev=19238.86 00:15:55.253 clat percentiles (msec): 00:15:55.253 | 1.00th=[ 7], 5.00th=[ 20], 10.00th=[ 29], 20.00th=[ 30], 00:15:55.253 | 30.00th=[ 30], 40.00th=[ 30], 50.00th=[ 30], 60.00th=[ 31], 00:15:55.253 | 70.00th=[ 32], 80.00th=[ 34], 90.00th=[ 37], 
95.00th=[ 43], 00:15:55.253 | 99.00th=[ 136], 99.50th=[ 157], 99.90th=[ 215], 99.95th=[ 275], 00:15:55.253 | 99.99th=[ 393] 00:15:55.254 write: IOPS=3473, BW=13.6MiB/s (14.2MB/s)(256MiB/18865msec); 0 zone resets 00:15:55.254 slat (usec): min=3, max=1259, avg= 5.62, stdev= 6.24 00:15:55.254 clat (usec): min=357, max=73030, avg=9922.66, stdev=15295.20 00:15:55.254 lat (usec): min=363, max=73035, avg=9928.27, stdev=15295.23 00:15:55.254 clat percentiles (usec): 00:15:55.254 | 1.00th=[ 644], 5.00th=[ 742], 10.00th=[ 848], 20.00th=[ 1090], 00:15:55.254 | 30.00th=[ 1942], 40.00th=[ 3130], 50.00th=[ 4293], 60.00th=[ 5211], 00:15:55.254 | 70.00th=[ 6652], 80.00th=[13566], 90.00th=[27919], 95.00th=[57934], 00:15:55.254 | 99.00th=[64750], 99.50th=[66323], 99.90th=[69731], 99.95th=[71828], 00:15:55.254 | 99.99th=[72877] 00:15:55.254 bw ( KiB/s): min= 960, max=45744, per=85.76%, avg=23834.27, stdev=12588.90, samples=22 00:15:55.254 iops : min= 240, max=11436, avg=5958.55, stdev=3147.21, samples=22 00:15:55.254 lat (usec) : 500=0.03%, 750=2.67%, 1000=5.42% 00:15:55.254 lat (msec) : 2=7.21%, 4=8.43%, 10=15.69%, 20=6.91%, 50=48.29% 00:15:55.254 lat (msec) : 100=4.33%, 250=1.00%, 500=0.03% 00:15:55.254 cpu : usr=99.22%, sys=0.15%, ctx=1260, majf=0, minf=5516 00:15:55.254 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:15:55.254 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:55.254 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:55.254 issued rwts: total=65250,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:55.254 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:55.254 00:15:55.254 Run status group 0 (all jobs): 00:15:55.254 READ: bw=23.1MiB/s (24.3MB/s), 11.6MiB/s-11.7MiB/s (12.1MB/s-12.2MB/s), io=510MiB (534MB), run=21865-22029msec 00:15:55.254 WRITE: bw=27.1MiB/s (28.5MB/s), 13.6MiB/s-15.2MiB/s (14.2MB/s-15.9MB/s), io=512MiB (537MB), run=16860-18865msec 00:15:55.254 ----------------------------------------------------- 00:15:55.254 Suppressions used: 00:15:55.254 count bytes template 00:15:55.254 2 10 /usr/src/fio/parse.c 00:15:55.254 1 96 /usr/src/fio/iolog.c 00:15:55.254 1 8 libtcmalloc_minimal.so 00:15:55.254 1 904 libcrypto.so 00:15:55.254 ----------------------------------------------------- 00:15:55.254 00:15:55.254 23:36:41 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:15:55.254 23:36:41 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:55.254 23:36:41 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:55.254 23:36:41 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:15:55.254 23:36:41 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:15:55.254 23:36:41 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:55.254 23:36:41 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:55.254 23:36:41 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:15:55.254 23:36:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:15:55.254 23:36:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:15:55.254 23:36:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 
00:15:55.254 23:36:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:15:55.254 23:36:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:55.254 23:36:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:15:55.254 23:36:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:15:55.254 23:36:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:15:55.254 23:36:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:55.254 23:36:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:15:55.254 23:36:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:15:55.254 23:36:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:55.254 23:36:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:55.254 23:36:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:15:55.254 23:36:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:55.254 23:36:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:15:55.254 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:15:55.254 fio-3.35 00:15:55.254 Starting 1 thread 00:16:07.479 00:16:07.479 test: (groupid=0, jobs=1): err= 0: pid=73447: Sat Sep 28 23:36:54 2024 00:16:07.479 read: IOPS=8258, BW=32.3MiB/s (33.8MB/s)(255MiB/7895msec) 00:16:07.479 slat (nsec): min=3045, max=21924, avg=3464.93, stdev=632.25 00:16:07.479 clat (usec): min=480, max=30340, avg=15491.22, stdev=1370.84 00:16:07.479 lat (usec): min=487, max=30343, avg=15494.68, stdev=1370.85 00:16:07.479 clat percentiles (usec): 00:16:07.479 | 1.00th=[14484], 5.00th=[14615], 10.00th=[14746], 20.00th=[14877], 00:16:07.479 | 30.00th=[15008], 40.00th=[15139], 50.00th=[15270], 60.00th=[15401], 00:16:07.479 | 70.00th=[15533], 80.00th=[15664], 90.00th=[16057], 95.00th=[18220], 00:16:07.479 | 99.00th=[21890], 99.50th=[22938], 99.90th=[24511], 99.95th=[26608], 00:16:07.479 | 99.99th=[29754] 00:16:07.479 write: IOPS=17.5k, BW=68.4MiB/s (71.7MB/s)(256MiB/3745msec); 0 zone resets 00:16:07.479 slat (usec): min=4, max=159, avg= 5.94, stdev= 2.08 00:16:07.479 clat (usec): min=461, max=44606, avg=7272.83, stdev=9301.86 00:16:07.479 lat (usec): min=467, max=44611, avg=7278.76, stdev=9301.81 00:16:07.479 clat percentiles (usec): 00:16:07.479 | 1.00th=[ 603], 5.00th=[ 676], 10.00th=[ 750], 20.00th=[ 898], 00:16:07.479 | 30.00th=[ 1029], 40.00th=[ 1319], 50.00th=[ 4555], 60.00th=[ 5276], 00:16:07.479 | 70.00th=[ 6259], 80.00th=[ 7832], 90.00th=[27395], 95.00th=[28705], 00:16:07.479 | 99.00th=[30802], 99.50th=[32637], 99.90th=[34341], 99.95th=[36963], 00:16:07.479 | 99.99th=[43254] 00:16:07.479 bw ( KiB/s): min=28328, max=95440, per=93.63%, avg=65536.00, stdev=19986.18, samples=8 00:16:07.479 iops : min= 7082, max=23860, avg=16384.00, stdev=4996.54, samples=8 00:16:07.479 lat (usec) : 500=0.02%, 750=5.05%, 1000=8.92% 00:16:07.479 lat (msec) : 2=6.73%, 4=1.64%, 10=19.65%, 20=48.87%, 50=9.13% 00:16:07.479 cpu : usr=99.09%, sys=0.27%, ctx=15, majf=0, minf=5565 00:16:07.479 IO 
depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:16:07.479 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:07.479 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:07.479 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:07.479 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:07.479 00:16:07.479 Run status group 0 (all jobs): 00:16:07.479 READ: bw=32.3MiB/s (33.8MB/s), 32.3MiB/s-32.3MiB/s (33.8MB/s-33.8MB/s), io=255MiB (267MB), run=7895-7895msec 00:16:07.479 WRITE: bw=68.4MiB/s (71.7MB/s), 68.4MiB/s-68.4MiB/s (71.7MB/s-71.7MB/s), io=256MiB (268MB), run=3745-3745msec 00:16:08.045 ----------------------------------------------------- 00:16:08.045 Suppressions used: 00:16:08.045 count bytes template 00:16:08.045 1 5 /usr/src/fio/parse.c 00:16:08.045 2 192 /usr/src/fio/iolog.c 00:16:08.045 1 8 libtcmalloc_minimal.so 00:16:08.045 1 904 libcrypto.so 00:16:08.045 ----------------------------------------------------- 00:16:08.045 00:16:08.045 23:36:56 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:16:08.045 23:36:56 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:08.045 23:36:56 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:08.045 23:36:56 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:08.045 Remove shared memory files 00:16:08.045 23:36:56 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:16:08.045 23:36:56 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:16:08.045 23:36:56 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:16:08.045 23:36:56 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:16:08.046 23:36:56 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid57530 /dev/shm/spdk_tgt_trace.pid71799 00:16:08.046 23:36:56 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:16:08.046 23:36:56 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:16:08.046 ************************************ 00:16:08.046 END TEST ftl_fio_basic 00:16:08.046 ************************************ 00:16:08.046 00:16:08.046 real 0m59.256s 00:16:08.046 user 2m10.187s 00:16:08.046 sys 0m2.562s 00:16:08.046 23:36:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:08.046 23:36:56 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:08.305 23:36:56 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:16:08.305 23:36:56 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:16:08.305 23:36:56 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:08.305 23:36:56 ftl -- common/autotest_common.sh@10 -- # set +x 00:16:08.305 ************************************ 00:16:08.305 START TEST ftl_bdevperf 00:16:08.305 ************************************ 00:16:08.305 23:36:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:16:08.305 * Looking for test storage... 
00:16:08.305 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:16:08.305 23:36:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:08.305 23:36:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1681 -- # lcov --version 00:16:08.305 23:36:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:08.305 23:36:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:08.305 23:36:56 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:08.305 23:36:56 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:08.306 23:36:56 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:08.306 23:36:56 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:16:08.306 23:36:56 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:16:08.306 23:36:56 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:16:08.306 23:36:56 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:16:08.306 23:36:56 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:16:08.306 23:36:56 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:16:08.306 23:36:56 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:16:08.306 23:36:56 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:08.306 23:36:56 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:16:08.306 23:36:56 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:16:08.306 23:36:56 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:08.306 23:36:56 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:08.306 23:36:56 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:16:08.306 23:36:56 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:16:08.306 23:36:56 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:08.306 23:36:56 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:16:08.306 23:36:56 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:16:08.306 23:36:56 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:16:08.306 23:36:56 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:16:08.306 23:36:56 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:08.306 23:36:56 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:16:08.306 23:36:56 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:16:08.306 23:36:56 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:08.306 23:36:56 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:08.306 23:36:56 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:16:08.306 23:36:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:08.306 23:36:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:08.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:08.306 --rc genhtml_branch_coverage=1 00:16:08.306 --rc genhtml_function_coverage=1 00:16:08.306 --rc genhtml_legend=1 00:16:08.306 --rc geninfo_all_blocks=1 00:16:08.306 --rc geninfo_unexecuted_blocks=1 00:16:08.306 00:16:08.306 ' 00:16:08.306 23:36:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:08.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:08.306 --rc genhtml_branch_coverage=1 00:16:08.306 
--rc genhtml_function_coverage=1 00:16:08.306 --rc genhtml_legend=1 00:16:08.306 --rc geninfo_all_blocks=1 00:16:08.306 --rc geninfo_unexecuted_blocks=1 00:16:08.306 00:16:08.306 ' 00:16:08.306 23:36:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:08.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:08.306 --rc genhtml_branch_coverage=1 00:16:08.306 --rc genhtml_function_coverage=1 00:16:08.306 --rc genhtml_legend=1 00:16:08.306 --rc geninfo_all_blocks=1 00:16:08.306 --rc geninfo_unexecuted_blocks=1 00:16:08.306 00:16:08.306 ' 00:16:08.306 23:36:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:08.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:08.306 --rc genhtml_branch_coverage=1 00:16:08.306 --rc genhtml_function_coverage=1 00:16:08.306 --rc genhtml_legend=1 00:16:08.306 --rc geninfo_all_blocks=1 00:16:08.306 --rc geninfo_unexecuted_blocks=1 00:16:08.306 00:16:08.306 ' 00:16:08.306 23:36:56 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:16:08.306 23:36:56 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:16:08.306 23:36:56 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:16:08.306 23:36:56 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:16:08.306 23:36:56 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:16:08.306 23:36:56 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:08.306 23:36:56 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:08.306 23:36:56 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:16:08.306 23:36:56 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:16:08.306 23:36:56 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:08.306 23:36:56 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:08.306 23:36:56 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:16:08.306 23:36:56 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:16:08.306 23:36:56 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:08.306 23:36:56 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:08.306 23:36:56 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:16:08.306 23:36:56 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:16:08.306 23:36:56 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:08.306 23:36:56 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:08.306 23:36:56 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:16:08.306 23:36:56 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:16:08.306 23:36:56 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:08.306 23:36:56 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:08.306 23:36:56 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:08.306 23:36:56 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:08.306 23:36:56 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:16:08.306 23:36:56 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:16:08.306 23:36:56 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:08.306 23:36:56 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:08.306 23:36:56 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:16:08.306 23:36:56 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:16:08.306 23:36:56 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:16:08.306 23:36:56 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:08.306 23:36:56 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:16:08.306 23:36:56 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=73668 00:16:08.306 23:36:56 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:16:08.306 23:36:56 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 73668 00:16:08.306 23:36:56 ftl.ftl_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 73668 ']' 00:16:08.306 23:36:56 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:08.306 23:36:56 ftl.ftl_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:08.306 23:36:56 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:16:08.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:08.306 23:36:56 ftl.ftl_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:08.306 23:36:56 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:08.306 23:36:56 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:08.306 [2024-09-28 23:36:56.423974] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
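With the FTL fio suite done, everything above is setup for bdevperf proper: the binary is started idle and all configuration happens over RPC before any I/O is generated. Condensed, the launch pattern traced above amounts to the following; paths and the pid are as in this log, while the flag meanings (-z waits for a perform_tests RPC, -T names the target bdev) are stated from general SPDK bdevperf usage rather than from this log:

    # launch pattern behind the 'waitforlisten 73668' exchange above
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 &   # -z: stay idle until told to run
    bdevperf_pid=$!                                                     # 73668 in this run
    waitforlisten "$bdevperf_pid"   # autotest helper: block until /var/tmp/spdk.sock answers RPCs

Only once the socket answers does the script begin attaching controllers and assembling the FTL stack, which is what the trace below does.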
00:16:08.306 [2024-09-28 23:36:56.424090] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73668 ] 00:16:08.565 [2024-09-28 23:36:56.572959] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:08.824 [2024-09-28 23:36:56.749852] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:09.393 23:36:57 ftl.ftl_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:09.393 23:36:57 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:16:09.393 23:36:57 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:16:09.393 23:36:57 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:16:09.393 23:36:57 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:16:09.393 23:36:57 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:16:09.393 23:36:57 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:16:09.393 23:36:57 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:16:09.393 23:36:57 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:16:09.393 23:36:57 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:16:09.393 23:36:57 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:16:09.393 23:36:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:16:09.393 23:36:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:09.393 23:36:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:16:09.393 23:36:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:16:09.393 23:36:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:16:09.652 23:36:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:09.652 { 00:16:09.652 "name": "nvme0n1", 00:16:09.652 "aliases": [ 00:16:09.652 "6fd6339d-bdb3-4039-be3c-4622702aa0f3" 00:16:09.652 ], 00:16:09.652 "product_name": "NVMe disk", 00:16:09.652 "block_size": 4096, 00:16:09.652 "num_blocks": 1310720, 00:16:09.652 "uuid": "6fd6339d-bdb3-4039-be3c-4622702aa0f3", 00:16:09.652 "numa_id": -1, 00:16:09.652 "assigned_rate_limits": { 00:16:09.652 "rw_ios_per_sec": 0, 00:16:09.652 "rw_mbytes_per_sec": 0, 00:16:09.652 "r_mbytes_per_sec": 0, 00:16:09.652 "w_mbytes_per_sec": 0 00:16:09.652 }, 00:16:09.652 "claimed": true, 00:16:09.652 "claim_type": "read_many_write_one", 00:16:09.652 "zoned": false, 00:16:09.652 "supported_io_types": { 00:16:09.652 "read": true, 00:16:09.652 "write": true, 00:16:09.652 "unmap": true, 00:16:09.652 "flush": true, 00:16:09.652 "reset": true, 00:16:09.652 "nvme_admin": true, 00:16:09.652 "nvme_io": true, 00:16:09.652 "nvme_io_md": false, 00:16:09.652 "write_zeroes": true, 00:16:09.652 "zcopy": false, 00:16:09.652 "get_zone_info": false, 00:16:09.652 "zone_management": false, 00:16:09.652 "zone_append": false, 00:16:09.652 "compare": true, 00:16:09.652 "compare_and_write": false, 00:16:09.652 "abort": true, 00:16:09.652 "seek_hole": false, 00:16:09.652 "seek_data": false, 00:16:09.652 "copy": true, 00:16:09.652 "nvme_iov_md": false 00:16:09.652 }, 00:16:09.652 "driver_specific": { 00:16:09.652 
"nvme": [ 00:16:09.652 { 00:16:09.652 "pci_address": "0000:00:11.0", 00:16:09.652 "trid": { 00:16:09.652 "trtype": "PCIe", 00:16:09.652 "traddr": "0000:00:11.0" 00:16:09.652 }, 00:16:09.652 "ctrlr_data": { 00:16:09.652 "cntlid": 0, 00:16:09.652 "vendor_id": "0x1b36", 00:16:09.652 "model_number": "QEMU NVMe Ctrl", 00:16:09.652 "serial_number": "12341", 00:16:09.652 "firmware_revision": "8.0.0", 00:16:09.652 "subnqn": "nqn.2019-08.org.qemu:12341", 00:16:09.652 "oacs": { 00:16:09.652 "security": 0, 00:16:09.652 "format": 1, 00:16:09.652 "firmware": 0, 00:16:09.652 "ns_manage": 1 00:16:09.652 }, 00:16:09.652 "multi_ctrlr": false, 00:16:09.652 "ana_reporting": false 00:16:09.652 }, 00:16:09.652 "vs": { 00:16:09.652 "nvme_version": "1.4" 00:16:09.652 }, 00:16:09.652 "ns_data": { 00:16:09.652 "id": 1, 00:16:09.652 "can_share": false 00:16:09.652 } 00:16:09.652 } 00:16:09.652 ], 00:16:09.652 "mp_policy": "active_passive" 00:16:09.652 } 00:16:09.652 } 00:16:09.652 ]' 00:16:09.652 23:36:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:09.652 23:36:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:16:09.652 23:36:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:09.652 23:36:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=1310720 00:16:09.652 23:36:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:16:09.653 23:36:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 5120 00:16:09.653 23:36:57 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:16:09.653 23:36:57 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:16:09.653 23:36:57 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:16:09.653 23:36:57 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:16:09.653 23:36:57 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:16:09.911 23:36:58 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=35cac374-e891-4857-a608-1ca5cec34c37 00:16:09.911 23:36:58 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:16:09.911 23:36:58 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 35cac374-e891-4857-a608-1ca5cec34c37 00:16:10.170 23:36:58 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:16:10.428 23:36:58 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=c0e67e68-4a98-4713-b92d-512bc7212ede 00:16:10.429 23:36:58 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u c0e67e68-4a98-4713-b92d-512bc7212ede 00:16:10.429 23:36:58 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=3250a5bd-7fee-4a1f-8374-1994c337c8c8 00:16:10.687 23:36:58 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 3250a5bd-7fee-4a1f-8374-1994c337c8c8 00:16:10.687 23:36:58 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:16:10.687 23:36:58 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:16:10.687 23:36:58 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=3250a5bd-7fee-4a1f-8374-1994c337c8c8 00:16:10.687 23:36:58 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:16:10.688 23:36:58 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 3250a5bd-7fee-4a1f-8374-1994c337c8c8 00:16:10.688 23:36:58 
ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=3250a5bd-7fee-4a1f-8374-1994c337c8c8 00:16:10.688 23:36:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:10.688 23:36:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:16:10.688 23:36:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:16:10.688 23:36:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3250a5bd-7fee-4a1f-8374-1994c337c8c8 00:16:10.688 23:36:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:10.688 { 00:16:10.688 "name": "3250a5bd-7fee-4a1f-8374-1994c337c8c8", 00:16:10.688 "aliases": [ 00:16:10.688 "lvs/nvme0n1p0" 00:16:10.688 ], 00:16:10.688 "product_name": "Logical Volume", 00:16:10.688 "block_size": 4096, 00:16:10.688 "num_blocks": 26476544, 00:16:10.688 "uuid": "3250a5bd-7fee-4a1f-8374-1994c337c8c8", 00:16:10.688 "assigned_rate_limits": { 00:16:10.688 "rw_ios_per_sec": 0, 00:16:10.688 "rw_mbytes_per_sec": 0, 00:16:10.688 "r_mbytes_per_sec": 0, 00:16:10.688 "w_mbytes_per_sec": 0 00:16:10.688 }, 00:16:10.688 "claimed": false, 00:16:10.688 "zoned": false, 00:16:10.688 "supported_io_types": { 00:16:10.688 "read": true, 00:16:10.688 "write": true, 00:16:10.688 "unmap": true, 00:16:10.688 "flush": false, 00:16:10.688 "reset": true, 00:16:10.688 "nvme_admin": false, 00:16:10.688 "nvme_io": false, 00:16:10.688 "nvme_io_md": false, 00:16:10.688 "write_zeroes": true, 00:16:10.688 "zcopy": false, 00:16:10.688 "get_zone_info": false, 00:16:10.688 "zone_management": false, 00:16:10.688 "zone_append": false, 00:16:10.688 "compare": false, 00:16:10.688 "compare_and_write": false, 00:16:10.688 "abort": false, 00:16:10.688 "seek_hole": true, 00:16:10.688 "seek_data": true, 00:16:10.688 "copy": false, 00:16:10.688 "nvme_iov_md": false 00:16:10.688 }, 00:16:10.688 "driver_specific": { 00:16:10.688 "lvol": { 00:16:10.688 "lvol_store_uuid": "c0e67e68-4a98-4713-b92d-512bc7212ede", 00:16:10.688 "base_bdev": "nvme0n1", 00:16:10.688 "thin_provision": true, 00:16:10.688 "num_allocated_clusters": 0, 00:16:10.688 "snapshot": false, 00:16:10.688 "clone": false, 00:16:10.688 "esnap_clone": false 00:16:10.688 } 00:16:10.688 } 00:16:10.688 } 00:16:10.688 ]' 00:16:10.688 23:36:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:10.688 23:36:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:16:10.688 23:36:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:10.947 23:36:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:16:10.947 23:36:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:10.947 23:36:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:16:10.947 23:36:58 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:16:10.947 23:36:58 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:16:10.947 23:36:58 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:16:11.205 23:36:59 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:16:11.205 23:36:59 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:16:11.205 23:36:59 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 3250a5bd-7fee-4a1f-8374-1994c337c8c8 00:16:11.205 23:36:59 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1378 -- # local bdev_name=3250a5bd-7fee-4a1f-8374-1994c337c8c8 00:16:11.205 23:36:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:11.205 23:36:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:16:11.205 23:36:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:16:11.205 23:36:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3250a5bd-7fee-4a1f-8374-1994c337c8c8 00:16:11.205 23:36:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:11.205 { 00:16:11.205 "name": "3250a5bd-7fee-4a1f-8374-1994c337c8c8", 00:16:11.205 "aliases": [ 00:16:11.205 "lvs/nvme0n1p0" 00:16:11.205 ], 00:16:11.205 "product_name": "Logical Volume", 00:16:11.205 "block_size": 4096, 00:16:11.205 "num_blocks": 26476544, 00:16:11.205 "uuid": "3250a5bd-7fee-4a1f-8374-1994c337c8c8", 00:16:11.205 "assigned_rate_limits": { 00:16:11.205 "rw_ios_per_sec": 0, 00:16:11.205 "rw_mbytes_per_sec": 0, 00:16:11.205 "r_mbytes_per_sec": 0, 00:16:11.205 "w_mbytes_per_sec": 0 00:16:11.205 }, 00:16:11.205 "claimed": false, 00:16:11.205 "zoned": false, 00:16:11.205 "supported_io_types": { 00:16:11.205 "read": true, 00:16:11.205 "write": true, 00:16:11.205 "unmap": true, 00:16:11.205 "flush": false, 00:16:11.205 "reset": true, 00:16:11.205 "nvme_admin": false, 00:16:11.205 "nvme_io": false, 00:16:11.205 "nvme_io_md": false, 00:16:11.205 "write_zeroes": true, 00:16:11.205 "zcopy": false, 00:16:11.205 "get_zone_info": false, 00:16:11.205 "zone_management": false, 00:16:11.205 "zone_append": false, 00:16:11.205 "compare": false, 00:16:11.205 "compare_and_write": false, 00:16:11.205 "abort": false, 00:16:11.205 "seek_hole": true, 00:16:11.205 "seek_data": true, 00:16:11.205 "copy": false, 00:16:11.205 "nvme_iov_md": false 00:16:11.205 }, 00:16:11.205 "driver_specific": { 00:16:11.205 "lvol": { 00:16:11.205 "lvol_store_uuid": "c0e67e68-4a98-4713-b92d-512bc7212ede", 00:16:11.205 "base_bdev": "nvme0n1", 00:16:11.205 "thin_provision": true, 00:16:11.205 "num_allocated_clusters": 0, 00:16:11.205 "snapshot": false, 00:16:11.205 "clone": false, 00:16:11.205 "esnap_clone": false 00:16:11.205 } 00:16:11.205 } 00:16:11.205 } 00:16:11.205 ]' 00:16:11.205 23:36:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:11.205 23:36:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:16:11.205 23:36:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:11.463 23:36:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:16:11.463 23:36:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:11.463 23:36:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:16:11.463 23:36:59 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:16:11.463 23:36:59 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:16:11.464 23:36:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:16:11.464 23:36:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size 3250a5bd-7fee-4a1f-8374-1994c337c8c8 00:16:11.464 23:36:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=3250a5bd-7fee-4a1f-8374-1994c337c8c8 00:16:11.464 23:36:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:11.464 23:36:59 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1380 -- # local bs 00:16:11.464 23:36:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:16:11.464 23:36:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3250a5bd-7fee-4a1f-8374-1994c337c8c8 00:16:11.723 23:36:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:11.723 { 00:16:11.723 "name": "3250a5bd-7fee-4a1f-8374-1994c337c8c8", 00:16:11.723 "aliases": [ 00:16:11.723 "lvs/nvme0n1p0" 00:16:11.723 ], 00:16:11.723 "product_name": "Logical Volume", 00:16:11.723 "block_size": 4096, 00:16:11.723 "num_blocks": 26476544, 00:16:11.723 "uuid": "3250a5bd-7fee-4a1f-8374-1994c337c8c8", 00:16:11.723 "assigned_rate_limits": { 00:16:11.723 "rw_ios_per_sec": 0, 00:16:11.723 "rw_mbytes_per_sec": 0, 00:16:11.723 "r_mbytes_per_sec": 0, 00:16:11.723 "w_mbytes_per_sec": 0 00:16:11.723 }, 00:16:11.723 "claimed": false, 00:16:11.723 "zoned": false, 00:16:11.723 "supported_io_types": { 00:16:11.723 "read": true, 00:16:11.723 "write": true, 00:16:11.723 "unmap": true, 00:16:11.723 "flush": false, 00:16:11.723 "reset": true, 00:16:11.723 "nvme_admin": false, 00:16:11.723 "nvme_io": false, 00:16:11.723 "nvme_io_md": false, 00:16:11.723 "write_zeroes": true, 00:16:11.723 "zcopy": false, 00:16:11.723 "get_zone_info": false, 00:16:11.723 "zone_management": false, 00:16:11.723 "zone_append": false, 00:16:11.723 "compare": false, 00:16:11.723 "compare_and_write": false, 00:16:11.723 "abort": false, 00:16:11.723 "seek_hole": true, 00:16:11.723 "seek_data": true, 00:16:11.723 "copy": false, 00:16:11.723 "nvme_iov_md": false 00:16:11.723 }, 00:16:11.723 "driver_specific": { 00:16:11.723 "lvol": { 00:16:11.723 "lvol_store_uuid": "c0e67e68-4a98-4713-b92d-512bc7212ede", 00:16:11.723 "base_bdev": "nvme0n1", 00:16:11.723 "thin_provision": true, 00:16:11.723 "num_allocated_clusters": 0, 00:16:11.723 "snapshot": false, 00:16:11.723 "clone": false, 00:16:11.723 "esnap_clone": false 00:16:11.723 } 00:16:11.723 } 00:16:11.723 } 00:16:11.723 ]' 00:16:11.723 23:36:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:11.723 23:36:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:16:11.723 23:36:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:11.723 23:36:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:16:11.723 23:36:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:11.723 23:36:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:16:11.723 23:36:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:16:11.723 23:36:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 3250a5bd-7fee-4a1f-8374-1994c337c8c8 -c nvc0n1p0 --l2p_dram_limit 20 00:16:11.983 [2024-09-28 23:36:59.936065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:11.983 [2024-09-28 23:36:59.936109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:11.983 [2024-09-28 23:36:59.936121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:11.983 [2024-09-28 23:36:59.936129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.983 [2024-09-28 23:36:59.936171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:11.983 [2024-09-28 23:36:59.936180] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:11.983 [2024-09-28 23:36:59.936187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:16:11.983 [2024-09-28 23:36:59.936206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.983 [2024-09-28 23:36:59.936220] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:11.983 [2024-09-28 23:36:59.936804] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:11.983 [2024-09-28 23:36:59.936823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:11.983 [2024-09-28 23:36:59.936830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:11.983 [2024-09-28 23:36:59.936837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.607 ms 00:16:11.983 [2024-09-28 23:36:59.936844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.983 [2024-09-28 23:36:59.936896] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 2f1c8c09-ef3d-4f9c-9b2c-8fcc7da09a3b 00:16:11.983 [2024-09-28 23:36:59.937831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:11.983 [2024-09-28 23:36:59.937863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:16:11.983 [2024-09-28 23:36:59.937875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:16:11.983 [2024-09-28 23:36:59.937881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.983 [2024-09-28 23:36:59.942522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:11.983 [2024-09-28 23:36:59.942585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:11.983 [2024-09-28 23:36:59.942595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.613 ms 00:16:11.983 [2024-09-28 23:36:59.942601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.983 [2024-09-28 23:36:59.942665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:11.983 [2024-09-28 23:36:59.942672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:11.983 [2024-09-28 23:36:59.942682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:16:11.983 [2024-09-28 23:36:59.942688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.983 [2024-09-28 23:36:59.942729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:11.983 [2024-09-28 23:36:59.942737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:11.983 [2024-09-28 23:36:59.942746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:16:11.983 [2024-09-28 23:36:59.942752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.983 [2024-09-28 23:36:59.942768] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:11.983 [2024-09-28 23:36:59.945628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:11.983 [2024-09-28 23:36:59.945655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:11.983 [2024-09-28 23:36:59.945662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.866 ms 00:16:11.983 [2024-09-28 23:36:59.945669] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.983 [2024-09-28 23:36:59.945692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:11.983 [2024-09-28 23:36:59.945701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:11.983 [2024-09-28 23:36:59.945707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:16:11.983 [2024-09-28 23:36:59.945714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.983 [2024-09-28 23:36:59.945725] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:16:11.983 [2024-09-28 23:36:59.945831] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:11.983 [2024-09-28 23:36:59.945841] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:11.983 [2024-09-28 23:36:59.945850] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:16:11.983 [2024-09-28 23:36:59.945858] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:11.983 [2024-09-28 23:36:59.945867] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:11.983 [2024-09-28 23:36:59.945873] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:16:11.983 [2024-09-28 23:36:59.945881] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:11.983 [2024-09-28 23:36:59.945887] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:11.983 [2024-09-28 23:36:59.945893] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:11.983 [2024-09-28 23:36:59.945899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:11.983 [2024-09-28 23:36:59.945906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:11.983 [2024-09-28 23:36:59.945912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.175 ms 00:16:11.983 [2024-09-28 23:36:59.945919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.983 [2024-09-28 23:36:59.945980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:11.983 [2024-09-28 23:36:59.945989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:11.983 [2024-09-28 23:36:59.945995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:16:11.983 [2024-09-28 23:36:59.946004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.983 [2024-09-28 23:36:59.946072] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:11.983 [2024-09-28 23:36:59.946080] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:11.983 [2024-09-28 23:36:59.946086] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:11.983 [2024-09-28 23:36:59.946093] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:11.983 [2024-09-28 23:36:59.946099] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:11.983 [2024-09-28 23:36:59.946106] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:11.983 [2024-09-28 23:36:59.946111] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:16:11.983 
[2024-09-28 23:36:59.946118] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:11.983 [2024-09-28 23:36:59.946123] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:16:11.983 [2024-09-28 23:36:59.946129] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:11.983 [2024-09-28 23:36:59.946134] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:11.983 [2024-09-28 23:36:59.946146] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:16:11.983 [2024-09-28 23:36:59.946151] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:11.983 [2024-09-28 23:36:59.946158] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:11.983 [2024-09-28 23:36:59.946164] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:16:11.983 [2024-09-28 23:36:59.946172] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:11.983 [2024-09-28 23:36:59.946178] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:11.983 [2024-09-28 23:36:59.946184] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:16:11.983 [2024-09-28 23:36:59.946189] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:11.983 [2024-09-28 23:36:59.946197] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:11.983 [2024-09-28 23:36:59.946202] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:16:11.984 [2024-09-28 23:36:59.946208] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:11.984 [2024-09-28 23:36:59.946213] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:11.984 [2024-09-28 23:36:59.946226] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:16:11.984 [2024-09-28 23:36:59.946232] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:11.984 [2024-09-28 23:36:59.946238] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:11.984 [2024-09-28 23:36:59.946243] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:16:11.984 [2024-09-28 23:36:59.946249] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:11.984 [2024-09-28 23:36:59.946254] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:11.984 [2024-09-28 23:36:59.946260] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:16:11.984 [2024-09-28 23:36:59.946265] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:11.984 [2024-09-28 23:36:59.946273] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:11.984 [2024-09-28 23:36:59.946278] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:16:11.984 [2024-09-28 23:36:59.946284] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:11.984 [2024-09-28 23:36:59.946289] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:11.984 [2024-09-28 23:36:59.946295] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:16:11.984 [2024-09-28 23:36:59.946300] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:11.984 [2024-09-28 23:36:59.946306] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:11.984 [2024-09-28 23:36:59.946311] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:16:11.984 [2024-09-28 23:36:59.946318] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:11.984 [2024-09-28 23:36:59.946322] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:11.984 [2024-09-28 23:36:59.946329] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:16:11.984 [2024-09-28 23:36:59.946334] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:11.984 [2024-09-28 23:36:59.946339] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:11.984 [2024-09-28 23:36:59.946346] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:11.984 [2024-09-28 23:36:59.946352] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:11.984 [2024-09-28 23:36:59.946358] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:11.984 [2024-09-28 23:36:59.946369] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:11.984 [2024-09-28 23:36:59.946374] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:11.984 [2024-09-28 23:36:59.946380] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:11.984 [2024-09-28 23:36:59.946386] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:11.984 [2024-09-28 23:36:59.946392] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:11.984 [2024-09-28 23:36:59.946397] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:11.984 [2024-09-28 23:36:59.946406] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:11.984 [2024-09-28 23:36:59.946414] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:11.984 [2024-09-28 23:36:59.946422] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:16:11.984 [2024-09-28 23:36:59.946427] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:16:11.984 [2024-09-28 23:36:59.946434] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:16:11.984 [2024-09-28 23:36:59.946439] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:16:11.984 [2024-09-28 23:36:59.946446] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:16:11.984 [2024-09-28 23:36:59.946451] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:16:11.984 [2024-09-28 23:36:59.946458] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:16:11.984 [2024-09-28 23:36:59.946463] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:16:11.984 [2024-09-28 23:36:59.946471] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:16:11.984 [2024-09-28 23:36:59.946476] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:16:11.984 [2024-09-28 23:36:59.946483] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:16:11.984 [2024-09-28 23:36:59.946488] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:16:11.984 [2024-09-28 23:36:59.946494] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:16:11.984 [2024-09-28 23:36:59.946500] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:16:11.984 [2024-09-28 23:36:59.946517] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:11.984 [2024-09-28 23:36:59.946523] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:11.984 [2024-09-28 23:36:59.946530] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:11.984 [2024-09-28 23:36:59.946536] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:11.984 [2024-09-28 23:36:59.946543] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:11.984 [2024-09-28 23:36:59.946548] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:11.984 [2024-09-28 23:36:59.946555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:11.984 [2024-09-28 23:36:59.946561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:11.984 [2024-09-28 23:36:59.946568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.532 ms 00:16:11.984 [2024-09-28 23:36:59.946574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.984 [2024-09-28 23:36:59.946602] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
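While that scrub runs (about two seconds in this log, per the duration reported just below), it is worth collapsing the preceding xtrace into the actual provisioning sequence. Every command here appears verbatim in the trace above; only the UUIDs are shortened:

    # base device: NVMe at 0000:00:11.0, wrapped in a thin-provisioned lvol
    rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
    rpc.py bdev_lvol_create_lvstore nvme0n1 lvs
    rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u <lvs uuid>          # 103424 MiB, thin
    # cache device: NVMe at 0000:00:10.0, split down to a 5171 MiB partition
    rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
    rpc.py bdev_split_create nvc0n1 -s 5171 1
    # FTL bdev on top of both, with the L2P capped at 20 MiB of DRAM
    rpc.py -t 240 bdev_ftl_create -b ftl0 -d <base lvol uuid> -c nvc0n1p0 --l2p_dram_limit 20

The 20 MiB cap is small relative to the mapping table: the layout dump above shows an 80.00 MiB l2p region (20971520 entries at 4 bytes each), so only a fraction of the table can be memory-resident at once, which the "l2p maximum resident size is: 19 (of 20) MiB" notice further down confirms.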
00:16:11.984 [2024-09-28 23:36:59.946609] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:16:13.889 [2024-09-28 23:37:02.002385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.889 [2024-09-28 23:37:02.002445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:16:13.889 [2024-09-28 23:37:02.002463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2055.772 ms 00:16:13.889 [2024-09-28 23:37:02.002471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.889 [2024-09-28 23:37:02.038467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.889 [2024-09-28 23:37:02.038526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:13.889 [2024-09-28 23:37:02.038543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.799 ms 00:16:13.889 [2024-09-28 23:37:02.038551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.889 [2024-09-28 23:37:02.038704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.889 [2024-09-28 23:37:02.038716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:13.889 [2024-09-28 23:37:02.038731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:16:13.889 [2024-09-28 23:37:02.038739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.148 [2024-09-28 23:37:02.068934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.148 [2024-09-28 23:37:02.068974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:14.148 [2024-09-28 23:37:02.068991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.161 ms 00:16:14.148 [2024-09-28 23:37:02.068998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.149 [2024-09-28 23:37:02.069026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.149 [2024-09-28 23:37:02.069034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:14.149 [2024-09-28 23:37:02.069044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:16:14.149 [2024-09-28 23:37:02.069051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.149 [2024-09-28 23:37:02.069395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.149 [2024-09-28 23:37:02.069423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:14.149 [2024-09-28 23:37:02.069434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.297 ms 00:16:14.149 [2024-09-28 23:37:02.069442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.149 [2024-09-28 23:37:02.069558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.149 [2024-09-28 23:37:02.069579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:14.149 [2024-09-28 23:37:02.069592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:16:14.149 [2024-09-28 23:37:02.069599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.149 [2024-09-28 23:37:02.081880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.149 [2024-09-28 23:37:02.081912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:14.149 [2024-09-28 
23:37:02.081923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.263 ms 00:16:14.149 [2024-09-28 23:37:02.081931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.149 [2024-09-28 23:37:02.093102] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:16:14.149 [2024-09-28 23:37:02.097912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.149 [2024-09-28 23:37:02.097954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:14.149 [2024-09-28 23:37:02.097964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.915 ms 00:16:14.149 [2024-09-28 23:37:02.097973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.149 [2024-09-28 23:37:02.158821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.149 [2024-09-28 23:37:02.158870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:16:14.149 [2024-09-28 23:37:02.158883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 60.826 ms 00:16:14.149 [2024-09-28 23:37:02.158894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.149 [2024-09-28 23:37:02.159069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.149 [2024-09-28 23:37:02.159083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:14.149 [2024-09-28 23:37:02.159092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.137 ms 00:16:14.149 [2024-09-28 23:37:02.159102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.149 [2024-09-28 23:37:02.182272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.149 [2024-09-28 23:37:02.182310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:16:14.149 [2024-09-28 23:37:02.182321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.131 ms 00:16:14.149 [2024-09-28 23:37:02.182333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.149 [2024-09-28 23:37:02.204754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.149 [2024-09-28 23:37:02.204788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:16:14.149 [2024-09-28 23:37:02.204799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.389 ms 00:16:14.149 [2024-09-28 23:37:02.204808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.149 [2024-09-28 23:37:02.205358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.149 [2024-09-28 23:37:02.205375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:14.149 [2024-09-28 23:37:02.205386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.520 ms 00:16:14.149 [2024-09-28 23:37:02.205395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.149 [2024-09-28 23:37:02.272481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.149 [2024-09-28 23:37:02.272532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:16:14.149 [2024-09-28 23:37:02.272544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 67.045 ms 00:16:14.149 [2024-09-28 23:37:02.272554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.149 [2024-09-28 
23:37:02.296029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.149 [2024-09-28 23:37:02.296070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:16:14.149 [2024-09-28 23:37:02.296081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.411 ms 00:16:14.149 [2024-09-28 23:37:02.296091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.408 [2024-09-28 23:37:02.319005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.408 [2024-09-28 23:37:02.319044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:16:14.408 [2024-09-28 23:37:02.319055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.880 ms 00:16:14.408 [2024-09-28 23:37:02.319064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.408 [2024-09-28 23:37:02.341759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.408 [2024-09-28 23:37:02.341797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:14.408 [2024-09-28 23:37:02.341808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.664 ms 00:16:14.408 [2024-09-28 23:37:02.341816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.408 [2024-09-28 23:37:02.341851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.408 [2024-09-28 23:37:02.341864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:14.408 [2024-09-28 23:37:02.341872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:14.408 [2024-09-28 23:37:02.341881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.408 [2024-09-28 23:37:02.341951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.408 [2024-09-28 23:37:02.341965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:14.408 [2024-09-28 23:37:02.341973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:16:14.408 [2024-09-28 23:37:02.341982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.408 [2024-09-28 23:37:02.343074] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2406.569 ms, result 0 00:16:14.408 { 00:16:14.408 "name": "ftl0", 00:16:14.408 "uuid": "2f1c8c09-ef3d-4f9c-9b2c-8fcc7da09a3b" 00:16:14.408 } 00:16:14.408 23:37:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:16:14.408 23:37:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:16:14.408 23:37:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:16:14.408 23:37:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:16:14.667 [2024-09-28 23:37:02.655175] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:16:14.667 I/O size of 69632 is greater than zero copy threshold (65536). 00:16:14.667 Zero copy mechanism will not be used. 00:16:14.667 Running I/O for 4 seconds... 
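Three timed phases follow, all driven through bdevperf.py perform_tests against ftl0; the invocations below are exactly the ones traced in this section, annotated:

    # phase 1: queue depth 1, 69632 B writes; 68 KiB sits just above the 65536 B
    #          zero-copy threshold, so the non-zero-copy path is exercised (per the notice above)
    bdevperf.py perform_tests -q 1   -w randwrite -t 4 -o 69632
    # phase 2: queue depth 128, 4 KiB random writes
    bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096
    # phase 3: queue depth 128, 4 KiB verify (write, then read back and compare)
    bdevperf.py perform_tests -q 128 -w verify    -t 4 -o 4096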
00:16:18.843 3163.00 IOPS, 210.04 MiB/s 3195.00 IOPS, 212.17 MiB/s 3202.00 IOPS, 212.63 MiB/s 3198.00 IOPS, 212.37 MiB/s 00:16:18.843 Latency(us) 00:16:18.843 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:18.843 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:16:18.843 ftl0 : 4.00 3196.74 212.28 0.00 0.00 328.11 154.39 2230.74 00:16:18.843 =================================================================================================================== 00:16:18.843 Total : 3196.74 212.28 0.00 0.00 328.11 154.39 2230.74 00:16:18.843 [2024-09-28 23:37:06.665160] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:16:18.843 { 00:16:18.843 "results": [ 00:16:18.843 { 00:16:18.843 "job": "ftl0", 00:16:18.843 "core_mask": "0x1", 00:16:18.843 "workload": "randwrite", 00:16:18.843 "status": "finished", 00:16:18.843 "queue_depth": 1, 00:16:18.843 "io_size": 69632, 00:16:18.843 "runtime": 4.001889, 00:16:18.843 "iops": 3196.7403393747304, 00:16:18.843 "mibps": 212.2835381616032, 00:16:18.843 "io_failed": 0, 00:16:18.843 "io_timeout": 0, 00:16:18.843 "avg_latency_us": 328.11118965299534, 00:16:18.843 "min_latency_us": 154.3876923076923, 00:16:18.843 "max_latency_us": 2230.7446153846154 00:16:18.843 } 00:16:18.843 ], 00:16:18.843 "core_count": 1 00:16:18.843 } 00:16:18.843 23:37:06 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:16:18.843 [2024-09-28 23:37:06.772162] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:16:18.843 Running I/O for 4 seconds... 00:16:23.024 11358.00 IOPS, 44.37 MiB/s 11158.50 IOPS, 43.59 MiB/s 11127.67 IOPS, 43.47 MiB/s 11204.00 IOPS, 43.77 MiB/s 00:16:23.024 Latency(us) 00:16:23.024 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:23.024 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:16:23.024 ftl0 : 4.01 11195.36 43.73 0.00 0.00 11411.59 223.70 26617.70 00:16:23.024 =================================================================================================================== 00:16:23.024 Total : 11195.36 43.73 0.00 0.00 11411.59 0.00 26617.70 00:16:23.024 [2024-09-28 23:37:10.794853] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:16:23.024 { 00:16:23.024 "results": [ 00:16:23.024 { 00:16:23.024 "job": "ftl0", 00:16:23.024 "core_mask": "0x1", 00:16:23.024 "workload": "randwrite", 00:16:23.024 "status": "finished", 00:16:23.024 "queue_depth": 128, 00:16:23.024 "io_size": 4096, 00:16:23.024 "runtime": 4.014163, 00:16:23.024 "iops": 11195.360028977399, 00:16:23.024 "mibps": 43.73187511319296, 00:16:23.024 "io_failed": 0, 00:16:23.024 "io_timeout": 0, 00:16:23.024 "avg_latency_us": 11411.591593851632, 00:16:23.024 "min_latency_us": 223.70461538461538, 00:16:23.024 "max_latency_us": 26617.69846153846 00:16:23.024 } 00:16:23.024 ], 00:16:23.024 "core_count": 1 00:16:23.024 } 00:16:23.024 23:37:10 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:16:23.024 [2024-09-28 23:37:10.896245] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:16:23.024 Running I/O for 4 seconds... 
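The verify pass below reports "Verification LBA range: start 0x0 length 0x1400000", which is the whole FTL namespace:

    0x1400000 blocks = 20,971,520 blocks
    20,971,520 blocks x 4096 B = 85,899,345,920 B = 80 GiB

That matches the 20971520 L2P entries reported during startup; the gap down from the 103424 MiB base device is presumably what FTL reserves for band metadata and overprovisioning.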
00:16:26.821 8892.00 IOPS, 34.73 MiB/s 8992.00 IOPS, 35.12 MiB/s 9039.00 IOPS, 35.31 MiB/s 9034.75 IOPS, 35.29 MiB/s
00:16:26.821 Latency(us)
00:16:26.821 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:26.821 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:16:26.821 Verification LBA range: start 0x0 length 0x1400000
00:16:26.821 ftl0 : 4.01 9046.21 35.34 0.00 0.00 14108.19 217.40 26012.75
00:16:26.821 ===================================================================================================================
00:16:26.821 Total : 9046.21 35.34 0.00 0.00 14108.19 0.00 26012.75
00:16:26.821 {
00:16:26.821 "results": [
00:16:26.821 {
00:16:26.821 "job": "ftl0",
00:16:26.821 "core_mask": "0x1",
00:16:26.821 "workload": "verify",
00:16:26.821 "status": "finished",
00:16:26.821 "verify_range": {
00:16:26.821 "start": 0,
00:16:26.821 "length": 20971520
00:16:26.821 },
00:16:26.821 "queue_depth": 128,
00:16:26.821 "io_size": 4096,
00:16:26.821 "runtime": 4.008972,
00:16:26.821 "iops": 9046.209352422517,
00:16:26.821 "mibps": 35.336755282900455,
00:16:26.821 "io_failed": 0,
00:16:26.821 "io_timeout": 0,
00:16:26.821 "avg_latency_us": 14108.185553410907,
00:16:26.821 "min_latency_us": 217.40307692307692,
00:16:26.821 "max_latency_us": 26012.75076923077
00:16:26.821 }
00:16:26.821 ],
00:16:26.821 "core_count": 1
00:16:26.821 }
00:16:26.821 [2024-09-28 23:37:14.919628] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
00:16:26.821 23:37:14 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0
00:16:27.080 [2024-09-28 23:37:15.121545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:27.080 [2024-09-28 23:37:15.121588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:16:27.080 [2024-09-28 23:37:15.121601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:16:27.080 [2024-09-28 23:37:15.121611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:27.080 [2024-09-28 23:37:15.121631] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:16:27.080 [2024-09-28 23:37:15.124192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:27.080 [2024-09-28 23:37:15.124218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:16:27.080 [2024-09-28 23:37:15.124230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.545 ms
00:16:27.080 [2024-09-28 23:37:15.124237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:27.080 [2024-09-28 23:37:15.125985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:27.080 [2024-09-28 23:37:15.126011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:16:27.080 [2024-09-28 23:37:15.126025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.723 ms
00:16:27.080 [2024-09-28 23:37:15.126032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:27.340 [2024-09-28 23:37:15.266991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:27.340 [2024-09-28 23:37:15.267025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:16:27.340 [2024-09-28 23:37:15.267040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 140.939 ms
00:16:27.340 [2024-09-28
23:37:15.267048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:27.340 [2024-09-28 23:37:15.273224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:27.340 [2024-09-28 23:37:15.273254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:16:27.340 [2024-09-28 23:37:15.273265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.146 ms 00:16:27.340 [2024-09-28 23:37:15.273272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:27.340 [2024-09-28 23:37:15.296736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:27.340 [2024-09-28 23:37:15.296879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:27.340 [2024-09-28 23:37:15.296898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.410 ms 00:16:27.340 [2024-09-28 23:37:15.296906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:27.340 [2024-09-28 23:37:15.311682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:27.340 [2024-09-28 23:37:15.311807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:27.340 [2024-09-28 23:37:15.311827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.746 ms 00:16:27.340 [2024-09-28 23:37:15.311835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:27.340 [2024-09-28 23:37:15.311967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:27.340 [2024-09-28 23:37:15.311979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:27.340 [2024-09-28 23:37:15.311992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:16:27.340 [2024-09-28 23:37:15.312001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:27.340 [2024-09-28 23:37:15.334836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:27.340 [2024-09-28 23:37:15.334962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:16:27.340 [2024-09-28 23:37:15.334979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.818 ms 00:16:27.340 [2024-09-28 23:37:15.334987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:27.340 [2024-09-28 23:37:15.357580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:27.340 [2024-09-28 23:37:15.357690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:16:27.340 [2024-09-28 23:37:15.357708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.563 ms 00:16:27.340 [2024-09-28 23:37:15.357715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:27.340 [2024-09-28 23:37:15.380013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:27.340 [2024-09-28 23:37:15.380043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:27.340 [2024-09-28 23:37:15.380054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.268 ms 00:16:27.340 [2024-09-28 23:37:15.380061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:27.340 [2024-09-28 23:37:15.402416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:27.340 [2024-09-28 23:37:15.402445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:27.340 [2024-09-28 23:37:15.402458] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.294 ms 00:16:27.340 [2024-09-28 23:37:15.402465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:27.340 [2024-09-28 23:37:15.402495] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:27.340 [2024-09-28 23:37:15.402524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:27.340 [2024-09-28 23:37:15.402536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:27.340 [2024-09-28 23:37:15.402544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:27.340 [2024-09-28 23:37:15.402554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:27.340 [2024-09-28 23:37:15.402561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:27.340 [2024-09-28 23:37:15.402570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:27.340 [2024-09-28 23:37:15.402577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:27.340 [2024-09-28 23:37:15.402586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:27.340 [2024-09-28 23:37:15.402594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:27.340 [2024-09-28 23:37:15.402603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:27.340 [2024-09-28 23:37:15.402611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:27.340 [2024-09-28 23:37:15.402619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:27.340 [2024-09-28 23:37:15.402627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:27.340 [2024-09-28 23:37:15.402637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:27.340 [2024-09-28 23:37:15.402644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:27.340 [2024-09-28 23:37:15.402654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:27.340 [2024-09-28 23:37:15.402661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:27.340 [2024-09-28 23:37:15.402671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:27.340 [2024-09-28 23:37:15.402679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:27.340 [2024-09-28 23:37:15.402687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:27.340 [2024-09-28 23:37:15.402695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:27.340 [2024-09-28 23:37:15.402703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:27.340 [2024-09-28 23:37:15.402710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 
state: free 00:16:27.340 [2024-09-28 23:37:15.402719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:27.340 [2024-09-28 23:37:15.402727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:27.340 [2024-09-28 23:37:15.402736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:27.340 [2024-09-28 23:37:15.402744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:27.340 [2024-09-28 23:37:15.402753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:27.340 [2024-09-28 23:37:15.402760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:27.340 [2024-09-28 23:37:15.402771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:27.340 [2024-09-28 23:37:15.402778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:27.340 [2024-09-28 23:37:15.402787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:27.340 [2024-09-28 23:37:15.402795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:27.340 [2024-09-28 23:37:15.402805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:27.340 [2024-09-28 23:37:15.402812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:27.340 [2024-09-28 23:37:15.402820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:27.340 [2024-09-28 23:37:15.402828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:27.340 [2024-09-28 23:37:15.402837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:27.340 [2024-09-28 23:37:15.402844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:27.340 [2024-09-28 23:37:15.402852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:27.340 [2024-09-28 23:37:15.402859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:27.340 [2024-09-28 23:37:15.402868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:27.341 [2024-09-28 23:37:15.402875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:27.341 [2024-09-28 23:37:15.402884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:27.341 [2024-09-28 23:37:15.402898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:27.341 [2024-09-28 23:37:15.402909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:27.341 [2024-09-28 23:37:15.402921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:27.341 [2024-09-28 23:37:15.402930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 
0 / 261120 wr_cnt: 0 state: free 00:16:27.341 [2024-09-28 23:37:15.402937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:27.341 [2024-09-28 23:37:15.402949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:27.341 [2024-09-28 23:37:15.402957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:27.341 [2024-09-28 23:37:15.402966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:27.341 [2024-09-28 23:37:15.402976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:27.341 [2024-09-28 23:37:15.402985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:27.341 [2024-09-28 23:37:15.402993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:27.341 [2024-09-28 23:37:15.403002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:27.341 [2024-09-28 23:37:15.403014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:27.341 [2024-09-28 23:37:15.403023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:27.341 [2024-09-28 23:37:15.403030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:27.341 [2024-09-28 23:37:15.403039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:27.341 [2024-09-28 23:37:15.403046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:27.341 [2024-09-28 23:37:15.403057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:27.341 [2024-09-28 23:37:15.403064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:27.341 [2024-09-28 23:37:15.403072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:27.341 [2024-09-28 23:37:15.403080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:27.341 [2024-09-28 23:37:15.403092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:27.341 [2024-09-28 23:37:15.403099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:27.341 [2024-09-28 23:37:15.403108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:27.341 [2024-09-28 23:37:15.403116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:27.341 [2024-09-28 23:37:15.403126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:27.341 [2024-09-28 23:37:15.403133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:27.341 [2024-09-28 23:37:15.403142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:27.341 [2024-09-28 23:37:15.403150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:27.341 [2024-09-28 23:37:15.403158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:27.341 [2024-09-28 23:37:15.403165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:27.341 [2024-09-28 23:37:15.403174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:27.341 [2024-09-28 23:37:15.403181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:27.341 [2024-09-28 23:37:15.403191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:27.341 [2024-09-28 23:37:15.403198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:27.341 [2024-09-28 23:37:15.403207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:27.341 [2024-09-28 23:37:15.403215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:27.341 [2024-09-28 23:37:15.403224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:27.341 [2024-09-28 23:37:15.403231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:27.341 [2024-09-28 23:37:15.403240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:27.341 [2024-09-28 23:37:15.403247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:27.341 [2024-09-28 23:37:15.403256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:27.341 [2024-09-28 23:37:15.403264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:27.341 [2024-09-28 23:37:15.403273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:27.341 [2024-09-28 23:37:15.403280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:27.341 [2024-09-28 23:37:15.403289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:27.341 [2024-09-28 23:37:15.403296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:27.341 [2024-09-28 23:37:15.403305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:27.341 [2024-09-28 23:37:15.403312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:27.341 [2024-09-28 23:37:15.403323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:27.341 [2024-09-28 23:37:15.403330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:27.341 [2024-09-28 23:37:15.403340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:27.341 [2024-09-28 23:37:15.403347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:27.341 [2024-09-28 23:37:15.403358] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:27.341 [2024-09-28 23:37:15.403366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:27.341 [2024-09-28 23:37:15.403375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:27.341 [2024-09-28 23:37:15.403391] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:27.341 [2024-09-28 23:37:15.403400] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2f1c8c09-ef3d-4f9c-9b2c-8fcc7da09a3b 00:16:27.341 [2024-09-28 23:37:15.403407] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:27.341 [2024-09-28 23:37:15.403416] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:27.341 [2024-09-28 23:37:15.403423] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:27.341 [2024-09-28 23:37:15.403431] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:27.341 [2024-09-28 23:37:15.403439] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:27.341 [2024-09-28 23:37:15.403448] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:27.341 [2024-09-28 23:37:15.403455] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:27.341 [2024-09-28 23:37:15.403464] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:27.341 [2024-09-28 23:37:15.403470] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:27.341 [2024-09-28 23:37:15.403478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:27.341 [2024-09-28 23:37:15.403485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:27.341 [2024-09-28 23:37:15.403497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.985 ms 00:16:27.341 [2024-09-28 23:37:15.403504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:27.341 [2024-09-28 23:37:15.415691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:27.341 [2024-09-28 23:37:15.415793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:27.341 [2024-09-28 23:37:15.415872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.149 ms 00:16:27.341 [2024-09-28 23:37:15.415896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:27.341 [2024-09-28 23:37:15.416256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:27.341 [2024-09-28 23:37:15.416291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:27.341 [2024-09-28 23:37:15.416349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.315 ms 00:16:27.342 [2024-09-28 23:37:15.416371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:27.342 [2024-09-28 23:37:15.446154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:27.342 [2024-09-28 23:37:15.446273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:27.342 [2024-09-28 23:37:15.446328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:27.342 [2024-09-28 23:37:15.446350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:27.342 [2024-09-28 23:37:15.446417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:16:27.342 [2024-09-28 23:37:15.446439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:27.342 [2024-09-28 23:37:15.446461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:27.342 [2024-09-28 23:37:15.446479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:27.342 [2024-09-28 23:37:15.446580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:27.342 [2024-09-28 23:37:15.446607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:27.342 [2024-09-28 23:37:15.446630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:27.342 [2024-09-28 23:37:15.446694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:27.342 [2024-09-28 23:37:15.446728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:27.342 [2024-09-28 23:37:15.446748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:27.342 [2024-09-28 23:37:15.446771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:27.342 [2024-09-28 23:37:15.446790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:27.601 [2024-09-28 23:37:15.523099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:27.601 [2024-09-28 23:37:15.523224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:27.601 [2024-09-28 23:37:15.523302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:27.601 [2024-09-28 23:37:15.523325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:27.601 [2024-09-28 23:37:15.585840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:27.601 [2024-09-28 23:37:15.585972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:27.601 [2024-09-28 23:37:15.586026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:27.601 [2024-09-28 23:37:15.586049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:27.601 [2024-09-28 23:37:15.586126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:27.601 [2024-09-28 23:37:15.586149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:27.601 [2024-09-28 23:37:15.586170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:27.601 [2024-09-28 23:37:15.586188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:27.601 [2024-09-28 23:37:15.586266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:27.601 [2024-09-28 23:37:15.586291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:27.601 [2024-09-28 23:37:15.586313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:27.601 [2024-09-28 23:37:15.586379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:27.601 [2024-09-28 23:37:15.586486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:27.601 [2024-09-28 23:37:15.586568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:27.601 [2024-09-28 23:37:15.586597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:27.601 [2024-09-28 23:37:15.586641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:27.601 [2024-09-28 
23:37:15.586694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:27.601 [2024-09-28 23:37:15.586717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:27.601 [2024-09-28 23:37:15.586738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:27.601 [2024-09-28 23:37:15.586757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:27.601 [2024-09-28 23:37:15.586805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:27.601 [2024-09-28 23:37:15.586900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:27.601 [2024-09-28 23:37:15.586955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:27.601 [2024-09-28 23:37:15.586973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:27.601 [2024-09-28 23:37:15.587037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:27.601 [2024-09-28 23:37:15.587061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:27.601 [2024-09-28 23:37:15.587082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:27.601 [2024-09-28 23:37:15.587102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:27.601 [2024-09-28 23:37:15.587282] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 465.695 ms, result 0 00:16:27.601 true 00:16:27.601 23:37:15 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 73668 00:16:27.601 23:37:15 ftl.ftl_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 73668 ']' 00:16:27.601 23:37:15 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # kill -0 73668 00:16:27.601 23:37:15 ftl.ftl_bdevperf -- common/autotest_common.sh@955 -- # uname 00:16:27.601 23:37:15 ftl.ftl_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:27.601 23:37:15 ftl.ftl_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73668 00:16:27.601 killing process with pid 73668 00:16:27.601 Received shutdown signal, test time was about 4.000000 seconds 00:16:27.601 00:16:27.601 Latency(us) 00:16:27.601 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:27.601 =================================================================================================================== 00:16:27.601 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:27.601 23:37:15 ftl.ftl_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:27.601 23:37:15 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:27.601 23:37:15 ftl.ftl_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73668' 00:16:27.601 23:37:15 ftl.ftl_bdevperf -- common/autotest_common.sh@969 -- # kill 73668 00:16:27.601 23:37:15 ftl.ftl_bdevperf -- common/autotest_common.sh@974 -- # wait 73668 00:16:32.867 Remove shared memory files 00:16:32.867 23:37:21 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:16:32.867 23:37:21 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:16:32.867 23:37:21 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:16:32.867 23:37:21 ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:16:33.126 23:37:21 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:16:33.126 23:37:21 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:16:33.126 
23:37:21 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:16:33.126 23:37:21 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:16:33.126 ************************************ 00:16:33.126 END TEST ftl_bdevperf 00:16:33.126 ************************************ 00:16:33.126 00:16:33.126 real 0m24.826s 00:16:33.126 user 0m27.369s 00:16:33.126 sys 0m0.844s 00:16:33.126 23:37:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:33.126 23:37:21 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:33.126 23:37:21 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:16:33.126 23:37:21 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:16:33.126 23:37:21 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:33.126 23:37:21 ftl -- common/autotest_common.sh@10 -- # set +x 00:16:33.126 ************************************ 00:16:33.126 START TEST ftl_trim 00:16:33.126 ************************************ 00:16:33.126 23:37:21 ftl.ftl_trim -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:16:33.126 * Looking for test storage... 00:16:33.126 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:16:33.126 23:37:21 ftl.ftl_trim -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:33.126 23:37:21 ftl.ftl_trim -- common/autotest_common.sh@1681 -- # lcov --version 00:16:33.126 23:37:21 ftl.ftl_trim -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:33.126 23:37:21 ftl.ftl_trim -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:33.126 23:37:21 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:33.126 23:37:21 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:33.126 23:37:21 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:33.126 23:37:21 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:16:33.126 23:37:21 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:16:33.126 23:37:21 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:16:33.126 23:37:21 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:16:33.126 23:37:21 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:16:33.126 23:37:21 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:16:33.126 23:37:21 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:16:33.126 23:37:21 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:33.126 23:37:21 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:16:33.126 23:37:21 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:16:33.126 23:37:21 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:33.126 23:37:21 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:33.126 23:37:21 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:16:33.126 23:37:21 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:16:33.126 23:37:21 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:33.126 23:37:21 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:16:33.126 23:37:21 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:16:33.126 23:37:21 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:16:33.126 23:37:21 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:16:33.126 23:37:21 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:33.126 23:37:21 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:16:33.126 23:37:21 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:16:33.126 23:37:21 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:33.126 23:37:21 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:33.126 23:37:21 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:16:33.126 23:37:21 ftl.ftl_trim -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:33.126 23:37:21 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:33.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:33.126 --rc genhtml_branch_coverage=1 00:16:33.126 --rc genhtml_function_coverage=1 00:16:33.126 --rc genhtml_legend=1 00:16:33.126 --rc geninfo_all_blocks=1 00:16:33.126 --rc geninfo_unexecuted_blocks=1 00:16:33.126 00:16:33.126 ' 00:16:33.127 23:37:21 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:33.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:33.127 --rc genhtml_branch_coverage=1 00:16:33.127 --rc genhtml_function_coverage=1 00:16:33.127 --rc genhtml_legend=1 00:16:33.127 --rc geninfo_all_blocks=1 00:16:33.127 --rc geninfo_unexecuted_blocks=1 00:16:33.127 00:16:33.127 ' 00:16:33.127 23:37:21 ftl.ftl_trim -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:33.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:33.127 --rc genhtml_branch_coverage=1 00:16:33.127 --rc genhtml_function_coverage=1 00:16:33.127 --rc genhtml_legend=1 00:16:33.127 --rc geninfo_all_blocks=1 00:16:33.127 --rc geninfo_unexecuted_blocks=1 00:16:33.127 00:16:33.127 ' 00:16:33.127 23:37:21 ftl.ftl_trim -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:33.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:33.127 --rc genhtml_branch_coverage=1 00:16:33.127 --rc genhtml_function_coverage=1 00:16:33.127 --rc genhtml_legend=1 00:16:33.127 --rc geninfo_all_blocks=1 00:16:33.127 --rc geninfo_unexecuted_blocks=1 00:16:33.127 00:16:33.127 ' 00:16:33.127 23:37:21 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:16:33.127 23:37:21 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:16:33.127 23:37:21 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:16:33.127 23:37:21 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:16:33.127 23:37:21 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:16:33.127 23:37:21 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:33.127 23:37:21 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:33.127 23:37:21 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:16:33.127 23:37:21 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:16:33.127 23:37:21 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:33.127 23:37:21 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:33.127 23:37:21 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:16:33.127 23:37:21 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:16:33.127 23:37:21 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:33.127 23:37:21 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:33.127 23:37:21 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:16:33.127 23:37:21 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:16:33.127 23:37:21 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:33.127 23:37:21 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:33.127 23:37:21 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:16:33.127 23:37:21 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:16:33.127 23:37:21 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:33.127 23:37:21 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:33.127 23:37:21 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:33.127 23:37:21 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:33.127 23:37:21 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:16:33.127 23:37:21 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:16:33.127 23:37:21 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:33.127 23:37:21 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:33.127 23:37:21 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:33.127 23:37:21 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:16:33.127 23:37:21 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:16:33.127 23:37:21 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:16:33.127 23:37:21 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:16:33.127 23:37:21 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:16:33.127 23:37:21 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:16:33.127 23:37:21 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:16:33.127 23:37:21 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:16:33.127 23:37:21 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:33.127 23:37:21 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:33.127 23:37:21 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:16:33.127 23:37:21 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=74010 00:16:33.127 23:37:21 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 74010 00:16:33.127 23:37:21 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:16:33.127 23:37:21 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 74010 ']' 00:16:33.127 23:37:21 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:33.127 23:37:21 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:33.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:33.127 23:37:21 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:33.127 23:37:21 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:33.127 23:37:21 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:16:33.386 [2024-09-28 23:37:21.329717] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:16:33.386 [2024-09-28 23:37:21.330042] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74010 ] 00:16:33.386 [2024-09-28 23:37:21.477681] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:33.644 [2024-09-28 23:37:21.656996] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:16:33.644 [2024-09-28 23:37:21.657238] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:16:33.644 [2024-09-28 23:37:21.657344] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:34.210 23:37:22 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:34.210 23:37:22 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:16:34.210 23:37:22 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:16:34.210 23:37:22 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:16:34.210 23:37:22 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:16:34.210 23:37:22 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:16:34.210 23:37:22 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:16:34.210 23:37:22 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:16:34.468 23:37:22 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:16:34.468 23:37:22 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:16:34.468 23:37:22 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:16:34.468 23:37:22 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:16:34.468 23:37:22 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:34.468 23:37:22 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:16:34.468 23:37:22 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:16:34.468 23:37:22 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:16:34.726 23:37:22 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:34.726 { 00:16:34.726 "name": "nvme0n1", 00:16:34.726 "aliases": [ 
00:16:34.726 "df1e34bd-e0a8-4cac-a9a2-a5b5e8a021b3" 00:16:34.726 ], 00:16:34.726 "product_name": "NVMe disk", 00:16:34.726 "block_size": 4096, 00:16:34.726 "num_blocks": 1310720, 00:16:34.726 "uuid": "df1e34bd-e0a8-4cac-a9a2-a5b5e8a021b3", 00:16:34.726 "numa_id": -1, 00:16:34.726 "assigned_rate_limits": { 00:16:34.726 "rw_ios_per_sec": 0, 00:16:34.726 "rw_mbytes_per_sec": 0, 00:16:34.726 "r_mbytes_per_sec": 0, 00:16:34.726 "w_mbytes_per_sec": 0 00:16:34.726 }, 00:16:34.726 "claimed": true, 00:16:34.726 "claim_type": "read_many_write_one", 00:16:34.726 "zoned": false, 00:16:34.726 "supported_io_types": { 00:16:34.726 "read": true, 00:16:34.726 "write": true, 00:16:34.726 "unmap": true, 00:16:34.726 "flush": true, 00:16:34.726 "reset": true, 00:16:34.726 "nvme_admin": true, 00:16:34.726 "nvme_io": true, 00:16:34.726 "nvme_io_md": false, 00:16:34.726 "write_zeroes": true, 00:16:34.726 "zcopy": false, 00:16:34.726 "get_zone_info": false, 00:16:34.726 "zone_management": false, 00:16:34.726 "zone_append": false, 00:16:34.726 "compare": true, 00:16:34.726 "compare_and_write": false, 00:16:34.726 "abort": true, 00:16:34.726 "seek_hole": false, 00:16:34.726 "seek_data": false, 00:16:34.726 "copy": true, 00:16:34.726 "nvme_iov_md": false 00:16:34.726 }, 00:16:34.726 "driver_specific": { 00:16:34.726 "nvme": [ 00:16:34.726 { 00:16:34.727 "pci_address": "0000:00:11.0", 00:16:34.727 "trid": { 00:16:34.727 "trtype": "PCIe", 00:16:34.727 "traddr": "0000:00:11.0" 00:16:34.727 }, 00:16:34.727 "ctrlr_data": { 00:16:34.727 "cntlid": 0, 00:16:34.727 "vendor_id": "0x1b36", 00:16:34.727 "model_number": "QEMU NVMe Ctrl", 00:16:34.727 "serial_number": "12341", 00:16:34.727 "firmware_revision": "8.0.0", 00:16:34.727 "subnqn": "nqn.2019-08.org.qemu:12341", 00:16:34.727 "oacs": { 00:16:34.727 "security": 0, 00:16:34.727 "format": 1, 00:16:34.727 "firmware": 0, 00:16:34.727 "ns_manage": 1 00:16:34.727 }, 00:16:34.727 "multi_ctrlr": false, 00:16:34.727 "ana_reporting": false 00:16:34.727 }, 00:16:34.727 "vs": { 00:16:34.727 "nvme_version": "1.4" 00:16:34.727 }, 00:16:34.727 "ns_data": { 00:16:34.727 "id": 1, 00:16:34.727 "can_share": false 00:16:34.727 } 00:16:34.727 } 00:16:34.727 ], 00:16:34.727 "mp_policy": "active_passive" 00:16:34.727 } 00:16:34.727 } 00:16:34.727 ]' 00:16:34.727 23:37:22 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:34.727 23:37:22 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:16:34.727 23:37:22 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:34.727 23:37:22 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=1310720 00:16:34.727 23:37:22 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:16:34.727 23:37:22 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 5120 00:16:34.727 23:37:22 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:16:34.727 23:37:22 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:16:34.727 23:37:22 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:16:34.727 23:37:22 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:16:34.727 23:37:22 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:16:34.986 23:37:22 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=c0e67e68-4a98-4713-b92d-512bc7212ede 00:16:34.986 23:37:22 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:16:34.986 23:37:22 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u c0e67e68-4a98-4713-b92d-512bc7212ede 00:16:35.244 23:37:23 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:16:35.244 23:37:23 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=b000bb5f-0b3c-41d8-980f-4d958a3a2bbf 00:16:35.244 23:37:23 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u b000bb5f-0b3c-41d8-980f-4d958a3a2bbf 00:16:35.502 23:37:23 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=05204f42-8aaf-4916-b3f0-59f181bbf21d 00:16:35.502 23:37:23 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 05204f42-8aaf-4916-b3f0-59f181bbf21d 00:16:35.502 23:37:23 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:16:35.502 23:37:23 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:16:35.502 23:37:23 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=05204f42-8aaf-4916-b3f0-59f181bbf21d 00:16:35.502 23:37:23 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:16:35.502 23:37:23 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size 05204f42-8aaf-4916-b3f0-59f181bbf21d 00:16:35.502 23:37:23 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=05204f42-8aaf-4916-b3f0-59f181bbf21d 00:16:35.502 23:37:23 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:35.502 23:37:23 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:16:35.502 23:37:23 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:16:35.502 23:37:23 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 05204f42-8aaf-4916-b3f0-59f181bbf21d 00:16:35.759 23:37:23 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:35.759 { 00:16:35.759 "name": "05204f42-8aaf-4916-b3f0-59f181bbf21d", 00:16:35.759 "aliases": [ 00:16:35.759 "lvs/nvme0n1p0" 00:16:35.759 ], 00:16:35.759 "product_name": "Logical Volume", 00:16:35.759 "block_size": 4096, 00:16:35.759 "num_blocks": 26476544, 00:16:35.759 "uuid": "05204f42-8aaf-4916-b3f0-59f181bbf21d", 00:16:35.759 "assigned_rate_limits": { 00:16:35.759 "rw_ios_per_sec": 0, 00:16:35.759 "rw_mbytes_per_sec": 0, 00:16:35.759 "r_mbytes_per_sec": 0, 00:16:35.759 "w_mbytes_per_sec": 0 00:16:35.759 }, 00:16:35.759 "claimed": false, 00:16:35.759 "zoned": false, 00:16:35.759 "supported_io_types": { 00:16:35.759 "read": true, 00:16:35.759 "write": true, 00:16:35.759 "unmap": true, 00:16:35.759 "flush": false, 00:16:35.759 "reset": true, 00:16:35.759 "nvme_admin": false, 00:16:35.759 "nvme_io": false, 00:16:35.759 "nvme_io_md": false, 00:16:35.759 "write_zeroes": true, 00:16:35.759 "zcopy": false, 00:16:35.759 "get_zone_info": false, 00:16:35.759 "zone_management": false, 00:16:35.759 "zone_append": false, 00:16:35.759 "compare": false, 00:16:35.759 "compare_and_write": false, 00:16:35.759 "abort": false, 00:16:35.759 "seek_hole": true, 00:16:35.759 "seek_data": true, 00:16:35.759 "copy": false, 00:16:35.759 "nvme_iov_md": false 00:16:35.759 }, 00:16:35.759 "driver_specific": { 00:16:35.759 "lvol": { 00:16:35.759 "lvol_store_uuid": "b000bb5f-0b3c-41d8-980f-4d958a3a2bbf", 00:16:35.759 "base_bdev": "nvme0n1", 00:16:35.759 "thin_provision": true, 00:16:35.759 "num_allocated_clusters": 0, 00:16:35.759 "snapshot": false, 00:16:35.759 "clone": false, 00:16:35.759 "esnap_clone": false 00:16:35.759 } 00:16:35.759 } 00:16:35.759 } 00:16:35.759 ]' 00:16:35.759 23:37:23 ftl.ftl_trim -- 
common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:35.759 23:37:23 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:16:35.759 23:37:23 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:35.759 23:37:23 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:16:35.759 23:37:23 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:35.759 23:37:23 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:16:35.759 23:37:23 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:16:35.759 23:37:23 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:16:35.759 23:37:23 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:16:36.017 23:37:24 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:16:36.017 23:37:24 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:16:36.017 23:37:24 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size 05204f42-8aaf-4916-b3f0-59f181bbf21d 00:16:36.017 23:37:24 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=05204f42-8aaf-4916-b3f0-59f181bbf21d 00:16:36.017 23:37:24 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:36.017 23:37:24 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:16:36.017 23:37:24 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:16:36.017 23:37:24 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 05204f42-8aaf-4916-b3f0-59f181bbf21d 00:16:36.275 23:37:24 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:36.275 { 00:16:36.275 "name": "05204f42-8aaf-4916-b3f0-59f181bbf21d", 00:16:36.275 "aliases": [ 00:16:36.275 "lvs/nvme0n1p0" 00:16:36.275 ], 00:16:36.275 "product_name": "Logical Volume", 00:16:36.275 "block_size": 4096, 00:16:36.275 "num_blocks": 26476544, 00:16:36.275 "uuid": "05204f42-8aaf-4916-b3f0-59f181bbf21d", 00:16:36.275 "assigned_rate_limits": { 00:16:36.275 "rw_ios_per_sec": 0, 00:16:36.275 "rw_mbytes_per_sec": 0, 00:16:36.275 "r_mbytes_per_sec": 0, 00:16:36.275 "w_mbytes_per_sec": 0 00:16:36.275 }, 00:16:36.275 "claimed": false, 00:16:36.275 "zoned": false, 00:16:36.275 "supported_io_types": { 00:16:36.275 "read": true, 00:16:36.275 "write": true, 00:16:36.275 "unmap": true, 00:16:36.275 "flush": false, 00:16:36.275 "reset": true, 00:16:36.275 "nvme_admin": false, 00:16:36.275 "nvme_io": false, 00:16:36.275 "nvme_io_md": false, 00:16:36.275 "write_zeroes": true, 00:16:36.275 "zcopy": false, 00:16:36.275 "get_zone_info": false, 00:16:36.275 "zone_management": false, 00:16:36.275 "zone_append": false, 00:16:36.275 "compare": false, 00:16:36.275 "compare_and_write": false, 00:16:36.275 "abort": false, 00:16:36.275 "seek_hole": true, 00:16:36.275 "seek_data": true, 00:16:36.275 "copy": false, 00:16:36.275 "nvme_iov_md": false 00:16:36.275 }, 00:16:36.275 "driver_specific": { 00:16:36.275 "lvol": { 00:16:36.275 "lvol_store_uuid": "b000bb5f-0b3c-41d8-980f-4d958a3a2bbf", 00:16:36.275 "base_bdev": "nvme0n1", 00:16:36.275 "thin_provision": true, 00:16:36.275 "num_allocated_clusters": 0, 00:16:36.275 "snapshot": false, 00:16:36.275 "clone": false, 00:16:36.275 "esnap_clone": false 00:16:36.275 } 00:16:36.275 } 00:16:36.275 } 00:16:36.275 ]' 00:16:36.275 23:37:24 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:36.275 23:37:24 ftl.ftl_trim -- 
common/autotest_common.sh@1383 -- # bs=4096 00:16:36.275 23:37:24 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:36.275 23:37:24 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:16:36.275 23:37:24 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:36.275 23:37:24 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:16:36.275 23:37:24 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:16:36.275 23:37:24 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:16:36.534 23:37:24 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:16:36.534 23:37:24 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:16:36.534 23:37:24 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size 05204f42-8aaf-4916-b3f0-59f181bbf21d 00:16:36.534 23:37:24 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=05204f42-8aaf-4916-b3f0-59f181bbf21d 00:16:36.534 23:37:24 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:36.534 23:37:24 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:16:36.534 23:37:24 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:16:36.534 23:37:24 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 05204f42-8aaf-4916-b3f0-59f181bbf21d 00:16:36.791 23:37:24 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:36.791 { 00:16:36.791 "name": "05204f42-8aaf-4916-b3f0-59f181bbf21d", 00:16:36.791 "aliases": [ 00:16:36.791 "lvs/nvme0n1p0" 00:16:36.791 ], 00:16:36.791 "product_name": "Logical Volume", 00:16:36.791 "block_size": 4096, 00:16:36.791 "num_blocks": 26476544, 00:16:36.791 "uuid": "05204f42-8aaf-4916-b3f0-59f181bbf21d", 00:16:36.791 "assigned_rate_limits": { 00:16:36.791 "rw_ios_per_sec": 0, 00:16:36.791 "rw_mbytes_per_sec": 0, 00:16:36.791 "r_mbytes_per_sec": 0, 00:16:36.791 "w_mbytes_per_sec": 0 00:16:36.791 }, 00:16:36.791 "claimed": false, 00:16:36.791 "zoned": false, 00:16:36.791 "supported_io_types": { 00:16:36.791 "read": true, 00:16:36.791 "write": true, 00:16:36.791 "unmap": true, 00:16:36.791 "flush": false, 00:16:36.791 "reset": true, 00:16:36.791 "nvme_admin": false, 00:16:36.791 "nvme_io": false, 00:16:36.791 "nvme_io_md": false, 00:16:36.791 "write_zeroes": true, 00:16:36.791 "zcopy": false, 00:16:36.791 "get_zone_info": false, 00:16:36.791 "zone_management": false, 00:16:36.791 "zone_append": false, 00:16:36.791 "compare": false, 00:16:36.791 "compare_and_write": false, 00:16:36.791 "abort": false, 00:16:36.791 "seek_hole": true, 00:16:36.791 "seek_data": true, 00:16:36.791 "copy": false, 00:16:36.791 "nvme_iov_md": false 00:16:36.791 }, 00:16:36.791 "driver_specific": { 00:16:36.791 "lvol": { 00:16:36.791 "lvol_store_uuid": "b000bb5f-0b3c-41d8-980f-4d958a3a2bbf", 00:16:36.791 "base_bdev": "nvme0n1", 00:16:36.791 "thin_provision": true, 00:16:36.791 "num_allocated_clusters": 0, 00:16:36.791 "snapshot": false, 00:16:36.791 "clone": false, 00:16:36.791 "esnap_clone": false 00:16:36.791 } 00:16:36.791 } 00:16:36.791 } 00:16:36.791 ]' 00:16:36.791 23:37:24 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:36.791 23:37:24 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:16:36.791 23:37:24 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:36.791 23:37:24 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # 
nb=26476544 00:16:36.791 23:37:24 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:36.791 23:37:24 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:16:36.791 23:37:24 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:16:36.791 23:37:24 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 05204f42-8aaf-4916-b3f0-59f181bbf21d -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:16:37.050 [2024-09-28 23:37:25.055268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.050 [2024-09-28 23:37:25.055463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:37.050 [2024-09-28 23:37:25.055553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:37.050 [2024-09-28 23:37:25.055574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.050 [2024-09-28 23:37:25.057771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.050 [2024-09-28 23:37:25.057864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:37.050 [2024-09-28 23:37:25.057879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.173 ms 00:16:37.050 [2024-09-28 23:37:25.057885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.050 [2024-09-28 23:37:25.057966] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:37.050 [2024-09-28 23:37:25.058563] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:37.050 [2024-09-28 23:37:25.058584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.050 [2024-09-28 23:37:25.058591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:37.050 [2024-09-28 23:37:25.058599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.626 ms 00:16:37.050 [2024-09-28 23:37:25.058606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.050 [2024-09-28 23:37:25.058695] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 654be1da-fb74-4145-8224-07c374972a2a 00:16:37.050 [2024-09-28 23:37:25.059619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.050 [2024-09-28 23:37:25.059647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:16:37.050 [2024-09-28 23:37:25.059656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:16:37.050 [2024-09-28 23:37:25.059663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.050 [2024-09-28 23:37:25.064333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.050 [2024-09-28 23:37:25.064360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:37.050 [2024-09-28 23:37:25.064367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.613 ms 00:16:37.050 [2024-09-28 23:37:25.064375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.050 [2024-09-28 23:37:25.064470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.050 [2024-09-28 23:37:25.064483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:37.050 [2024-09-28 23:37:25.064490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.057 ms 00:16:37.050 [2024-09-28 23:37:25.064501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.050 [2024-09-28 23:37:25.064536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.050 [2024-09-28 23:37:25.064544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:37.050 [2024-09-28 23:37:25.064550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:37.050 [2024-09-28 23:37:25.064558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.050 [2024-09-28 23:37:25.064582] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:37.050 [2024-09-28 23:37:25.067404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.050 [2024-09-28 23:37:25.067430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:37.050 [2024-09-28 23:37:25.067438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.825 ms 00:16:37.050 [2024-09-28 23:37:25.067444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.050 [2024-09-28 23:37:25.067478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.050 [2024-09-28 23:37:25.067485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:37.050 [2024-09-28 23:37:25.067493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:16:37.050 [2024-09-28 23:37:25.067500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.050 [2024-09-28 23:37:25.067540] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:16:37.050 [2024-09-28 23:37:25.067641] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:37.050 [2024-09-28 23:37:25.067654] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:37.050 [2024-09-28 23:37:25.067673] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:16:37.050 [2024-09-28 23:37:25.067684] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:37.050 [2024-09-28 23:37:25.067691] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:37.050 [2024-09-28 23:37:25.067698] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:37.050 [2024-09-28 23:37:25.067704] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:37.050 [2024-09-28 23:37:25.067711] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:37.050 [2024-09-28 23:37:25.067716] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:37.050 [2024-09-28 23:37:25.067724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.050 [2024-09-28 23:37:25.067729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:37.050 [2024-09-28 23:37:25.067736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.185 ms 00:16:37.050 [2024-09-28 23:37:25.067741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.050 [2024-09-28 23:37:25.067812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.050 
[2024-09-28 23:37:25.067821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:37.050 [2024-09-28 23:37:25.067828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:16:37.050 [2024-09-28 23:37:25.067833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.050 [2024-09-28 23:37:25.067927] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:37.050 [2024-09-28 23:37:25.067935] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:37.050 [2024-09-28 23:37:25.067942] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:37.050 [2024-09-28 23:37:25.067948] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:37.050 [2024-09-28 23:37:25.067955] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:37.050 [2024-09-28 23:37:25.067960] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:37.050 [2024-09-28 23:37:25.067967] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:37.050 [2024-09-28 23:37:25.067972] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:37.050 [2024-09-28 23:37:25.067978] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:37.050 [2024-09-28 23:37:25.067983] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:37.050 [2024-09-28 23:37:25.067989] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:37.050 [2024-09-28 23:37:25.067994] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:37.050 [2024-09-28 23:37:25.068000] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:37.050 [2024-09-28 23:37:25.068006] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:37.050 [2024-09-28 23:37:25.068012] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:16:37.050 [2024-09-28 23:37:25.068017] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:37.050 [2024-09-28 23:37:25.068024] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:37.050 [2024-09-28 23:37:25.068029] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:16:37.050 [2024-09-28 23:37:25.068035] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:37.050 [2024-09-28 23:37:25.068043] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:37.050 [2024-09-28 23:37:25.068050] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:37.050 [2024-09-28 23:37:25.068055] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:37.050 [2024-09-28 23:37:25.068063] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:37.050 [2024-09-28 23:37:25.068068] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:37.050 [2024-09-28 23:37:25.068074] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:37.050 [2024-09-28 23:37:25.068079] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:37.050 [2024-09-28 23:37:25.068085] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:37.050 [2024-09-28 23:37:25.068089] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:37.050 [2024-09-28 23:37:25.068095] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:16:37.050 [2024-09-28 23:37:25.068100] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:16:37.050 [2024-09-28 23:37:25.068107] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:37.050 [2024-09-28 23:37:25.068112] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:37.050 [2024-09-28 23:37:25.068119] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:16:37.050 [2024-09-28 23:37:25.068124] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:37.050 [2024-09-28 23:37:25.068131] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:37.050 [2024-09-28 23:37:25.068135] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:16:37.050 [2024-09-28 23:37:25.068142] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:37.050 [2024-09-28 23:37:25.068147] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:37.050 [2024-09-28 23:37:25.068153] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:16:37.050 [2024-09-28 23:37:25.068158] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:37.050 [2024-09-28 23:37:25.068164] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:37.050 [2024-09-28 23:37:25.068169] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:16:37.050 [2024-09-28 23:37:25.068175] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:37.050 [2024-09-28 23:37:25.068180] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:37.050 [2024-09-28 23:37:25.068187] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:37.050 [2024-09-28 23:37:25.068193] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:37.050 [2024-09-28 23:37:25.068200] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:37.050 [2024-09-28 23:37:25.068206] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:37.050 [2024-09-28 23:37:25.068215] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:37.050 [2024-09-28 23:37:25.068220] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:37.050 [2024-09-28 23:37:25.068226] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:37.050 [2024-09-28 23:37:25.068234] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:37.050 [2024-09-28 23:37:25.068240] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:37.050 [2024-09-28 23:37:25.068248] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:37.050 [2024-09-28 23:37:25.068258] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:37.050 [2024-09-28 23:37:25.068265] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:37.050 [2024-09-28 23:37:25.068271] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:16:37.050 [2024-09-28 23:37:25.068277] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:16:37.050 [2024-09-28 23:37:25.068283] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:16:37.050 [2024-09-28 23:37:25.068289] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:16:37.051 [2024-09-28 23:37:25.068295] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:16:37.051 [2024-09-28 23:37:25.068301] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:16:37.051 [2024-09-28 23:37:25.068307] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:16:37.051 [2024-09-28 23:37:25.068314] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:16:37.051 [2024-09-28 23:37:25.068322] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:16:37.051 [2024-09-28 23:37:25.068327] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:16:37.051 [2024-09-28 23:37:25.068334] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:16:37.051 [2024-09-28 23:37:25.068339] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:16:37.051 [2024-09-28 23:37:25.068346] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:16:37.051 [2024-09-28 23:37:25.068352] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:37.051 [2024-09-28 23:37:25.068360] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:37.051 [2024-09-28 23:37:25.068366] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:37.051 [2024-09-28 23:37:25.068373] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:37.051 [2024-09-28 23:37:25.068379] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:37.051 [2024-09-28 23:37:25.068385] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:37.051 [2024-09-28 23:37:25.068391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.051 [2024-09-28 23:37:25.068398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:37.051 [2024-09-28 23:37:25.068403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.518 ms 00:16:37.051 [2024-09-28 23:37:25.068410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.051 [2024-09-28 23:37:25.068486] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:16:37.051 [2024-09-28 23:37:25.068504] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:16:39.578 [2024-09-28 23:37:27.356976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.578 [2024-09-28 23:37:27.357190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:16:39.578 [2024-09-28 23:37:27.357211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2288.483 ms 00:16:39.578 [2024-09-28 23:37:27.357221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.578 [2024-09-28 23:37:27.389222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.578 [2024-09-28 23:37:27.389267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:39.578 [2024-09-28 23:37:27.389280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.765 ms 00:16:39.578 [2024-09-28 23:37:27.389290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.578 [2024-09-28 23:37:27.389430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.578 [2024-09-28 23:37:27.389444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:39.578 [2024-09-28 23:37:27.389452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:16:39.578 [2024-09-28 23:37:27.389463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.578 [2024-09-28 23:37:27.421730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.578 [2024-09-28 23:37:27.421762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:39.578 [2024-09-28 23:37:27.421772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.242 ms 00:16:39.578 [2024-09-28 23:37:27.421781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.578 [2024-09-28 23:37:27.421852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.578 [2024-09-28 23:37:27.421867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:39.578 [2024-09-28 23:37:27.421876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:16:39.578 [2024-09-28 23:37:27.421884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.578 [2024-09-28 23:37:27.422180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.578 [2024-09-28 23:37:27.422196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:39.579 [2024-09-28 23:37:27.422204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.267 ms 00:16:39.579 [2024-09-28 23:37:27.422213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.579 [2024-09-28 23:37:27.422331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.579 [2024-09-28 23:37:27.422341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:39.579 [2024-09-28 23:37:27.422351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:16:39.579 [2024-09-28 23:37:27.422362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.579 [2024-09-28 23:37:27.436260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.579 [2024-09-28 23:37:27.436291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:16:39.579 [2024-09-28 23:37:27.436302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.869 ms 00:16:39.579 [2024-09-28 23:37:27.436311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.579 [2024-09-28 23:37:27.447469] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:39.579 [2024-09-28 23:37:27.461082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.579 [2024-09-28 23:37:27.461112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:39.579 [2024-09-28 23:37:27.461125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.686 ms 00:16:39.579 [2024-09-28 23:37:27.461132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.579 [2024-09-28 23:37:27.529582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.579 [2024-09-28 23:37:27.529741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:16:39.579 [2024-09-28 23:37:27.529765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 68.383 ms 00:16:39.579 [2024-09-28 23:37:27.529773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.579 [2024-09-28 23:37:27.530001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.579 [2024-09-28 23:37:27.530013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:39.579 [2024-09-28 23:37:27.530028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.138 ms 00:16:39.579 [2024-09-28 23:37:27.530035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.579 [2024-09-28 23:37:27.552529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.579 [2024-09-28 23:37:27.552664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:16:39.579 [2024-09-28 23:37:27.552684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.463 ms 00:16:39.579 [2024-09-28 23:37:27.552692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.579 [2024-09-28 23:37:27.574897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.579 [2024-09-28 23:37:27.574926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:16:39.579 [2024-09-28 23:37:27.574938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.151 ms 00:16:39.579 [2024-09-28 23:37:27.574945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.579 [2024-09-28 23:37:27.575530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.579 [2024-09-28 23:37:27.575546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:39.579 [2024-09-28 23:37:27.575556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.532 ms 00:16:39.579 [2024-09-28 23:37:27.575564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.579 [2024-09-28 23:37:27.642704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.579 [2024-09-28 23:37:27.642735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:16:39.579 [2024-09-28 23:37:27.642750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 67.104 ms 00:16:39.579 [2024-09-28 23:37:27.642758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
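The startup trace above follows directly from the parameters passed at trim.sh@49: the "l2p maximum resident size is: 59 (of 60) MiB" line honors --l2p_dram_limit 60, the "NV cache device capacity: 5171.00 MiB" matches the split partition size from ftl/common.sh, and get_bdev_size earlier derived the 103424 MiB base size as block_size * num_blocks / 2^20 = 4096 * 26476544 / 1048576. A condensed sketch of the RPC sequence that builds this stack, restated from the commands already traced in this log (only the shell-variable framing is added here):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Attach the PCIe namespace that backs the write-buffer (NV) cache.
  $RPC bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
  # Carve one 5171 MiB partition out of it; nvc0n1p0 becomes the NV cache.
  $RPC bdev_split_create nvc0n1 -s 5171 1
  # Create the FTL bdev on the thin-provisioned lvol, with the partition as
  # cache, three cores (--core_mask 7), a 60 MiB L2P DRAM budget, and 10%
  # overprovisioning; -t 240 widens the RPC client timeout for the slow create.
  $RPC -t 240 bdev_ftl_create -b ftl0 -d 05204f42-8aaf-4916-b3f0-59f181bbf21d \
      -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10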
00:16:39.579 [2024-09-28 23:37:27.666707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.579 [2024-09-28 23:37:27.666740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:16:39.579 [2024-09-28 23:37:27.666753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.860 ms 00:16:39.579 [2024-09-28 23:37:27.666760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.579 [2024-09-28 23:37:27.689527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.579 [2024-09-28 23:37:27.689558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:16:39.579 [2024-09-28 23:37:27.689570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.711 ms 00:16:39.579 [2024-09-28 23:37:27.689577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.579 [2024-09-28 23:37:27.712639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.579 [2024-09-28 23:37:27.712670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:39.579 [2024-09-28 23:37:27.712682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.995 ms 00:16:39.579 [2024-09-28 23:37:27.712689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.579 [2024-09-28 23:37:27.712747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.579 [2024-09-28 23:37:27.712757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:39.579 [2024-09-28 23:37:27.712770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:39.579 [2024-09-28 23:37:27.712789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.579 [2024-09-28 23:37:27.712861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.579 [2024-09-28 23:37:27.712870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:39.579 [2024-09-28 23:37:27.712882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:16:39.579 [2024-09-28 23:37:27.712889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.579 [2024-09-28 23:37:27.713623] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:39.579 [2024-09-28 23:37:27.716524] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2658.060 ms, result 0 00:16:39.579 [2024-09-28 23:37:27.717213] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:39.579 { 00:16:39.579 "name": "ftl0", 00:16:39.579 "uuid": "654be1da-fb74-4145-8224-07c374972a2a" 00:16:39.579 } 00:16:39.579 23:37:27 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:16:39.579 23:37:27 ftl.ftl_trim -- common/autotest_common.sh@899 -- # local bdev_name=ftl0 00:16:39.579 23:37:27 ftl.ftl_trim -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:39.579 23:37:27 ftl.ftl_trim -- common/autotest_common.sh@901 -- # local i 00:16:39.579 23:37:27 ftl.ftl_trim -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:39.579 23:37:27 ftl.ftl_trim -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:39.579 23:37:27 ftl.ftl_trim -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:39.836 23:37:27 ftl.ftl_trim -- 
common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:16:40.093 [ 00:16:40.093 { 00:16:40.093 "name": "ftl0", 00:16:40.093 "aliases": [ 00:16:40.093 "654be1da-fb74-4145-8224-07c374972a2a" 00:16:40.093 ], 00:16:40.093 "product_name": "FTL disk", 00:16:40.093 "block_size": 4096, 00:16:40.093 "num_blocks": 23592960, 00:16:40.093 "uuid": "654be1da-fb74-4145-8224-07c374972a2a", 00:16:40.093 "assigned_rate_limits": { 00:16:40.093 "rw_ios_per_sec": 0, 00:16:40.093 "rw_mbytes_per_sec": 0, 00:16:40.093 "r_mbytes_per_sec": 0, 00:16:40.093 "w_mbytes_per_sec": 0 00:16:40.093 }, 00:16:40.093 "claimed": false, 00:16:40.093 "zoned": false, 00:16:40.093 "supported_io_types": { 00:16:40.093 "read": true, 00:16:40.093 "write": true, 00:16:40.093 "unmap": true, 00:16:40.093 "flush": true, 00:16:40.093 "reset": false, 00:16:40.093 "nvme_admin": false, 00:16:40.093 "nvme_io": false, 00:16:40.093 "nvme_io_md": false, 00:16:40.093 "write_zeroes": true, 00:16:40.093 "zcopy": false, 00:16:40.093 "get_zone_info": false, 00:16:40.093 "zone_management": false, 00:16:40.093 "zone_append": false, 00:16:40.093 "compare": false, 00:16:40.093 "compare_and_write": false, 00:16:40.093 "abort": false, 00:16:40.093 "seek_hole": false, 00:16:40.093 "seek_data": false, 00:16:40.093 "copy": false, 00:16:40.093 "nvme_iov_md": false 00:16:40.093 }, 00:16:40.093 "driver_specific": { 00:16:40.093 "ftl": { 00:16:40.093 "base_bdev": "05204f42-8aaf-4916-b3f0-59f181bbf21d", 00:16:40.093 "cache": "nvc0n1p0" 00:16:40.093 } 00:16:40.093 } 00:16:40.093 } 00:16:40.093 ] 00:16:40.094 23:37:28 ftl.ftl_trim -- common/autotest_common.sh@907 -- # return 0 00:16:40.094 23:37:28 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:16:40.094 23:37:28 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:16:40.350 23:37:28 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:16:40.350 23:37:28 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:16:40.607 23:37:28 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:16:40.607 { 00:16:40.607 "name": "ftl0", 00:16:40.607 "aliases": [ 00:16:40.607 "654be1da-fb74-4145-8224-07c374972a2a" 00:16:40.607 ], 00:16:40.607 "product_name": "FTL disk", 00:16:40.607 "block_size": 4096, 00:16:40.607 "num_blocks": 23592960, 00:16:40.607 "uuid": "654be1da-fb74-4145-8224-07c374972a2a", 00:16:40.607 "assigned_rate_limits": { 00:16:40.607 "rw_ios_per_sec": 0, 00:16:40.607 "rw_mbytes_per_sec": 0, 00:16:40.607 "r_mbytes_per_sec": 0, 00:16:40.608 "w_mbytes_per_sec": 0 00:16:40.608 }, 00:16:40.608 "claimed": false, 00:16:40.608 "zoned": false, 00:16:40.608 "supported_io_types": { 00:16:40.608 "read": true, 00:16:40.608 "write": true, 00:16:40.608 "unmap": true, 00:16:40.608 "flush": true, 00:16:40.608 "reset": false, 00:16:40.608 "nvme_admin": false, 00:16:40.608 "nvme_io": false, 00:16:40.608 "nvme_io_md": false, 00:16:40.608 "write_zeroes": true, 00:16:40.608 "zcopy": false, 00:16:40.608 "get_zone_info": false, 00:16:40.608 "zone_management": false, 00:16:40.608 "zone_append": false, 00:16:40.608 "compare": false, 00:16:40.608 "compare_and_write": false, 00:16:40.608 "abort": false, 00:16:40.608 "seek_hole": false, 00:16:40.608 "seek_data": false, 00:16:40.608 "copy": false, 00:16:40.608 "nvme_iov_md": false 00:16:40.608 }, 00:16:40.608 "driver_specific": { 00:16:40.608 "ftl": { 00:16:40.608 "base_bdev": "05204f42-8aaf-4916-b3f0-59f181bbf21d", 
00:16:40.608 "cache": "nvc0n1p0" 00:16:40.608 } 00:16:40.608 } 00:16:40.608 } 00:16:40.608 ]' 00:16:40.608 23:37:28 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:16:40.608 23:37:28 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:16:40.608 23:37:28 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:16:40.608 [2024-09-28 23:37:28.739993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.608 [2024-09-28 23:37:28.740160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:40.608 [2024-09-28 23:37:28.740176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:40.608 [2024-09-28 23:37:28.740184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.608 [2024-09-28 23:37:28.740212] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:40.608 [2024-09-28 23:37:28.742297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.608 [2024-09-28 23:37:28.742324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:40.608 [2024-09-28 23:37:28.742335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.072 ms 00:16:40.608 [2024-09-28 23:37:28.742342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.608 [2024-09-28 23:37:28.742724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.608 [2024-09-28 23:37:28.742747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:40.608 [2024-09-28 23:37:28.742756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.356 ms 00:16:40.608 [2024-09-28 23:37:28.742761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.608 [2024-09-28 23:37:28.745481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.608 [2024-09-28 23:37:28.745497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:40.608 [2024-09-28 23:37:28.745506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.700 ms 00:16:40.608 [2024-09-28 23:37:28.745522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.608 [2024-09-28 23:37:28.750815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.608 [2024-09-28 23:37:28.750907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:16:40.608 [2024-09-28 23:37:28.750923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.259 ms 00:16:40.608 [2024-09-28 23:37:28.750929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.608 [2024-09-28 23:37:28.769151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.608 [2024-09-28 23:37:28.769178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:40.608 [2024-09-28 23:37:28.769190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.149 ms 00:16:40.608 [2024-09-28 23:37:28.769195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.867 [2024-09-28 23:37:28.781273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.867 [2024-09-28 23:37:28.781376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:40.867 [2024-09-28 23:37:28.781393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 12.030 ms 00:16:40.867 [2024-09-28 23:37:28.781399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.867 [2024-09-28 23:37:28.781555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.867 [2024-09-28 23:37:28.781564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:40.867 [2024-09-28 23:37:28.781572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.111 ms 00:16:40.867 [2024-09-28 23:37:28.781578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.867 [2024-09-28 23:37:28.799255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.867 [2024-09-28 23:37:28.799290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:16:40.867 [2024-09-28 23:37:28.799300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.650 ms 00:16:40.867 [2024-09-28 23:37:28.799305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.867 [2024-09-28 23:37:28.818672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.867 [2024-09-28 23:37:28.818758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:16:40.867 [2024-09-28 23:37:28.818801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.807 ms 00:16:40.867 [2024-09-28 23:37:28.818819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.867 [2024-09-28 23:37:28.835455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.867 [2024-09-28 23:37:28.835554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:40.867 [2024-09-28 23:37:28.835595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.583 ms 00:16:40.867 [2024-09-28 23:37:28.835612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.867 [2024-09-28 23:37:28.852840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.867 [2024-09-28 23:37:28.852924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:40.867 [2024-09-28 23:37:28.852963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.135 ms 00:16:40.867 [2024-09-28 23:37:28.852979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.867 [2024-09-28 23:37:28.853028] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:40.867 [2024-09-28 23:37:28.853051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:40.867 [2024-09-28 23:37:28.853106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:40.867 [2024-09-28 23:37:28.853131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:40.867 [2024-09-28 23:37:28.853154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:40.867 [2024-09-28 23:37:28.853196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:40.867 [2024-09-28 23:37:28.853238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:40.867 [2024-09-28 23:37:28.853315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:40.867 [2024-09-28 23:37:28.853368] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:40.867 [2024-09-28 23:37:28.853419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:40.867 [2024-09-28 23:37:28.853442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:40.867 [2024-09-28 23:37:28.853463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:40.867 [2024-09-28 23:37:28.853487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:40.867 [2024-09-28 23:37:28.853519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:40.867 [2024-09-28 23:37:28.853573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:40.867 [2024-09-28 23:37:28.853596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:40.867 [2024-09-28 23:37:28.853644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:40.867 [2024-09-28 23:37:28.853667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:40.867 [2024-09-28 23:37:28.853707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:40.867 [2024-09-28 23:37:28.853731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:40.867 [2024-09-28 23:37:28.853756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:40.867 [2024-09-28 23:37:28.853796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:40.867 [2024-09-28 23:37:28.853821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:40.867 [2024-09-28 23:37:28.853892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:40.867 [2024-09-28 23:37:28.853918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:40.867 [2024-09-28 23:37:28.853941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:40.868 [2024-09-28 23:37:28.853998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:40.868 [2024-09-28 23:37:28.854030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:40.868 [2024-09-28 23:37:28.854053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:40.868 [2024-09-28 23:37:28.854075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:40.868 [2024-09-28 23:37:28.854098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:40.868 [2024-09-28 23:37:28.854148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:40.868 [2024-09-28 23:37:28.854172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:40.868 
[2024-09-28 23:37:28.854194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:40.868 [2024-09-28 23:37:28.854217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:40.868 [2024-09-28 23:37:28.854246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:40.868 [2024-09-28 23:37:28.854269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:40.868 [2024-09-28 23:37:28.854328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:40.868 [2024-09-28 23:37:28.854356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:40.868 [2024-09-28 23:37:28.854378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:40.868 [2024-09-28 23:37:28.854402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:40.868 [2024-09-28 23:37:28.854456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:40.868 [2024-09-28 23:37:28.854481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:40.868 [2024-09-28 23:37:28.854504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:40.868 [2024-09-28 23:37:28.854543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:40.868 [2024-09-28 23:37:28.854594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:40.868 [2024-09-28 23:37:28.854618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:40.868 [2024-09-28 23:37:28.854641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:40.868 [2024-09-28 23:37:28.854665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:40.868 [2024-09-28 23:37:28.854714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:40.868 [2024-09-28 23:37:28.854740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:40.868 [2024-09-28 23:37:28.854746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:40.868 [2024-09-28 23:37:28.854753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:40.868 [2024-09-28 23:37:28.854760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:40.868 [2024-09-28 23:37:28.854768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:40.868 [2024-09-28 23:37:28.854778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:40.868 [2024-09-28 23:37:28.854786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:40.868 [2024-09-28 23:37:28.854792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 
state: free 00:16:40.868 [2024-09-28 23:37:28.854799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:40.868 [2024-09-28 23:37:28.854804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:40.868 [2024-09-28 23:37:28.854811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:40.868 [2024-09-28 23:37:28.854817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:40.868 [2024-09-28 23:37:28.854824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:40.868 [2024-09-28 23:37:28.854829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:40.868 [2024-09-28 23:37:28.854836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:40.868 [2024-09-28 23:37:28.854841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:40.868 [2024-09-28 23:37:28.854849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:40.868 [2024-09-28 23:37:28.854854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:40.868 [2024-09-28 23:37:28.854861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:40.868 [2024-09-28 23:37:28.854867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:40.868 [2024-09-28 23:37:28.854875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:40.868 [2024-09-28 23:37:28.854881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:40.868 [2024-09-28 23:37:28.854888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:40.868 [2024-09-28 23:37:28.854893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:40.868 [2024-09-28 23:37:28.854900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:40.868 [2024-09-28 23:37:28.854906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:40.868 [2024-09-28 23:37:28.854913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:40.868 [2024-09-28 23:37:28.854918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:40.868 [2024-09-28 23:37:28.854925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:40.868 [2024-09-28 23:37:28.854930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:40.868 [2024-09-28 23:37:28.854938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:40.868 [2024-09-28 23:37:28.854943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:40.868 [2024-09-28 23:37:28.854950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 
0 / 261120 wr_cnt: 0 state: free 00:16:40.868 [2024-09-28 23:37:28.854955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:40.868 [2024-09-28 23:37:28.854963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:40.868 [2024-09-28 23:37:28.854968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:40.868 [2024-09-28 23:37:28.854977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:40.868 [2024-09-28 23:37:28.854983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:40.868 [2024-09-28 23:37:28.854990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:40.868 [2024-09-28 23:37:28.854996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:40.868 [2024-09-28 23:37:28.855003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:40.868 [2024-09-28 23:37:28.855008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:40.868 [2024-09-28 23:37:28.855016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:40.868 [2024-09-28 23:37:28.855021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:40.868 [2024-09-28 23:37:28.855028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:40.868 [2024-09-28 23:37:28.855033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:40.868 [2024-09-28 23:37:28.855040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:40.868 [2024-09-28 23:37:28.855045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:40.868 [2024-09-28 23:37:28.855053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:40.868 [2024-09-28 23:37:28.855058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:40.868 [2024-09-28 23:37:28.855065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:40.868 [2024-09-28 23:37:28.855076] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:40.868 [2024-09-28 23:37:28.855085] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 654be1da-fb74-4145-8224-07c374972a2a 00:16:40.868 [2024-09-28 23:37:28.855091] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:40.868 [2024-09-28 23:37:28.855098] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:40.869 [2024-09-28 23:37:28.855103] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:40.869 [2024-09-28 23:37:28.855111] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:40.869 [2024-09-28 23:37:28.855116] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:40.869 [2024-09-28 23:37:28.855122] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 
00:16:40.869 [2024-09-28 23:37:28.855128] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:40.869 [2024-09-28 23:37:28.855134] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:40.869 [2024-09-28 23:37:28.855139] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:40.869 [2024-09-28 23:37:28.855146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.869 [2024-09-28 23:37:28.855152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:40.869 [2024-09-28 23:37:28.855160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.120 ms 00:16:40.869 [2024-09-28 23:37:28.855167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.869 [2024-09-28 23:37:28.864494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.869 [2024-09-28 23:37:28.864529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:40.869 [2024-09-28 23:37:28.864540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.300 ms 00:16:40.869 [2024-09-28 23:37:28.864546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.869 [2024-09-28 23:37:28.864846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.869 [2024-09-28 23:37:28.864857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:40.869 [2024-09-28 23:37:28.864866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.243 ms 00:16:40.869 [2024-09-28 23:37:28.864872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.869 [2024-09-28 23:37:28.898974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:40.869 [2024-09-28 23:37:28.899000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:40.869 [2024-09-28 23:37:28.899010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:40.869 [2024-09-28 23:37:28.899016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.869 [2024-09-28 23:37:28.899091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:40.869 [2024-09-28 23:37:28.899098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:40.869 [2024-09-28 23:37:28.899107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:40.869 [2024-09-28 23:37:28.899113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.869 [2024-09-28 23:37:28.899162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:40.869 [2024-09-28 23:37:28.899168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:40.869 [2024-09-28 23:37:28.899177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:40.869 [2024-09-28 23:37:28.899183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.869 [2024-09-28 23:37:28.899205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:40.869 [2024-09-28 23:37:28.899211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:40.869 [2024-09-28 23:37:28.899218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:40.869 [2024-09-28 23:37:28.899225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.869 [2024-09-28 23:37:28.961420] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:40.869 [2024-09-28 23:37:28.961455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:40.869 [2024-09-28 23:37:28.961464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:40.869 [2024-09-28 23:37:28.961471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.869 [2024-09-28 23:37:29.009618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:40.869 [2024-09-28 23:37:29.009653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:40.869 [2024-09-28 23:37:29.009663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:40.869 [2024-09-28 23:37:29.009671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.869 [2024-09-28 23:37:29.009728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:40.869 [2024-09-28 23:37:29.009736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:40.869 [2024-09-28 23:37:29.009745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:40.869 [2024-09-28 23:37:29.009751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.869 [2024-09-28 23:37:29.009789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:40.869 [2024-09-28 23:37:29.009795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:40.869 [2024-09-28 23:37:29.009813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:40.869 [2024-09-28 23:37:29.009819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.869 [2024-09-28 23:37:29.009899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:40.869 [2024-09-28 23:37:29.009907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:40.869 [2024-09-28 23:37:29.009915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:40.869 [2024-09-28 23:37:29.009920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.869 [2024-09-28 23:37:29.009962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:40.869 [2024-09-28 23:37:29.009969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:40.869 [2024-09-28 23:37:29.009977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:40.869 [2024-09-28 23:37:29.009982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.869 [2024-09-28 23:37:29.010023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:40.869 [2024-09-28 23:37:29.010030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:40.869 [2024-09-28 23:37:29.010038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:40.869 [2024-09-28 23:37:29.010044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.869 [2024-09-28 23:37:29.010086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:40.869 [2024-09-28 23:37:29.010095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:40.869 [2024-09-28 23:37:29.010103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:40.869 [2024-09-28 23:37:29.010108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:16:40.869 [2024-09-28 23:37:29.010254] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 270.237 ms, result 0 00:16:40.869 true 00:16:41.126 23:37:29 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 74010 00:16:41.126 23:37:29 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 74010 ']' 00:16:41.126 23:37:29 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 74010 00:16:41.126 23:37:29 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname 00:16:41.126 23:37:29 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:41.126 23:37:29 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74010 00:16:41.126 killing process with pid 74010 00:16:41.126 23:37:29 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:41.126 23:37:29 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:41.126 23:37:29 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74010' 00:16:41.126 23:37:29 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 74010 00:16:41.126 23:37:29 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 74010 00:16:47.679 23:37:35 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:16:47.939 65536+0 records in 00:16:47.939 65536+0 records out 00:16:47.939 268435456 bytes (268 MB, 256 MiB) copied, 0.80291 s, 334 MB/s 00:16:47.939 23:37:35 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:47.939 [2024-09-28 23:37:35.979565] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:16:47.939 [2024-09-28 23:37:35.979686] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74192 ] 00:16:48.198 [2024-09-28 23:37:36.127003] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:48.198 [2024-09-28 23:37:36.267459] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:48.456 [2024-09-28 23:37:36.471957] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:48.456 [2024-09-28 23:37:36.472001] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:48.456 [2024-09-28 23:37:36.619591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.456 [2024-09-28 23:37:36.619630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:48.456 [2024-09-28 23:37:36.619643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:48.456 [2024-09-28 23:37:36.619649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.456 [2024-09-28 23:37:36.621690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.456 [2024-09-28 23:37:36.621717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:48.456 [2024-09-28 23:37:36.621725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.029 ms 00:16:48.456 [2024-09-28 23:37:36.621733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.456 [2024-09-28 23:37:36.621787] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:48.456 [2024-09-28 23:37:36.622328] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:48.456 [2024-09-28 23:37:36.622349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.456 [2024-09-28 23:37:36.622357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:48.456 [2024-09-28 23:37:36.622364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.567 ms 00:16:48.456 [2024-09-28 23:37:36.622370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.716 [2024-09-28 23:37:36.623375] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:16:48.716 [2024-09-28 23:37:36.632937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.716 [2024-09-28 23:37:36.632963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:16:48.716 [2024-09-28 23:37:36.632972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.563 ms 00:16:48.716 [2024-09-28 23:37:36.632978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.716 [2024-09-28 23:37:36.633042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.716 [2024-09-28 23:37:36.633050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:16:48.716 [2024-09-28 23:37:36.633058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:16:48.716 [2024-09-28 23:37:36.633064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.716 [2024-09-28 23:37:36.637381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:16:48.716 [2024-09-28 23:37:36.637407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:48.716 [2024-09-28 23:37:36.637414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.288 ms 00:16:48.716 [2024-09-28 23:37:36.637421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.716 [2024-09-28 23:37:36.637495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.716 [2024-09-28 23:37:36.637519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:48.716 [2024-09-28 23:37:36.637527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:16:48.716 [2024-09-28 23:37:36.637532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.716 [2024-09-28 23:37:36.637555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.716 [2024-09-28 23:37:36.637563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:48.716 [2024-09-28 23:37:36.637569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:48.716 [2024-09-28 23:37:36.637574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.716 [2024-09-28 23:37:36.637591] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:48.716 [2024-09-28 23:37:36.640316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.716 [2024-09-28 23:37:36.640339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:48.716 [2024-09-28 23:37:36.640346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.729 ms 00:16:48.716 [2024-09-28 23:37:36.640353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.716 [2024-09-28 23:37:36.640382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.716 [2024-09-28 23:37:36.640391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:48.716 [2024-09-28 23:37:36.640397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:16:48.716 [2024-09-28 23:37:36.640403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.716 [2024-09-28 23:37:36.640416] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:16:48.716 [2024-09-28 23:37:36.640430] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:16:48.716 [2024-09-28 23:37:36.640463] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:16:48.716 [2024-09-28 23:37:36.640479] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:16:48.716 [2024-09-28 23:37:36.640567] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:48.716 [2024-09-28 23:37:36.640580] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:48.716 [2024-09-28 23:37:36.640588] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:16:48.716 [2024-09-28 23:37:36.640596] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:48.716 [2024-09-28 23:37:36.640602] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:48.716 [2024-09-28 23:37:36.640609] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:48.716 [2024-09-28 23:37:36.640614] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:48.716 [2024-09-28 23:37:36.640620] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:48.716 [2024-09-28 23:37:36.640626] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:48.716 [2024-09-28 23:37:36.640632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.716 [2024-09-28 23:37:36.640639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:48.716 [2024-09-28 23:37:36.640646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.217 ms 00:16:48.716 [2024-09-28 23:37:36.640651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.716 [2024-09-28 23:37:36.640718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.716 [2024-09-28 23:37:36.640725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:48.716 [2024-09-28 23:37:36.640730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:16:48.716 [2024-09-28 23:37:36.640736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.716 [2024-09-28 23:37:36.640807] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:48.716 [2024-09-28 23:37:36.640822] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:48.716 [2024-09-28 23:37:36.640831] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:48.716 [2024-09-28 23:37:36.640837] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:48.716 [2024-09-28 23:37:36.640843] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:48.716 [2024-09-28 23:37:36.640848] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:48.716 [2024-09-28 23:37:36.640854] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:48.716 [2024-09-28 23:37:36.640859] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:48.716 [2024-09-28 23:37:36.640864] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:48.716 [2024-09-28 23:37:36.640869] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:48.716 [2024-09-28 23:37:36.640875] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:48.716 [2024-09-28 23:37:36.640884] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:48.716 [2024-09-28 23:37:36.640889] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:48.716 [2024-09-28 23:37:36.640894] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:48.716 [2024-09-28 23:37:36.640899] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:16:48.716 [2024-09-28 23:37:36.640907] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:48.716 [2024-09-28 23:37:36.640912] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:48.716 [2024-09-28 23:37:36.640917] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:16:48.716 [2024-09-28 23:37:36.640922] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:48.716 [2024-09-28 23:37:36.640927] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:48.716 [2024-09-28 23:37:36.640932] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:48.716 [2024-09-28 23:37:36.640937] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:48.716 [2024-09-28 23:37:36.640942] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:48.716 [2024-09-28 23:37:36.640947] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:48.716 [2024-09-28 23:37:36.640952] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:48.716 [2024-09-28 23:37:36.640957] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:48.716 [2024-09-28 23:37:36.640962] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:48.716 [2024-09-28 23:37:36.640967] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:48.716 [2024-09-28 23:37:36.640972] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:48.716 [2024-09-28 23:37:36.640978] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:16:48.717 [2024-09-28 23:37:36.640982] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:48.717 [2024-09-28 23:37:36.640988] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:48.717 [2024-09-28 23:37:36.640993] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:16:48.717 [2024-09-28 23:37:36.640997] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:48.717 [2024-09-28 23:37:36.641002] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:48.717 [2024-09-28 23:37:36.641008] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:16:48.717 [2024-09-28 23:37:36.641012] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:48.717 [2024-09-28 23:37:36.641018] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:48.717 [2024-09-28 23:37:36.641023] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:16:48.717 [2024-09-28 23:37:36.641028] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:48.717 [2024-09-28 23:37:36.641033] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:48.717 [2024-09-28 23:37:36.641038] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:16:48.717 [2024-09-28 23:37:36.641043] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:48.717 [2024-09-28 23:37:36.641047] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:48.717 [2024-09-28 23:37:36.641053] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:48.717 [2024-09-28 23:37:36.641059] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:48.717 [2024-09-28 23:37:36.641064] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:48.717 [2024-09-28 23:37:36.641071] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:48.717 [2024-09-28 23:37:36.641077] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:48.717 [2024-09-28 23:37:36.641082] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:48.717 
[2024-09-28 23:37:36.641087] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:48.717 [2024-09-28 23:37:36.641091] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:48.717 [2024-09-28 23:37:36.641096] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:48.717 [2024-09-28 23:37:36.641103] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:48.717 [2024-09-28 23:37:36.641113] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:48.717 [2024-09-28 23:37:36.641119] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:48.717 [2024-09-28 23:37:36.641124] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:16:48.717 [2024-09-28 23:37:36.641130] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:16:48.717 [2024-09-28 23:37:36.641135] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:16:48.717 [2024-09-28 23:37:36.641140] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:16:48.717 [2024-09-28 23:37:36.641146] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:16:48.717 [2024-09-28 23:37:36.641151] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:16:48.717 [2024-09-28 23:37:36.641156] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:16:48.717 [2024-09-28 23:37:36.641162] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:16:48.717 [2024-09-28 23:37:36.641167] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:16:48.717 [2024-09-28 23:37:36.641172] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:16:48.717 [2024-09-28 23:37:36.641177] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:16:48.717 [2024-09-28 23:37:36.641183] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:16:48.717 [2024-09-28 23:37:36.641189] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:16:48.717 [2024-09-28 23:37:36.641194] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:48.717 [2024-09-28 23:37:36.641201] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:48.717 [2024-09-28 23:37:36.641207] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:16:48.717 [2024-09-28 23:37:36.641212] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:48.717 [2024-09-28 23:37:36.641218] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:48.717 [2024-09-28 23:37:36.641223] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:48.717 [2024-09-28 23:37:36.641228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.717 [2024-09-28 23:37:36.641235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:48.717 [2024-09-28 23:37:36.641241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.473 ms 00:16:48.717 [2024-09-28 23:37:36.641246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.717 [2024-09-28 23:37:36.673560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.717 [2024-09-28 23:37:36.673612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:48.717 [2024-09-28 23:37:36.673629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.273 ms 00:16:48.717 [2024-09-28 23:37:36.673640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.717 [2024-09-28 23:37:36.673817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.717 [2024-09-28 23:37:36.673842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:48.717 [2024-09-28 23:37:36.673856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:16:48.717 [2024-09-28 23:37:36.673866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.717 [2024-09-28 23:37:36.697664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.717 [2024-09-28 23:37:36.697693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:48.717 [2024-09-28 23:37:36.697700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.770 ms 00:16:48.717 [2024-09-28 23:37:36.697706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.717 [2024-09-28 23:37:36.697751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.717 [2024-09-28 23:37:36.697759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:48.717 [2024-09-28 23:37:36.697765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:16:48.717 [2024-09-28 23:37:36.697771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.717 [2024-09-28 23:37:36.698057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.717 [2024-09-28 23:37:36.698077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:48.717 [2024-09-28 23:37:36.698084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.270 ms 00:16:48.717 [2024-09-28 23:37:36.698090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.717 [2024-09-28 23:37:36.698189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.717 [2024-09-28 23:37:36.698197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:48.717 [2024-09-28 23:37:36.698203] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:16:48.717 [2024-09-28 23:37:36.698208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.717 [2024-09-28 23:37:36.708444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.717 [2024-09-28 23:37:36.708471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:48.717 [2024-09-28 23:37:36.708479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.220 ms 00:16:48.717 [2024-09-28 23:37:36.708485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.717 [2024-09-28 23:37:36.718008] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:16:48.717 [2024-09-28 23:37:36.718037] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:16:48.717 [2024-09-28 23:37:36.718048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.717 [2024-09-28 23:37:36.718054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:16:48.717 [2024-09-28 23:37:36.718061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.471 ms 00:16:48.717 [2024-09-28 23:37:36.718067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.717 [2024-09-28 23:37:36.736740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.717 [2024-09-28 23:37:36.736768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:16:48.717 [2024-09-28 23:37:36.736777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.629 ms 00:16:48.717 [2024-09-28 23:37:36.736787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.717 [2024-09-28 23:37:36.745615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.717 [2024-09-28 23:37:36.745641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:16:48.717 [2024-09-28 23:37:36.745649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.777 ms 00:16:48.717 [2024-09-28 23:37:36.745655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.717 [2024-09-28 23:37:36.754248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.717 [2024-09-28 23:37:36.754278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:16:48.717 [2024-09-28 23:37:36.754286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.553 ms 00:16:48.717 [2024-09-28 23:37:36.754292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.717 [2024-09-28 23:37:36.754767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.717 [2024-09-28 23:37:36.754786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:48.717 [2024-09-28 23:37:36.754794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.415 ms 00:16:48.717 [2024-09-28 23:37:36.754800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.717 [2024-09-28 23:37:36.798003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.717 [2024-09-28 23:37:36.798048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:16:48.717 [2024-09-28 23:37:36.798059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
43.185 ms 00:16:48.717 [2024-09-28 23:37:36.798065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.718 [2024-09-28 23:37:36.805967] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:48.718 [2024-09-28 23:37:36.817394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.718 [2024-09-28 23:37:36.817429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:48.718 [2024-09-28 23:37:36.817439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.255 ms 00:16:48.718 [2024-09-28 23:37:36.817445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.718 [2024-09-28 23:37:36.817531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.718 [2024-09-28 23:37:36.817540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:16:48.718 [2024-09-28 23:37:36.817547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:16:48.718 [2024-09-28 23:37:36.817553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.718 [2024-09-28 23:37:36.817595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.718 [2024-09-28 23:37:36.817603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:48.718 [2024-09-28 23:37:36.817611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:16:48.718 [2024-09-28 23:37:36.817617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.718 [2024-09-28 23:37:36.817633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.718 [2024-09-28 23:37:36.817639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:48.718 [2024-09-28 23:37:36.817645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:48.718 [2024-09-28 23:37:36.817651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.718 [2024-09-28 23:37:36.817676] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:16:48.718 [2024-09-28 23:37:36.817684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.718 [2024-09-28 23:37:36.817690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:16:48.718 [2024-09-28 23:37:36.817695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:16:48.718 [2024-09-28 23:37:36.817703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.718 [2024-09-28 23:37:36.835560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.718 [2024-09-28 23:37:36.835587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:48.718 [2024-09-28 23:37:36.835595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.842 ms 00:16:48.718 [2024-09-28 23:37:36.835602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.718 [2024-09-28 23:37:36.835670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.718 [2024-09-28 23:37:36.835678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:48.718 [2024-09-28 23:37:36.835686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:16:48.718 [2024-09-28 23:37:36.835693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.718 
[2024-09-28 23:37:36.836723] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:48.718 [2024-09-28 23:37:36.838969] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 216.889 ms, result 0 00:16:48.718 [2024-09-28 23:37:36.839800] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:48.718 [2024-09-28 23:37:36.850393] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:54.827  Copying: 42/256 [MB] (42 MBps) Copying: 85/256 [MB] (43 MBps) Copying: 129/256 [MB] (43 MBps) Copying: 175/256 [MB] (46 MBps) Copying: 219/256 [MB] (43 MBps) Copying: 256/256 [MB] (average 43 MBps)[2024-09-28 23:37:42.701313] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:54.827 [2024-09-28 23:37:42.710372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.827 [2024-09-28 23:37:42.710410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:54.827 [2024-09-28 23:37:42.710423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:54.827 [2024-09-28 23:37:42.710431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.827 [2024-09-28 23:37:42.710451] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:54.827 [2024-09-28 23:37:42.713006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.827 [2024-09-28 23:37:42.713034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:54.827 [2024-09-28 23:37:42.713044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.542 ms 00:16:54.827 [2024-09-28 23:37:42.713052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.827 [2024-09-28 23:37:42.714577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.827 [2024-09-28 23:37:42.714621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:54.827 [2024-09-28 23:37:42.714638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.502 ms 00:16:54.828 [2024-09-28 23:37:42.714647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.828 [2024-09-28 23:37:42.721125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.828 [2024-09-28 23:37:42.721158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:54.828 [2024-09-28 23:37:42.721169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.461 ms 00:16:54.828 [2024-09-28 23:37:42.721176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.828 [2024-09-28 23:37:42.728299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.828 [2024-09-28 23:37:42.728332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:16:54.828 [2024-09-28 23:37:42.728342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.061 ms 00:16:54.828 [2024-09-28 23:37:42.728355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.828 [2024-09-28 23:37:42.750674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.828 [2024-09-28 23:37:42.750707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 
name: Persist NV cache metadata 00:16:54.828 [2024-09-28 23:37:42.750718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.275 ms 00:16:54.828 [2024-09-28 23:37:42.750724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.828 [2024-09-28 23:37:42.765065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.828 [2024-09-28 23:37:42.765097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:54.828 [2024-09-28 23:37:42.765108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.309 ms 00:16:54.828 [2024-09-28 23:37:42.765117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.828 [2024-09-28 23:37:42.765246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.828 [2024-09-28 23:37:42.765256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:54.828 [2024-09-28 23:37:42.765264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:16:54.828 [2024-09-28 23:37:42.765271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.828 [2024-09-28 23:37:42.788239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.828 [2024-09-28 23:37:42.788278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:16:54.828 [2024-09-28 23:37:42.788287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.953 ms 00:16:54.828 [2024-09-28 23:37:42.788294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.828 [2024-09-28 23:37:42.811255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.828 [2024-09-28 23:37:42.811287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:16:54.828 [2024-09-28 23:37:42.811298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.930 ms 00:16:54.828 [2024-09-28 23:37:42.811305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.828 [2024-09-28 23:37:42.832917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.828 [2024-09-28 23:37:42.832947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:54.828 [2024-09-28 23:37:42.832957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.581 ms 00:16:54.828 [2024-09-28 23:37:42.832964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.828 [2024-09-28 23:37:42.854700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.828 [2024-09-28 23:37:42.854730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:54.828 [2024-09-28 23:37:42.854740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.680 ms 00:16:54.828 [2024-09-28 23:37:42.854747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.828 [2024-09-28 23:37:42.854779] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:54.828 [2024-09-28 23:37:42.854793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:54.828 [2024-09-28 23:37:42.854802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:54.828 [2024-09-28 23:37:42.854810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:54.828 [2024-09-28 
23:37:42.854818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:54.828 [2024-09-28 23:37:42.854825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:54.828 [2024-09-28 23:37:42.854833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:54.828 [2024-09-28 23:37:42.854840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:54.828 [2024-09-28 23:37:42.854848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:54.828 [2024-09-28 23:37:42.854855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:54.828 [2024-09-28 23:37:42.854863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:54.828 [2024-09-28 23:37:42.854870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:54.828 [2024-09-28 23:37:42.854878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:54.828 [2024-09-28 23:37:42.854885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:54.828 [2024-09-28 23:37:42.854892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:54.828 [2024-09-28 23:37:42.854899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:54.828 [2024-09-28 23:37:42.854907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:54.828 [2024-09-28 23:37:42.854914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:54.828 [2024-09-28 23:37:42.854921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:54.828 [2024-09-28 23:37:42.854929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:54.828 [2024-09-28 23:37:42.854936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:54.828 [2024-09-28 23:37:42.854943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:54.828 [2024-09-28 23:37:42.854950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:54.828 [2024-09-28 23:37:42.854957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:54.828 [2024-09-28 23:37:42.854964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:54.828 [2024-09-28 23:37:42.854971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:54.828 [2024-09-28 23:37:42.854978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:54.828 [2024-09-28 23:37:42.854985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:54.828 [2024-09-28 23:37:42.854992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 
00:16:54.828 [2024-09-28 23:37:42.854999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:54.828 [2024-09-28 23:37:42.855006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:54.828 [2024-09-28 23:37:42.855015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:54.828 [2024-09-28 23:37:42.855022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:54.828 [2024-09-28 23:37:42.855029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:54.828 [2024-09-28 23:37:42.855037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:54.828 [2024-09-28 23:37:42.855044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:54.828 [2024-09-28 23:37:42.855051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:54.828 [2024-09-28 23:37:42.855058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:54.828 [2024-09-28 23:37:42.855065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:54.828 [2024-09-28 23:37:42.855072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:54.828 [2024-09-28 23:37:42.855079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:54.828 [2024-09-28 23:37:42.855086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:54.828 [2024-09-28 23:37:42.855094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:54.828 [2024-09-28 23:37:42.855101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:54.828 [2024-09-28 23:37:42.855108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:54.828 [2024-09-28 23:37:42.855115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:54.828 [2024-09-28 23:37:42.855122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:54.828 [2024-09-28 23:37:42.855129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:54.828 [2024-09-28 23:37:42.855136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:54.828 [2024-09-28 23:37:42.855143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:54.828 [2024-09-28 23:37:42.855151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:54.828 [2024-09-28 23:37:42.855158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:54.828 [2024-09-28 23:37:42.855165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:54.828 [2024-09-28 23:37:42.855172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 
wr_cnt: 0 state: free 00:16:54.828 [2024-09-28 23:37:42.855180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:54.828 [2024-09-28 23:37:42.855188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:54.828 [2024-09-28 23:37:42.855202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:54.828 [2024-09-28 23:37:42.855209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:54.828 [2024-09-28 23:37:42.855216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:54.828 [2024-09-28 23:37:42.855224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:54.829 [2024-09-28 23:37:42.855231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:54.829 [2024-09-28 23:37:42.855238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:54.829 [2024-09-28 23:37:42.855246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:54.829 [2024-09-28 23:37:42.855253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:54.829 [2024-09-28 23:37:42.855261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:54.829 [2024-09-28 23:37:42.855268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:54.829 [2024-09-28 23:37:42.855276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:54.829 [2024-09-28 23:37:42.855283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:54.829 [2024-09-28 23:37:42.855291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:54.829 [2024-09-28 23:37:42.855298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:54.829 [2024-09-28 23:37:42.855306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:54.829 [2024-09-28 23:37:42.855313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:54.829 [2024-09-28 23:37:42.855321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:54.829 [2024-09-28 23:37:42.855328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:54.829 [2024-09-28 23:37:42.855335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:54.829 [2024-09-28 23:37:42.855342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:54.829 [2024-09-28 23:37:42.855351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:54.829 [2024-09-28 23:37:42.855358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:54.829 [2024-09-28 23:37:42.855365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:54.829 [2024-09-28 23:37:42.855373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:54.829 [2024-09-28 23:37:42.855380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:54.829 [2024-09-28 23:37:42.855387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:54.829 [2024-09-28 23:37:42.855395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:54.829 [2024-09-28 23:37:42.855402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:54.829 [2024-09-28 23:37:42.855409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:54.829 [2024-09-28 23:37:42.855416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:54.829 [2024-09-28 23:37:42.855424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:54.829 [2024-09-28 23:37:42.855431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:54.829 [2024-09-28 23:37:42.855438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:54.829 [2024-09-28 23:37:42.855445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:54.829 [2024-09-28 23:37:42.855453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:54.829 [2024-09-28 23:37:42.855460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:54.829 [2024-09-28 23:37:42.855468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:54.829 [2024-09-28 23:37:42.855475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:54.829 [2024-09-28 23:37:42.855482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:54.829 [2024-09-28 23:37:42.855494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:54.829 [2024-09-28 23:37:42.855503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:54.829 [2024-09-28 23:37:42.855520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:54.829 [2024-09-28 23:37:42.855528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:54.829 [2024-09-28 23:37:42.855535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:54.829 [2024-09-28 23:37:42.855543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:54.829 [2024-09-28 23:37:42.855564] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:54.829 [2024-09-28 23:37:42.855573] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 654be1da-fb74-4145-8224-07c374972a2a 00:16:54.829 [2024-09-28 23:37:42.855580] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid 
LBAs: 0 00:16:54.829 [2024-09-28 23:37:42.855588] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:54.829 [2024-09-28 23:37:42.855594] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:54.829 [2024-09-28 23:37:42.855602] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:54.829 [2024-09-28 23:37:42.855611] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:54.829 [2024-09-28 23:37:42.855619] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:54.829 [2024-09-28 23:37:42.855626] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:54.829 [2024-09-28 23:37:42.855632] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:54.829 [2024-09-28 23:37:42.855638] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:54.829 [2024-09-28 23:37:42.855644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.829 [2024-09-28 23:37:42.855651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:54.829 [2024-09-28 23:37:42.855659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.866 ms 00:16:54.829 [2024-09-28 23:37:42.855666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.829 [2024-09-28 23:37:42.867746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.829 [2024-09-28 23:37:42.867776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:54.829 [2024-09-28 23:37:42.867790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.063 ms 00:16:54.829 [2024-09-28 23:37:42.867797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.829 [2024-09-28 23:37:42.868147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.829 [2024-09-28 23:37:42.868167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:54.829 [2024-09-28 23:37:42.868176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.318 ms 00:16:54.829 [2024-09-28 23:37:42.868183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.829 [2024-09-28 23:37:42.898169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:54.829 [2024-09-28 23:37:42.898210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:54.829 [2024-09-28 23:37:42.898220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:54.829 [2024-09-28 23:37:42.898227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.829 [2024-09-28 23:37:42.898302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:54.829 [2024-09-28 23:37:42.898311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:54.829 [2024-09-28 23:37:42.898319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:54.829 [2024-09-28 23:37:42.898326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.829 [2024-09-28 23:37:42.898363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:54.829 [2024-09-28 23:37:42.898372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:54.829 [2024-09-28 23:37:42.898383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:54.829 [2024-09-28 23:37:42.898390] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.829 [2024-09-28 23:37:42.898406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:54.829 [2024-09-28 23:37:42.898413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:54.829 [2024-09-28 23:37:42.898421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:54.829 [2024-09-28 23:37:42.898428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.829 [2024-09-28 23:37:42.975124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:54.829 [2024-09-28 23:37:42.975168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:54.829 [2024-09-28 23:37:42.975183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:54.829 [2024-09-28 23:37:42.975190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.088 [2024-09-28 23:37:43.037601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:55.088 [2024-09-28 23:37:43.037646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:55.088 [2024-09-28 23:37:43.037656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:55.088 [2024-09-28 23:37:43.037664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.088 [2024-09-28 23:37:43.037709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:55.088 [2024-09-28 23:37:43.037718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:55.088 [2024-09-28 23:37:43.037726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:55.089 [2024-09-28 23:37:43.037738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.089 [2024-09-28 23:37:43.037765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:55.089 [2024-09-28 23:37:43.037773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:55.089 [2024-09-28 23:37:43.037781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:55.089 [2024-09-28 23:37:43.037788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.089 [2024-09-28 23:37:43.037872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:55.089 [2024-09-28 23:37:43.037882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:55.089 [2024-09-28 23:37:43.037889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:55.089 [2024-09-28 23:37:43.037897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.089 [2024-09-28 23:37:43.037928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:55.089 [2024-09-28 23:37:43.037936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:55.089 [2024-09-28 23:37:43.037944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:55.089 [2024-09-28 23:37:43.037951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.089 [2024-09-28 23:37:43.037985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:55.089 [2024-09-28 23:37:43.037994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:55.089 [2024-09-28 23:37:43.038001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms
00:16:55.089 [2024-09-28 23:37:43.038008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:55.089 [2024-09-28 23:37:43.038051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:16:55.089 [2024-09-28 23:37:43.038060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:16:55.089 [2024-09-28 23:37:43.038068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:16:55.089 [2024-09-28 23:37:43.038075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:55.089 [2024-09-28 23:37:43.038203] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 327.825 ms, result 0
00:16:56.024
00:16:56.024
00:16:56.024 23:37:44 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init
00:16:56.024 23:37:44 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=74280
00:16:56.024 23:37:44 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 74280
00:16:56.024 23:37:44 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 74280 ']'
00:16:56.024 23:37:44 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:16:56.024 23:37:44 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100
00:16:56.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 23:37:44 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:16:56.024 23:37:44 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable
00:16:56.024 23:37:44 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x
00:16:56.024 [2024-09-28 23:37:44.130067] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization...
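A note on the waitforlisten step traced above: it blocks trim.sh until the freshly launched spdk_tgt (pid 74280) answers RPCs on /var/tmp/spdk.sock. A minimal sketch of such a polling loop, assuming the stock scripts/rpc.py client and its rpc_get_methods method (an illustration only, not the actual autotest_common.sh code; the 100-iteration cap mirrors the max_retries=100 seen in the trace):

    # Sketch only: poll the RPC socket until the target answers or retries run out.
    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i=0
        while (( i++ < 100 )); do                    # mirrors max_retries=100 above
            kill -0 "$pid" 2>/dev/null || return 1   # target died before listening
            if scripts/rpc.py -s "$rpc_addr" rpc_get_methods >/dev/null 2>&1; then
                return 0                             # socket is up and answering
            fi
            sleep 0.5
        done
        return 1
    }

Polling an RPC that is always registered (rpc_get_methods) rather than sleeping for a fixed interval keeps the wait exactly as long as the target's startup actually takes.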
00:16:56.024 [2024-09-28 23:37:44.130581] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74280 ]
00:16:56.282 [2024-09-28 23:37:44.282067] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:56.540 [2024-09-28 23:37:44.459833] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:16:57.107 23:37:45 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:16:57.107 23:37:45 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0
00:16:57.107 23:37:45 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config
00:16:57.107 [2024-09-28 23:37:45.182482] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:16:57.107 [2024-09-28 23:37:45.182552] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:16:57.367 [2024-09-28 23:37:45.348772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:57.367 [2024-09-28 23:37:45.348823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:16:57.367 [2024-09-28 23:37:45.348837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:16:57.367 [2024-09-28 23:37:45.348847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:57.367 [2024-09-28 23:37:45.351437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:57.367 [2024-09-28 23:37:45.351476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:16:57.367 [2024-09-28 23:37:45.351489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.570 ms
00:16:57.367 [2024-09-28 23:37:45.351496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:57.367 [2024-09-28 23:37:45.351584] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:16:57.367 [2024-09-28 23:37:45.352241] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:16:57.367 [2024-09-28 23:37:45.352269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:57.367 [2024-09-28 23:37:45.352277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:16:57.367 [2024-09-28 23:37:45.352287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.694 ms
00:16:57.367 [2024-09-28 23:37:45.352294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:57.367 [2024-09-28 23:37:45.353704] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:16:57.368 [2024-09-28 23:37:45.366053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:57.368 [2024-09-28 23:37:45.366094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block
00:16:57.368 [2024-09-28 23:37:45.366106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.353 ms
00:16:57.368 [2024-09-28 23:37:45.366116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:57.368 [2024-09-28 23:37:45.366190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:57.368 [2024-09-28 23:37:45.366204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block
00:16:57.368 [2024-09-28 23:37:45.366213]
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:16:57.368 [2024-09-28 23:37:45.366221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.368 [2024-09-28 23:37:45.370706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.368 [2024-09-28 23:37:45.370743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:57.368 [2024-09-28 23:37:45.370753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.438 ms 00:16:57.368 [2024-09-28 23:37:45.370762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.368 [2024-09-28 23:37:45.370862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.368 [2024-09-28 23:37:45.370874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:57.368 [2024-09-28 23:37:45.370882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:16:57.368 [2024-09-28 23:37:45.370891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.368 [2024-09-28 23:37:45.370915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.368 [2024-09-28 23:37:45.370927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:57.368 [2024-09-28 23:37:45.370934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:16:57.368 [2024-09-28 23:37:45.370943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.368 [2024-09-28 23:37:45.370966] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:57.368 [2024-09-28 23:37:45.374163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.368 [2024-09-28 23:37:45.374190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:57.368 [2024-09-28 23:37:45.374201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.202 ms 00:16:57.368 [2024-09-28 23:37:45.374211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.368 [2024-09-28 23:37:45.374246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.368 [2024-09-28 23:37:45.374254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:57.368 [2024-09-28 23:37:45.374264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:16:57.368 [2024-09-28 23:37:45.374271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.368 [2024-09-28 23:37:45.374299] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:16:57.368 [2024-09-28 23:37:45.374315] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:16:57.368 [2024-09-28 23:37:45.374353] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:16:57.368 [2024-09-28 23:37:45.374370] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:16:57.368 [2024-09-28 23:37:45.374475] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:57.368 [2024-09-28 23:37:45.374494] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:57.368 [2024-09-28 23:37:45.374517] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:16:57.368 [2024-09-28 23:37:45.374527] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:57.368 [2024-09-28 23:37:45.374538] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:57.368 [2024-09-28 23:37:45.374546] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:57.368 [2024-09-28 23:37:45.374555] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:57.368 [2024-09-28 23:37:45.374562] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:57.368 [2024-09-28 23:37:45.374572] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:57.368 [2024-09-28 23:37:45.374582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.368 [2024-09-28 23:37:45.374590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:57.368 [2024-09-28 23:37:45.374598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.286 ms 00:16:57.368 [2024-09-28 23:37:45.374606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.368 [2024-09-28 23:37:45.374692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.368 [2024-09-28 23:37:45.374708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:57.368 [2024-09-28 23:37:45.374716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:16:57.368 [2024-09-28 23:37:45.374724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.368 [2024-09-28 23:37:45.374824] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:57.368 [2024-09-28 23:37:45.374837] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:57.368 [2024-09-28 23:37:45.374845] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:57.368 [2024-09-28 23:37:45.374854] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:57.368 [2024-09-28 23:37:45.374861] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:57.368 [2024-09-28 23:37:45.374873] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:57.368 [2024-09-28 23:37:45.374879] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:57.368 [2024-09-28 23:37:45.374891] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:57.368 [2024-09-28 23:37:45.374897] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:57.368 [2024-09-28 23:37:45.374905] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:57.368 [2024-09-28 23:37:45.374912] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:57.368 [2024-09-28 23:37:45.374920] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:57.368 [2024-09-28 23:37:45.374926] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:57.368 [2024-09-28 23:37:45.374935] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:57.368 [2024-09-28 23:37:45.374941] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:16:57.368 [2024-09-28 23:37:45.374949] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:57.368 
[2024-09-28 23:37:45.374955] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:57.368 [2024-09-28 23:37:45.374963] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:16:57.368 [2024-09-28 23:37:45.374977] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:57.368 [2024-09-28 23:37:45.374985] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:57.368 [2024-09-28 23:37:45.374992] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:57.368 [2024-09-28 23:37:45.374999] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:57.368 [2024-09-28 23:37:45.375006] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:57.368 [2024-09-28 23:37:45.375015] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:57.368 [2024-09-28 23:37:45.375021] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:57.368 [2024-09-28 23:37:45.375029] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:57.368 [2024-09-28 23:37:45.375036] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:57.368 [2024-09-28 23:37:45.375044] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:57.368 [2024-09-28 23:37:45.375050] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:57.368 [2024-09-28 23:37:45.375058] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:16:57.368 [2024-09-28 23:37:45.375064] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:57.368 [2024-09-28 23:37:45.375073] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:57.368 [2024-09-28 23:37:45.375079] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:16:57.368 [2024-09-28 23:37:45.375086] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:57.368 [2024-09-28 23:37:45.375093] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:57.368 [2024-09-28 23:37:45.375101] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:16:57.368 [2024-09-28 23:37:45.375107] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:57.368 [2024-09-28 23:37:45.375115] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:57.368 [2024-09-28 23:37:45.375121] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:16:57.368 [2024-09-28 23:37:45.375131] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:57.368 [2024-09-28 23:37:45.375138] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:57.368 [2024-09-28 23:37:45.375146] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:16:57.368 [2024-09-28 23:37:45.375152] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:57.368 [2024-09-28 23:37:45.375159] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:57.368 [2024-09-28 23:37:45.375167] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:57.368 [2024-09-28 23:37:45.375175] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:57.368 [2024-09-28 23:37:45.375182] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:57.368 [2024-09-28 23:37:45.375190] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:16:57.368 [2024-09-28 23:37:45.375197] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:57.368 [2024-09-28 23:37:45.375204] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:57.368 [2024-09-28 23:37:45.375211] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:57.368 [2024-09-28 23:37:45.375220] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:57.368 [2024-09-28 23:37:45.375226] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:57.368 [2024-09-28 23:37:45.375236] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:57.368 [2024-09-28 23:37:45.375245] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:57.369 [2024-09-28 23:37:45.375256] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:57.369 [2024-09-28 23:37:45.375263] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:16:57.369 [2024-09-28 23:37:45.375273] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:16:57.369 [2024-09-28 23:37:45.375280] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:16:57.369 [2024-09-28 23:37:45.375289] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:16:57.369 [2024-09-28 23:37:45.375296] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:16:57.369 [2024-09-28 23:37:45.375304] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:16:57.369 [2024-09-28 23:37:45.375311] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:16:57.369 [2024-09-28 23:37:45.375319] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:16:57.369 [2024-09-28 23:37:45.375326] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:16:57.369 [2024-09-28 23:37:45.375335] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:16:57.369 [2024-09-28 23:37:45.375342] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:16:57.369 [2024-09-28 23:37:45.375350] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:16:57.369 [2024-09-28 23:37:45.375357] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:16:57.369 [2024-09-28 23:37:45.375365] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:57.369 [2024-09-28 
23:37:45.375373] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:57.369 [2024-09-28 23:37:45.375386] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:57.369 [2024-09-28 23:37:45.375393] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:57.369 [2024-09-28 23:37:45.375401] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:57.369 [2024-09-28 23:37:45.375408] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:57.369 [2024-09-28 23:37:45.375417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.369 [2024-09-28 23:37:45.375424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:57.369 [2024-09-28 23:37:45.375432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.659 ms 00:16:57.369 [2024-09-28 23:37:45.375439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.369 [2024-09-28 23:37:45.400672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.369 [2024-09-28 23:37:45.400706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:57.369 [2024-09-28 23:37:45.400718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.142 ms 00:16:57.369 [2024-09-28 23:37:45.400725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.369 [2024-09-28 23:37:45.400843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.369 [2024-09-28 23:37:45.400852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:57.369 [2024-09-28 23:37:45.400862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:16:57.369 [2024-09-28 23:37:45.400869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.369 [2024-09-28 23:37:45.438025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.369 [2024-09-28 23:37:45.438077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:57.369 [2024-09-28 23:37:45.438092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.132 ms 00:16:57.369 [2024-09-28 23:37:45.438101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.369 [2024-09-28 23:37:45.438179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.369 [2024-09-28 23:37:45.438189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:57.369 [2024-09-28 23:37:45.438200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:57.369 [2024-09-28 23:37:45.438210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.369 [2024-09-28 23:37:45.438568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.369 [2024-09-28 23:37:45.438590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:57.369 [2024-09-28 23:37:45.438601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.335 ms 00:16:57.369 [2024-09-28 23:37:45.438609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:16:57.369 [2024-09-28 23:37:45.438758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.369 [2024-09-28 23:37:45.438776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:57.369 [2024-09-28 23:37:45.438788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.122 ms 00:16:57.369 [2024-09-28 23:37:45.438798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.369 [2024-09-28 23:37:45.454255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.369 [2024-09-28 23:37:45.454294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:57.369 [2024-09-28 23:37:45.454305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.427 ms 00:16:57.369 [2024-09-28 23:37:45.454314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.369 [2024-09-28 23:37:45.466445] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:16:57.369 [2024-09-28 23:37:45.466480] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:16:57.369 [2024-09-28 23:37:45.466493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.369 [2024-09-28 23:37:45.466501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:16:57.369 [2024-09-28 23:37:45.466521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.064 ms 00:16:57.369 [2024-09-28 23:37:45.466529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.369 [2024-09-28 23:37:45.490689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.369 [2024-09-28 23:37:45.490725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:16:57.369 [2024-09-28 23:37:45.490737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.091 ms 00:16:57.369 [2024-09-28 23:37:45.490749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.369 [2024-09-28 23:37:45.502354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.369 [2024-09-28 23:37:45.502384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:16:57.369 [2024-09-28 23:37:45.502398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.536 ms 00:16:57.369 [2024-09-28 23:37:45.502405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.369 [2024-09-28 23:37:45.513559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.369 [2024-09-28 23:37:45.513589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:16:57.369 [2024-09-28 23:37:45.513601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.093 ms 00:16:57.369 [2024-09-28 23:37:45.513608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.369 [2024-09-28 23:37:45.514211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.369 [2024-09-28 23:37:45.514236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:57.369 [2024-09-28 23:37:45.514247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.515 ms 00:16:57.369 [2024-09-28 23:37:45.514256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.628 [2024-09-28 
23:37:45.567773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.628 [2024-09-28 23:37:45.567824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:16:57.628 [2024-09-28 23:37:45.567840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.485 ms 00:16:57.628 [2024-09-28 23:37:45.567850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.628 [2024-09-28 23:37:45.578209] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:57.628 [2024-09-28 23:37:45.591651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.628 [2024-09-28 23:37:45.591694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:57.628 [2024-09-28 23:37:45.591705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.695 ms 00:16:57.628 [2024-09-28 23:37:45.591715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.628 [2024-09-28 23:37:45.591795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.628 [2024-09-28 23:37:45.591807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:16:57.628 [2024-09-28 23:37:45.591815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:16:57.628 [2024-09-28 23:37:45.591825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.628 [2024-09-28 23:37:45.591872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.628 [2024-09-28 23:37:45.591882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:57.628 [2024-09-28 23:37:45.591889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:16:57.628 [2024-09-28 23:37:45.591898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.628 [2024-09-28 23:37:45.591922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.628 [2024-09-28 23:37:45.591931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:57.628 [2024-09-28 23:37:45.591941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:57.628 [2024-09-28 23:37:45.591952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.628 [2024-09-28 23:37:45.591982] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:16:57.628 [2024-09-28 23:37:45.591996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.628 [2024-09-28 23:37:45.592003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:16:57.628 [2024-09-28 23:37:45.592012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:16:57.628 [2024-09-28 23:37:45.592019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.628 [2024-09-28 23:37:45.614946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.628 [2024-09-28 23:37:45.614980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:57.628 [2024-09-28 23:37:45.614994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.902 ms 00:16:57.628 [2024-09-28 23:37:45.615004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.628 [2024-09-28 23:37:45.615091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.628 [2024-09-28 23:37:45.615102] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:57.628 [2024-09-28 23:37:45.615112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:16:57.628 [2024-09-28 23:37:45.615119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.628 [2024-09-28 23:37:45.615863] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:57.628 [2024-09-28 23:37:45.618737] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 266.816 ms, result 0 00:16:57.628 [2024-09-28 23:37:45.619843] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:57.628 Some configs were skipped because the RPC state that can call them passed over. 00:16:57.628 23:37:45 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:16:57.887 [2024-09-28 23:37:45.834388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.887 [2024-09-28 23:37:45.834445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:16:57.887 [2024-09-28 23:37:45.834457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.681 ms 00:16:57.887 [2024-09-28 23:37:45.834467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.887 [2024-09-28 23:37:45.834499] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.798 ms, result 0 00:16:57.887 true 00:16:57.887 23:37:45 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:16:57.887 [2024-09-28 23:37:45.994101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.887 [2024-09-28 23:37:45.994145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:16:57.887 [2024-09-28 23:37:45.994159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.239 ms 00:16:57.887 [2024-09-28 23:37:45.994166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.887 [2024-09-28 23:37:45.994202] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.345 ms, result 0 00:16:57.887 true 00:16:57.887 23:37:46 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 74280 00:16:57.887 23:37:46 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 74280 ']' 00:16:57.887 23:37:46 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 74280 00:16:57.887 23:37:46 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname 00:16:57.887 23:37:46 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:57.887 23:37:46 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74280 00:16:57.887 23:37:46 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:57.887 23:37:46 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:57.887 23:37:46 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74280' 00:16:57.887 killing process with pid 74280 00:16:57.887 23:37:46 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 74280 00:16:57.887 23:37:46 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 74280 00:16:58.824 [2024-09-28 23:37:46.693796] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.824 [2024-09-28 23:37:46.693839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:58.824 [2024-09-28 23:37:46.693849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:58.824 [2024-09-28 23:37:46.693857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.824 [2024-09-28 23:37:46.693876] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:58.824 [2024-09-28 23:37:46.696003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.824 [2024-09-28 23:37:46.696029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:58.824 [2024-09-28 23:37:46.696041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.113 ms 00:16:58.824 [2024-09-28 23:37:46.696048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.824 [2024-09-28 23:37:46.696289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.824 [2024-09-28 23:37:46.696302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:58.824 [2024-09-28 23:37:46.696312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.206 ms 00:16:58.824 [2024-09-28 23:37:46.696319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.824 [2024-09-28 23:37:46.699565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.824 [2024-09-28 23:37:46.699590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:58.824 [2024-09-28 23:37:46.699599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.230 ms 00:16:58.824 [2024-09-28 23:37:46.699605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.824 [2024-09-28 23:37:46.704978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.824 [2024-09-28 23:37:46.705003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:16:58.824 [2024-09-28 23:37:46.705013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.344 ms 00:16:58.824 [2024-09-28 23:37:46.705021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.824 [2024-09-28 23:37:46.712324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.824 [2024-09-28 23:37:46.712352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:58.824 [2024-09-28 23:37:46.712362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.246 ms 00:16:58.824 [2024-09-28 23:37:46.712368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.824 [2024-09-28 23:37:46.718941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.824 [2024-09-28 23:37:46.718969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:58.824 [2024-09-28 23:37:46.718979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.542 ms 00:16:58.824 [2024-09-28 23:37:46.718991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.824 [2024-09-28 23:37:46.719100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.824 [2024-09-28 23:37:46.719108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:58.824 [2024-09-28 23:37:46.719116] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:16:58.824 [2024-09-28 23:37:46.719124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.824 [2024-09-28 23:37:46.727341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.824 [2024-09-28 23:37:46.727369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:16:58.824 [2024-09-28 23:37:46.727378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.198 ms 00:16:58.824 [2024-09-28 23:37:46.727384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.824 [2024-09-28 23:37:46.734559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.824 [2024-09-28 23:37:46.734586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:16:58.824 [2024-09-28 23:37:46.734597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.145 ms 00:16:58.824 [2024-09-28 23:37:46.734603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.824 [2024-09-28 23:37:46.741926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.824 [2024-09-28 23:37:46.741954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:58.824 [2024-09-28 23:37:46.741962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.286 ms 00:16:58.824 [2024-09-28 23:37:46.741967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.824 [2024-09-28 23:37:46.748779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.824 [2024-09-28 23:37:46.748805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:58.824 [2024-09-28 23:37:46.748814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.760 ms 00:16:58.824 [2024-09-28 23:37:46.748819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.824 [2024-09-28 23:37:46.748846] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:58.824 [2024-09-28 23:37:46.748858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:58.824 [2024-09-28 23:37:46.748867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:58.824 [2024-09-28 23:37:46.748872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:58.824 [2024-09-28 23:37:46.748879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:58.824 [2024-09-28 23:37:46.748885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:58.824 [2024-09-28 23:37:46.748894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:58.824 [2024-09-28 23:37:46.748900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:58.824 [2024-09-28 23:37:46.748907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:58.824 [2024-09-28 23:37:46.748912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:58.824 [2024-09-28 23:37:46.748919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:58.824 [2024-09-28 23:37:46.748925] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:58.824 [2024-09-28 23:37:46.748932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:58.824 [2024-09-28 23:37:46.748938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:58.824 [2024-09-28 23:37:46.748946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:58.824 [2024-09-28 23:37:46.748952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:58.824 [2024-09-28 23:37:46.748959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:58.824 [2024-09-28 23:37:46.748964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:58.824 [2024-09-28 23:37:46.748971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:58.824 [2024-09-28 23:37:46.748977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:58.824 [2024-09-28 23:37:46.748983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:58.825 [2024-09-28 23:37:46.748989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:58.825 [2024-09-28 23:37:46.748998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:58.825 [2024-09-28 23:37:46.749004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:58.825 [2024-09-28 23:37:46.749011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:58.825 [2024-09-28 23:37:46.749016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:58.825 [2024-09-28 23:37:46.749024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:58.825 [2024-09-28 23:37:46.749029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:58.825 [2024-09-28 23:37:46.749036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:58.825 [2024-09-28 23:37:46.749042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:58.825 [2024-09-28 23:37:46.749048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:58.825 [2024-09-28 23:37:46.749054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:58.825 [2024-09-28 23:37:46.749062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:58.825 [2024-09-28 23:37:46.749068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:58.825 [2024-09-28 23:37:46.749075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:58.825 [2024-09-28 23:37:46.749081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:58.825 
[2024-09-28 23:37:46.749088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:58.825 [2024-09-28 23:37:46.749093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:58.825 [2024-09-28 23:37:46.749102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:58.825 [2024-09-28 23:37:46.749107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:58.825 [2024-09-28 23:37:46.749115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:58.825 [2024-09-28 23:37:46.749121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:58.825 [2024-09-28 23:37:46.749128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:58.825 [2024-09-28 23:37:46.749134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:58.825 [2024-09-28 23:37:46.749141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:58.825 [2024-09-28 23:37:46.749146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:58.825 [2024-09-28 23:37:46.749153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:58.825 [2024-09-28 23:37:46.749159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:58.825 [2024-09-28 23:37:46.749166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:58.825 [2024-09-28 23:37:46.749171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:58.825 [2024-09-28 23:37:46.749178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:58.825 [2024-09-28 23:37:46.749184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:58.825 [2024-09-28 23:37:46.749193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:58.825 [2024-09-28 23:37:46.749198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:58.825 [2024-09-28 23:37:46.749206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:58.825 [2024-09-28 23:37:46.749211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:58.825 [2024-09-28 23:37:46.749218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:58.825 [2024-09-28 23:37:46.749224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:58.825 [2024-09-28 23:37:46.749231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:58.825 [2024-09-28 23:37:46.749236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:58.825 [2024-09-28 23:37:46.749243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 
state: free 00:16:58.825 [2024-09-28 23:37:46.749248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:58.825 [2024-09-28 23:37:46.749255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:58.825 [2024-09-28 23:37:46.749262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:58.825 [2024-09-28 23:37:46.749269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:58.825 [2024-09-28 23:37:46.749275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:58.825 [2024-09-28 23:37:46.749282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:58.825 [2024-09-28 23:37:46.749288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:58.825 [2024-09-28 23:37:46.749296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:58.825 [2024-09-28 23:37:46.749302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:58.825 [2024-09-28 23:37:46.749310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:58.825 [2024-09-28 23:37:46.749316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:58.825 [2024-09-28 23:37:46.749323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:58.825 [2024-09-28 23:37:46.749328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:58.825 [2024-09-28 23:37:46.749336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:58.825 [2024-09-28 23:37:46.749341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:58.825 [2024-09-28 23:37:46.749348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:58.825 [2024-09-28 23:37:46.749354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:58.825 [2024-09-28 23:37:46.749361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:58.825 [2024-09-28 23:37:46.749366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:58.825 [2024-09-28 23:37:46.749373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:58.825 [2024-09-28 23:37:46.749379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:58.825 [2024-09-28 23:37:46.749386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:58.825 [2024-09-28 23:37:46.749391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:58.825 [2024-09-28 23:37:46.749398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:58.825 [2024-09-28 23:37:46.749404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 
0 / 261120 wr_cnt: 0 state: free 00:16:58.825 [2024-09-28 23:37:46.749412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:58.825 [2024-09-28 23:37:46.749417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:58.825 [2024-09-28 23:37:46.749424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:58.825 [2024-09-28 23:37:46.749430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:58.825 [2024-09-28 23:37:46.749437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:58.825 [2024-09-28 23:37:46.749443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:58.825 [2024-09-28 23:37:46.749450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:58.825 [2024-09-28 23:37:46.749455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:58.825 [2024-09-28 23:37:46.749463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:58.825 [2024-09-28 23:37:46.749469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:58.825 [2024-09-28 23:37:46.749480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:58.825 [2024-09-28 23:37:46.749485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:58.825 [2024-09-28 23:37:46.749492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:58.825 [2024-09-28 23:37:46.749498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:58.825 [2024-09-28 23:37:46.749505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:58.825 [2024-09-28 23:37:46.749526] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:58.825 [2024-09-28 23:37:46.749535] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 654be1da-fb74-4145-8224-07c374972a2a 00:16:58.825 [2024-09-28 23:37:46.749541] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:58.825 [2024-09-28 23:37:46.749548] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:58.825 [2024-09-28 23:37:46.749555] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:58.825 [2024-09-28 23:37:46.749562] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:58.825 [2024-09-28 23:37:46.749572] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:58.825 [2024-09-28 23:37:46.749579] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:58.825 [2024-09-28 23:37:46.749587] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:58.825 [2024-09-28 23:37:46.749593] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:58.826 [2024-09-28 23:37:46.749598] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:58.826 [2024-09-28 23:37:46.749604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
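For reading the statistics block just dumped: assuming ftl_debug.c reports WAF as the usual write-amplification ratio of total media writes to user writes, the counters above give

    WAF = total writes / user writes = 960 / 0  ->  no finite value, printed as "inf"

All 960 writes in this run were FTL-internal metadata traffic and no user data was written, so an infinite WAF here is expected rather than alarming.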
00:16:58.826 [2024-09-28 23:37:46.749610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:58.826 [2024-09-28 23:37:46.749618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.759 ms 00:16:58.826 [2024-09-28 23:37:46.749624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.826 [2024-09-28 23:37:46.759492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.826 [2024-09-28 23:37:46.759525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:58.826 [2024-09-28 23:37:46.759536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.851 ms 00:16:58.826 [2024-09-28 23:37:46.759543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.826 [2024-09-28 23:37:46.759835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.826 [2024-09-28 23:37:46.759852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:58.826 [2024-09-28 23:37:46.759860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.252 ms 00:16:58.826 [2024-09-28 23:37:46.759866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.826 [2024-09-28 23:37:46.790821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:58.826 [2024-09-28 23:37:46.790849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:58.826 [2024-09-28 23:37:46.790859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:58.826 [2024-09-28 23:37:46.790867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.826 [2024-09-28 23:37:46.790945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:58.826 [2024-09-28 23:37:46.790953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:58.826 [2024-09-28 23:37:46.790960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:58.826 [2024-09-28 23:37:46.790966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.826 [2024-09-28 23:37:46.791000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:58.826 [2024-09-28 23:37:46.791007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:58.826 [2024-09-28 23:37:46.791017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:58.826 [2024-09-28 23:37:46.791023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.826 [2024-09-28 23:37:46.791040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:58.826 [2024-09-28 23:37:46.791045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:58.826 [2024-09-28 23:37:46.791052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:58.826 [2024-09-28 23:37:46.791058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.826 [2024-09-28 23:37:46.850684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:58.826 [2024-09-28 23:37:46.850722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:58.826 [2024-09-28 23:37:46.850733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:58.826 [2024-09-28 23:37:46.850742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.826 [2024-09-28 
23:37:46.899378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:58.826 [2024-09-28 23:37:46.899421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:58.826 [2024-09-28 23:37:46.899431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:58.826 [2024-09-28 23:37:46.899437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.826 [2024-09-28 23:37:46.899505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:58.826 [2024-09-28 23:37:46.899521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:58.826 [2024-09-28 23:37:46.899531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:58.826 [2024-09-28 23:37:46.899536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.826 [2024-09-28 23:37:46.899560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:58.826 [2024-09-28 23:37:46.899569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:58.826 [2024-09-28 23:37:46.899576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:58.826 [2024-09-28 23:37:46.899582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.826 [2024-09-28 23:37:46.899651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:58.826 [2024-09-28 23:37:46.899658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:58.826 [2024-09-28 23:37:46.899665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:58.826 [2024-09-28 23:37:46.899671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.826 [2024-09-28 23:37:46.899695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:58.826 [2024-09-28 23:37:46.899702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:58.826 [2024-09-28 23:37:46.899710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:58.826 [2024-09-28 23:37:46.899716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.826 [2024-09-28 23:37:46.899746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:58.826 [2024-09-28 23:37:46.899752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:58.826 [2024-09-28 23:37:46.899760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:58.826 [2024-09-28 23:37:46.899765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.826 [2024-09-28 23:37:46.899799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:58.826 [2024-09-28 23:37:46.899807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:58.826 [2024-09-28 23:37:46.899814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:58.826 [2024-09-28 23:37:46.899819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.826 [2024-09-28 23:37:46.899924] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 206.113 ms, result 0 00:16:59.392 23:37:47 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:16:59.392 23:37:47 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:59.651 [2024-09-28 23:37:47.566516] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:16:59.651 [2024-09-28 23:37:47.566654] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74327 ] 00:16:59.651 [2024-09-28 23:37:47.711038] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:59.909 [2024-09-28 23:37:47.856296] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:59.909 [2024-09-28 23:37:48.061287] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:59.909 [2024-09-28 23:37:48.061335] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:00.169 [2024-09-28 23:37:48.215063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.169 [2024-09-28 23:37:48.215114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:00.169 [2024-09-28 23:37:48.215130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:00.169 [2024-09-28 23:37:48.215137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.169 [2024-09-28 23:37:48.217768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.169 [2024-09-28 23:37:48.217798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:00.169 [2024-09-28 23:37:48.217807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.612 ms 00:17:00.169 [2024-09-28 23:37:48.217817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.169 [2024-09-28 23:37:48.217940] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:00.169 [2024-09-28 23:37:48.218649] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:00.169 [2024-09-28 23:37:48.218672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.169 [2024-09-28 23:37:48.218682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:00.169 [2024-09-28 23:37:48.218691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.742 ms 00:17:00.169 [2024-09-28 23:37:48.218698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.169 [2024-09-28 23:37:48.219783] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:00.169 [2024-09-28 23:37:48.231954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.169 [2024-09-28 23:37:48.231983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:00.169 [2024-09-28 23:37:48.231994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.172 ms 00:17:00.169 [2024-09-28 23:37:48.232002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.169 [2024-09-28 23:37:48.232084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.169 [2024-09-28 23:37:48.232095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:00.169 [2024-09-28 23:37:48.232105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.018 ms 00:17:00.169 [2024-09-28 23:37:48.232113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.169 [2024-09-28 23:37:48.236739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.169 [2024-09-28 23:37:48.236763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:00.169 [2024-09-28 23:37:48.236773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.587 ms 00:17:00.169 [2024-09-28 23:37:48.236780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.169 [2024-09-28 23:37:48.236863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.169 [2024-09-28 23:37:48.236874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:00.169 [2024-09-28 23:37:48.236882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:17:00.169 [2024-09-28 23:37:48.236889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.169 [2024-09-28 23:37:48.236913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.169 [2024-09-28 23:37:48.236921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:00.169 [2024-09-28 23:37:48.236929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:00.169 [2024-09-28 23:37:48.236936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.169 [2024-09-28 23:37:48.236956] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:00.169 [2024-09-28 23:37:48.240149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.169 [2024-09-28 23:37:48.240172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:00.169 [2024-09-28 23:37:48.240180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.198 ms 00:17:00.169 [2024-09-28 23:37:48.240188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.169 [2024-09-28 23:37:48.240221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.169 [2024-09-28 23:37:48.240232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:00.169 [2024-09-28 23:37:48.240240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:00.169 [2024-09-28 23:37:48.240247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.169 [2024-09-28 23:37:48.240264] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:00.169 [2024-09-28 23:37:48.240280] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:00.169 [2024-09-28 23:37:48.240312] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:00.169 [2024-09-28 23:37:48.240326] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:00.169 [2024-09-28 23:37:48.240430] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:00.169 [2024-09-28 23:37:48.240445] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:00.169 [2024-09-28 23:37:48.240456] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:00.169 [2024-09-28 23:37:48.240465] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:00.169 [2024-09-28 23:37:48.240474] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:00.169 [2024-09-28 23:37:48.240482] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:00.169 [2024-09-28 23:37:48.240489] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:00.169 [2024-09-28 23:37:48.240496] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:00.169 [2024-09-28 23:37:48.240504] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:00.169 [2024-09-28 23:37:48.240522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.169 [2024-09-28 23:37:48.240532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:00.169 [2024-09-28 23:37:48.240540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.260 ms 00:17:00.169 [2024-09-28 23:37:48.240547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.169 [2024-09-28 23:37:48.240634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.169 [2024-09-28 23:37:48.240642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:00.169 [2024-09-28 23:37:48.240650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:17:00.169 [2024-09-28 23:37:48.240656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.169 [2024-09-28 23:37:48.240754] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:00.169 [2024-09-28 23:37:48.240763] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:00.169 [2024-09-28 23:37:48.240774] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:00.169 [2024-09-28 23:37:48.240781] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:00.169 [2024-09-28 23:37:48.240788] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:00.169 [2024-09-28 23:37:48.240795] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:00.169 [2024-09-28 23:37:48.240802] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:00.169 [2024-09-28 23:37:48.240809] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:00.169 [2024-09-28 23:37:48.240815] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:00.169 [2024-09-28 23:37:48.240822] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:00.169 [2024-09-28 23:37:48.240828] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:00.169 [2024-09-28 23:37:48.240840] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:00.169 [2024-09-28 23:37:48.240848] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:00.169 [2024-09-28 23:37:48.240855] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:00.169 [2024-09-28 23:37:48.240862] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:00.169 [2024-09-28 23:37:48.240868] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:00.169 [2024-09-28 23:37:48.240875] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:00.169 [2024-09-28 23:37:48.240881] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:00.169 [2024-09-28 23:37:48.240888] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:00.169 [2024-09-28 23:37:48.240896] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:00.169 [2024-09-28 23:37:48.240902] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:00.169 [2024-09-28 23:37:48.240909] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:00.169 [2024-09-28 23:37:48.240915] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:00.169 [2024-09-28 23:37:48.240921] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:00.169 [2024-09-28 23:37:48.240927] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:00.169 [2024-09-28 23:37:48.240934] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:00.169 [2024-09-28 23:37:48.240940] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:00.169 [2024-09-28 23:37:48.240946] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:00.169 [2024-09-28 23:37:48.240953] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:00.169 [2024-09-28 23:37:48.240960] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:00.169 [2024-09-28 23:37:48.240966] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:00.169 [2024-09-28 23:37:48.240972] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:00.169 [2024-09-28 23:37:48.240978] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:00.169 [2024-09-28 23:37:48.240984] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:00.169 [2024-09-28 23:37:48.240990] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:00.169 [2024-09-28 23:37:48.240997] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:00.169 [2024-09-28 23:37:48.241003] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:00.170 [2024-09-28 23:37:48.241010] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:00.170 [2024-09-28 23:37:48.241016] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:00.170 [2024-09-28 23:37:48.241022] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:00.170 [2024-09-28 23:37:48.241028] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:00.170 [2024-09-28 23:37:48.241034] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:00.170 [2024-09-28 23:37:48.241040] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:00.170 [2024-09-28 23:37:48.241047] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:00.170 [2024-09-28 23:37:48.241054] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:00.170 [2024-09-28 23:37:48.241062] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:00.170 [2024-09-28 23:37:48.241069] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:00.170 [2024-09-28 23:37:48.241076] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:00.170 
[2024-09-28 23:37:48.241083] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:00.170 [2024-09-28 23:37:48.241089] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:00.170 [2024-09-28 23:37:48.241096] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:00.170 [2024-09-28 23:37:48.241102] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:00.170 [2024-09-28 23:37:48.241109] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:00.170 [2024-09-28 23:37:48.241116] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:00.170 [2024-09-28 23:37:48.241128] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:00.170 [2024-09-28 23:37:48.241136] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:00.170 [2024-09-28 23:37:48.241143] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:00.170 [2024-09-28 23:37:48.241151] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:00.170 [2024-09-28 23:37:48.241157] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:00.170 [2024-09-28 23:37:48.241164] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:00.170 [2024-09-28 23:37:48.241171] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:00.170 [2024-09-28 23:37:48.241178] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:00.170 [2024-09-28 23:37:48.241184] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:00.170 [2024-09-28 23:37:48.241191] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:00.170 [2024-09-28 23:37:48.241198] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:00.170 [2024-09-28 23:37:48.241205] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:00.170 [2024-09-28 23:37:48.241212] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:00.170 [2024-09-28 23:37:48.241218] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:00.170 [2024-09-28 23:37:48.241225] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:00.170 [2024-09-28 23:37:48.241232] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:00.170 [2024-09-28 23:37:48.241239] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:00.170 [2024-09-28 23:37:48.241247] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:00.170 [2024-09-28 23:37:48.241254] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:00.170 [2024-09-28 23:37:48.241261] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:00.170 [2024-09-28 23:37:48.241268] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:00.170 [2024-09-28 23:37:48.241275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.170 [2024-09-28 23:37:48.241284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:00.170 [2024-09-28 23:37:48.241293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.589 ms 00:17:00.170 [2024-09-28 23:37:48.241300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.170 [2024-09-28 23:37:48.285286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.170 [2024-09-28 23:37:48.285323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:00.170 [2024-09-28 23:37:48.285335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.922 ms 00:17:00.170 [2024-09-28 23:37:48.285344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.170 [2024-09-28 23:37:48.285476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.170 [2024-09-28 23:37:48.285488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:00.170 [2024-09-28 23:37:48.285496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:17:00.170 [2024-09-28 23:37:48.285503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.170 [2024-09-28 23:37:48.315603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.170 [2024-09-28 23:37:48.315633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:00.170 [2024-09-28 23:37:48.315643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.066 ms 00:17:00.170 [2024-09-28 23:37:48.315651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.170 [2024-09-28 23:37:48.315728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.170 [2024-09-28 23:37:48.315738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:00.170 [2024-09-28 23:37:48.315747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:17:00.170 [2024-09-28 23:37:48.315754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.170 [2024-09-28 23:37:48.316059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.170 [2024-09-28 23:37:48.316080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:00.170 [2024-09-28 23:37:48.316088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.281 ms 00:17:00.170 [2024-09-28 23:37:48.316095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.170 [2024-09-28 
23:37:48.316218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.170 [2024-09-28 23:37:48.316227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:00.170 [2024-09-28 23:37:48.316235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:17:00.170 [2024-09-28 23:37:48.316242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.170 [2024-09-28 23:37:48.328763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.170 [2024-09-28 23:37:48.328792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:00.170 [2024-09-28 23:37:48.328802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.501 ms 00:17:00.170 [2024-09-28 23:37:48.328809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.429 [2024-09-28 23:37:48.341029] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:17:00.429 [2024-09-28 23:37:48.341063] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:00.429 [2024-09-28 23:37:48.341073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.429 [2024-09-28 23:37:48.341081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:00.429 [2024-09-28 23:37:48.341089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.170 ms 00:17:00.429 [2024-09-28 23:37:48.341096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.429 [2024-09-28 23:37:48.365520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.429 [2024-09-28 23:37:48.365565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:00.429 [2024-09-28 23:37:48.365580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.357 ms 00:17:00.429 [2024-09-28 23:37:48.365587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.429 [2024-09-28 23:37:48.376841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.429 [2024-09-28 23:37:48.376868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:00.429 [2024-09-28 23:37:48.376877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.185 ms 00:17:00.429 [2024-09-28 23:37:48.376884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.429 [2024-09-28 23:37:48.388098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.429 [2024-09-28 23:37:48.388125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:00.429 [2024-09-28 23:37:48.388134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.154 ms 00:17:00.429 [2024-09-28 23:37:48.388142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.429 [2024-09-28 23:37:48.388753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.429 [2024-09-28 23:37:48.388771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:00.429 [2024-09-28 23:37:48.388780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.527 ms 00:17:00.429 [2024-09-28 23:37:48.388787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.429 [2024-09-28 23:37:48.443412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:17:00.429 [2024-09-28 23:37:48.443458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:00.429 [2024-09-28 23:37:48.443470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.602 ms 00:17:00.429 [2024-09-28 23:37:48.443478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.429 [2024-09-28 23:37:48.454040] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:00.429 [2024-09-28 23:37:48.467715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.429 [2024-09-28 23:37:48.467748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:00.429 [2024-09-28 23:37:48.467759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.114 ms 00:17:00.429 [2024-09-28 23:37:48.467767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.429 [2024-09-28 23:37:48.467848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.429 [2024-09-28 23:37:48.467859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:00.429 [2024-09-28 23:37:48.467867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:17:00.429 [2024-09-28 23:37:48.467875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.429 [2024-09-28 23:37:48.467924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.429 [2024-09-28 23:37:48.467935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:00.429 [2024-09-28 23:37:48.467943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:17:00.429 [2024-09-28 23:37:48.467950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.429 [2024-09-28 23:37:48.467970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.429 [2024-09-28 23:37:48.467978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:00.429 [2024-09-28 23:37:48.467985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:00.429 [2024-09-28 23:37:48.467993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.429 [2024-09-28 23:37:48.468025] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:00.429 [2024-09-28 23:37:48.468034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.429 [2024-09-28 23:37:48.468043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:00.429 [2024-09-28 23:37:48.468051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:00.429 [2024-09-28 23:37:48.468058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.429 [2024-09-28 23:37:48.490980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.429 [2024-09-28 23:37:48.491011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:00.429 [2024-09-28 23:37:48.491023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.903 ms 00:17:00.429 [2024-09-28 23:37:48.491030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.429 [2024-09-28 23:37:48.491119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.429 [2024-09-28 23:37:48.491129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:17:00.429 [2024-09-28 23:37:48.491137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:17:00.429 [2024-09-28 23:37:48.491144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.429 [2024-09-28 23:37:48.492199] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:00.429 [2024-09-28 23:37:48.495216] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 276.848 ms, result 0 00:17:00.429 [2024-09-28 23:37:48.495890] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:00.429 [2024-09-28 23:37:48.508663] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:06.460  Copying: 43/256 [MB] (43 MBps) Copying: 87/256 [MB] (43 MBps) Copying: 128/256 [MB] (40 MBps) Copying: 170/256 [MB] (41 MBps) Copying: 213/256 [MB] (43 MBps) Copying: 256/256 [MB] (average 42 MBps)[2024-09-28 23:37:54.499876] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:06.460 [2024-09-28 23:37:54.508913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.460 [2024-09-28 23:37:54.508950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:06.460 [2024-09-28 23:37:54.508963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:06.460 [2024-09-28 23:37:54.508972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.460 [2024-09-28 23:37:54.508993] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:06.460 [2024-09-28 23:37:54.511639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.460 [2024-09-28 23:37:54.511665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:06.460 [2024-09-28 23:37:54.511675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.633 ms 00:17:06.460 [2024-09-28 23:37:54.511684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.460 [2024-09-28 23:37:54.511938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.460 [2024-09-28 23:37:54.511955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:06.460 [2024-09-28 23:37:54.511963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.234 ms 00:17:06.460 [2024-09-28 23:37:54.511971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.460 [2024-09-28 23:37:54.515660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.460 [2024-09-28 23:37:54.515677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:06.460 [2024-09-28 23:37:54.515686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.674 ms 00:17:06.460 [2024-09-28 23:37:54.515694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.460 [2024-09-28 23:37:54.522648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.460 [2024-09-28 23:37:54.522670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:06.460 [2024-09-28 23:37:54.522683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.936 ms 00:17:06.460 [2024-09-28 23:37:54.522690] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.460 [2024-09-28 23:37:54.545799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.460 [2024-09-28 23:37:54.545830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:06.460 [2024-09-28 23:37:54.545842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.052 ms 00:17:06.460 [2024-09-28 23:37:54.545849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.460 [2024-09-28 23:37:54.559775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.460 [2024-09-28 23:37:54.559805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:06.460 [2024-09-28 23:37:54.559816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.887 ms 00:17:06.460 [2024-09-28 23:37:54.559823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.460 [2024-09-28 23:37:54.559955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.460 [2024-09-28 23:37:54.559965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:06.460 [2024-09-28 23:37:54.559974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:17:06.460 [2024-09-28 23:37:54.559981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.460 [2024-09-28 23:37:54.582914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.460 [2024-09-28 23:37:54.582943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:06.460 [2024-09-28 23:37:54.582954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.912 ms 00:17:06.460 [2024-09-28 23:37:54.582961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.460 [2024-09-28 23:37:54.605020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.460 [2024-09-28 23:37:54.605047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:06.460 [2024-09-28 23:37:54.605057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.038 ms 00:17:06.460 [2024-09-28 23:37:54.605064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.720 [2024-09-28 23:37:54.627013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.720 [2024-09-28 23:37:54.627040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:06.720 [2024-09-28 23:37:54.627049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.929 ms 00:17:06.720 [2024-09-28 23:37:54.627056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.720 [2024-09-28 23:37:54.649067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.720 [2024-09-28 23:37:54.649094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:06.720 [2024-09-28 23:37:54.649104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.965 ms 00:17:06.720 [2024-09-28 23:37:54.649112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.720 [2024-09-28 23:37:54.649148] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:06.720 [2024-09-28 23:37:54.649162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:06.720 [2024-09-28 23:37:54.649171] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:06.720 [2024-09-28 23:37:54.649179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:06.720 [2024-09-28 23:37:54.649187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:06.720 [2024-09-28 23:37:54.649194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:06.720 [2024-09-28 23:37:54.649202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:06.720 [2024-09-28 23:37:54.649209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:06.720 [2024-09-28 23:37:54.649217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:06.720 [2024-09-28 23:37:54.649224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:06.720 [2024-09-28 23:37:54.649231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:06.720 [2024-09-28 23:37:54.649239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:06.720 [2024-09-28 23:37:54.649246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:06.720 [2024-09-28 23:37:54.649253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:06.720 [2024-09-28 23:37:54.649261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:06.720 [2024-09-28 23:37:54.649268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:06.720 [2024-09-28 23:37:54.649275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:06.720 [2024-09-28 23:37:54.649283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:06.720 [2024-09-28 23:37:54.649290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:06.720 [2024-09-28 23:37:54.649297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:06.720 [2024-09-28 23:37:54.649305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:06.720 [2024-09-28 23:37:54.649312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:06.720 [2024-09-28 23:37:54.649319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:06.720 [2024-09-28 23:37:54.649326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:06.720 [2024-09-28 23:37:54.649333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:06.720 [2024-09-28 23:37:54.649340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:06.720 [2024-09-28 23:37:54.649348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:06.720 [2024-09-28 
23:37:54.649355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:06.720 [2024-09-28 23:37:54.649362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:06.720 [2024-09-28 23:37:54.649370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:06.720 [2024-09-28 23:37:54.649377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:06.720 [2024-09-28 23:37:54.649385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:06.720 [2024-09-28 23:37:54.649392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:06.720 [2024-09-28 23:37:54.649399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:06.720 [2024-09-28 23:37:54.649406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:06.720 [2024-09-28 23:37:54.649413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:06.720 [2024-09-28 23:37:54.649420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:06.720 [2024-09-28 23:37:54.649427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:06.720 [2024-09-28 23:37:54.649434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:06.720 [2024-09-28 23:37:54.649441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:06.720 [2024-09-28 23:37:54.649449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:06.720 [2024-09-28 23:37:54.649456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:06.720 [2024-09-28 23:37:54.649463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:06.720 [2024-09-28 23:37:54.649470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:06.720 [2024-09-28 23:37:54.649477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:06.720 [2024-09-28 23:37:54.649484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:06.720 [2024-09-28 23:37:54.649491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:06.720 [2024-09-28 23:37:54.649498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:06.720 [2024-09-28 23:37:54.649506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:06.720 [2024-09-28 23:37:54.649523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:06.720 [2024-09-28 23:37:54.649530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:06.720 [2024-09-28 23:37:54.649538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 
00:17:06.720 [2024-09-28 23:37:54.649545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:06.720 [2024-09-28 23:37:54.649552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:06.720 [2024-09-28 23:37:54.649560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:06.720 [2024-09-28 23:37:54.649568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:06.720 [2024-09-28 23:37:54.649576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:06.720 [2024-09-28 23:37:54.649583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:06.720 [2024-09-28 23:37:54.649590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:06.720 [2024-09-28 23:37:54.649597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:06.720 [2024-09-28 23:37:54.649604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:06.720 [2024-09-28 23:37:54.649613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:06.720 [2024-09-28 23:37:54.649628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:06.720 [2024-09-28 23:37:54.649636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:06.720 [2024-09-28 23:37:54.649643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:06.720 [2024-09-28 23:37:54.649651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:06.721 [2024-09-28 23:37:54.649658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:06.721 [2024-09-28 23:37:54.649666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:06.721 [2024-09-28 23:37:54.649674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:06.721 [2024-09-28 23:37:54.649681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:06.721 [2024-09-28 23:37:54.649688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:06.721 [2024-09-28 23:37:54.649695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:06.721 [2024-09-28 23:37:54.649703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:06.721 [2024-09-28 23:37:54.649710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:06.721 [2024-09-28 23:37:54.649717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:06.721 [2024-09-28 23:37:54.649724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:06.721 [2024-09-28 23:37:54.649732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 
wr_cnt: 0 state: free 00:17:06.721 [2024-09-28 23:37:54.649739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:06.721 [2024-09-28 23:37:54.649747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:06.721 [2024-09-28 23:37:54.649754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:06.721 [2024-09-28 23:37:54.649761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:06.721 [2024-09-28 23:37:54.649768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:06.721 [2024-09-28 23:37:54.649776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:06.721 [2024-09-28 23:37:54.649783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:06.721 [2024-09-28 23:37:54.649790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:06.721 [2024-09-28 23:37:54.649797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:06.721 [2024-09-28 23:37:54.649805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:06.721 [2024-09-28 23:37:54.649812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:06.721 [2024-09-28 23:37:54.649819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:06.721 [2024-09-28 23:37:54.649826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:06.721 [2024-09-28 23:37:54.649833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:06.721 [2024-09-28 23:37:54.649840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:06.721 [2024-09-28 23:37:54.649848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:06.721 [2024-09-28 23:37:54.649856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:06.721 [2024-09-28 23:37:54.649863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:06.721 [2024-09-28 23:37:54.649870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:06.721 [2024-09-28 23:37:54.649878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:06.721 [2024-09-28 23:37:54.649885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:06.721 [2024-09-28 23:37:54.649893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:06.721 [2024-09-28 23:37:54.649901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:06.721 [2024-09-28 23:37:54.649915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:06.721 [2024-09-28 23:37:54.649931] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl0] 00:17:06.721 [2024-09-28 23:37:54.649938] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 654be1da-fb74-4145-8224-07c374972a2a 00:17:06.721 [2024-09-28 23:37:54.649947] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:06.721 [2024-09-28 23:37:54.649954] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:06.721 [2024-09-28 23:37:54.649961] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:06.721 [2024-09-28 23:37:54.649971] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:06.721 [2024-09-28 23:37:54.649978] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:06.721 [2024-09-28 23:37:54.649985] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:06.721 [2024-09-28 23:37:54.649992] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:06.721 [2024-09-28 23:37:54.649998] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:06.721 [2024-09-28 23:37:54.650003] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:06.721 [2024-09-28 23:37:54.650010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.721 [2024-09-28 23:37:54.650018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:06.721 [2024-09-28 23:37:54.650025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.863 ms 00:17:06.721 [2024-09-28 23:37:54.650032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.721 [2024-09-28 23:37:54.662144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.721 [2024-09-28 23:37:54.662174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:06.721 [2024-09-28 23:37:54.662184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.094 ms 00:17:06.721 [2024-09-28 23:37:54.662192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.721 [2024-09-28 23:37:54.662562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.721 [2024-09-28 23:37:54.662572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:06.721 [2024-09-28 23:37:54.662580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.336 ms 00:17:06.721 [2024-09-28 23:37:54.662587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.721 [2024-09-28 23:37:54.692440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:06.721 [2024-09-28 23:37:54.692473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:06.721 [2024-09-28 23:37:54.692483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:06.721 [2024-09-28 23:37:54.692490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.721 [2024-09-28 23:37:54.692578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:06.721 [2024-09-28 23:37:54.692587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:06.721 [2024-09-28 23:37:54.692595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:06.721 [2024-09-28 23:37:54.692603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.721 [2024-09-28 23:37:54.692642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:06.721 
[2024-09-28 23:37:54.692654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:06.721 [2024-09-28 23:37:54.692662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:06.721 [2024-09-28 23:37:54.692669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.721 [2024-09-28 23:37:54.692685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:06.721 [2024-09-28 23:37:54.692693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:06.721 [2024-09-28 23:37:54.692700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:06.721 [2024-09-28 23:37:54.692707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.721 [2024-09-28 23:37:54.767861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:06.721 [2024-09-28 23:37:54.767904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:06.721 [2024-09-28 23:37:54.767914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:06.721 [2024-09-28 23:37:54.767922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.721 [2024-09-28 23:37:54.829625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:06.721 [2024-09-28 23:37:54.829668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:06.721 [2024-09-28 23:37:54.829678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:06.721 [2024-09-28 23:37:54.829685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.721 [2024-09-28 23:37:54.829737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:06.721 [2024-09-28 23:37:54.829746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:06.721 [2024-09-28 23:37:54.829758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:06.721 [2024-09-28 23:37:54.829765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.721 [2024-09-28 23:37:54.829792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:06.721 [2024-09-28 23:37:54.829800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:06.721 [2024-09-28 23:37:54.829808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:06.721 [2024-09-28 23:37:54.829815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.721 [2024-09-28 23:37:54.829900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:06.721 [2024-09-28 23:37:54.829908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:06.721 [2024-09-28 23:37:54.829916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:06.721 [2024-09-28 23:37:54.829925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.721 [2024-09-28 23:37:54.829954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:06.721 [2024-09-28 23:37:54.829962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:06.721 [2024-09-28 23:37:54.829969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:06.721 [2024-09-28 23:37:54.829976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.721 [2024-09-28 23:37:54.830010] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:06.721 [2024-09-28 23:37:54.830018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:06.721 [2024-09-28 23:37:54.830026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:06.721 [2024-09-28 23:37:54.830035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.721 [2024-09-28 23:37:54.830074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:06.721 [2024-09-28 23:37:54.830083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:06.721 [2024-09-28 23:37:54.830091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:06.722 [2024-09-28 23:37:54.830098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.722 [2024-09-28 23:37:54.830230] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 321.305 ms, result 0 00:17:07.655 00:17:07.655 00:17:07.655 23:37:55 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:17:07.655 23:37:55 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:17:08.220 23:37:56 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:08.220 [2024-09-28 23:37:56.159407] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:17:08.220 [2024-09-28 23:37:56.159505] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74427 ] 00:17:08.220 [2024-09-28 23:37:56.298095] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:08.478 [2024-09-28 23:37:56.442873] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:08.737 [2024-09-28 23:37:56.649397] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:08.737 [2024-09-28 23:37:56.649451] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:08.737 [2024-09-28 23:37:56.802260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.737 [2024-09-28 23:37:56.802310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:08.737 [2024-09-28 23:37:56.802326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:08.737 [2024-09-28 23:37:56.802334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.737 [2024-09-28 23:37:56.804953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.737 [2024-09-28 23:37:56.804989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:08.737 [2024-09-28 23:37:56.804999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.600 ms 00:17:08.737 [2024-09-28 23:37:56.805010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.737 [2024-09-28 23:37:56.805135] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:08.737 [2024-09-28 23:37:56.805878] mngt/ftl_mngt_bdev.c: 
236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:08.737 [2024-09-28 23:37:56.805906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.737 [2024-09-28 23:37:56.805916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:08.737 [2024-09-28 23:37:56.805925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.781 ms 00:17:08.737 [2024-09-28 23:37:56.805932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.737 [2024-09-28 23:37:56.807277] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:08.737 [2024-09-28 23:37:56.819455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.738 [2024-09-28 23:37:56.819492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:08.738 [2024-09-28 23:37:56.819504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.180 ms 00:17:08.738 [2024-09-28 23:37:56.819521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.738 [2024-09-28 23:37:56.819894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.738 [2024-09-28 23:37:56.819935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:08.738 [2024-09-28 23:37:56.819950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:17:08.738 [2024-09-28 23:37:56.819958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.738 [2024-09-28 23:37:56.824731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.738 [2024-09-28 23:37:56.824766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:08.738 [2024-09-28 23:37:56.824777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.727 ms 00:17:08.738 [2024-09-28 23:37:56.824784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.738 [2024-09-28 23:37:56.824887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.738 [2024-09-28 23:37:56.824899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:08.738 [2024-09-28 23:37:56.824908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:17:08.738 [2024-09-28 23:37:56.824915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.738 [2024-09-28 23:37:56.824941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.738 [2024-09-28 23:37:56.824950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:08.738 [2024-09-28 23:37:56.824958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:08.738 [2024-09-28 23:37:56.824964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.738 [2024-09-28 23:37:56.824986] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:08.738 [2024-09-28 23:37:56.828498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.738 [2024-09-28 23:37:56.828540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:08.738 [2024-09-28 23:37:56.828549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.518 ms 00:17:08.738 [2024-09-28 23:37:56.828557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.738 [2024-09-28 
23:37:56.828603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.738 [2024-09-28 23:37:56.828614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:08.738 [2024-09-28 23:37:56.828622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:17:08.738 [2024-09-28 23:37:56.828629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.738 [2024-09-28 23:37:56.828647] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:08.738 [2024-09-28 23:37:56.828664] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:08.738 [2024-09-28 23:37:56.828698] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:08.738 [2024-09-28 23:37:56.828713] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:08.738 [2024-09-28 23:37:56.828818] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:08.738 [2024-09-28 23:37:56.828835] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:08.738 [2024-09-28 23:37:56.828846] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:08.738 [2024-09-28 23:37:56.828855] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:08.738 [2024-09-28 23:37:56.828864] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:08.738 [2024-09-28 23:37:56.828872] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:08.738 [2024-09-28 23:37:56.828880] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:08.738 [2024-09-28 23:37:56.828887] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:08.738 [2024-09-28 23:37:56.828894] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:08.738 [2024-09-28 23:37:56.828901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.738 [2024-09-28 23:37:56.828911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:08.738 [2024-09-28 23:37:56.828919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.256 ms 00:17:08.738 [2024-09-28 23:37:56.828927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.738 [2024-09-28 23:37:56.829026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.738 [2024-09-28 23:37:56.829042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:08.738 [2024-09-28 23:37:56.829050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:17:08.738 [2024-09-28 23:37:56.829057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.738 [2024-09-28 23:37:56.829158] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:08.738 [2024-09-28 23:37:56.829173] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:08.738 [2024-09-28 23:37:56.829184] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:08.738 [2024-09-28 23:37:56.829192] ftl_layout.c: 133:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.12 MiB 00:17:08.738 [2024-09-28 23:37:56.829199] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:08.738 [2024-09-28 23:37:56.829206] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:08.738 [2024-09-28 23:37:56.829213] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:08.738 [2024-09-28 23:37:56.829219] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:08.738 [2024-09-28 23:37:56.829226] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:08.738 [2024-09-28 23:37:56.829233] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:08.738 [2024-09-28 23:37:56.829239] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:08.738 [2024-09-28 23:37:56.829251] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:08.738 [2024-09-28 23:37:56.829258] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:08.738 [2024-09-28 23:37:56.829264] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:08.738 [2024-09-28 23:37:56.829271] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:08.738 [2024-09-28 23:37:56.829278] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:08.738 [2024-09-28 23:37:56.829284] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:08.738 [2024-09-28 23:37:56.829291] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:08.738 [2024-09-28 23:37:56.829297] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:08.738 [2024-09-28 23:37:56.829304] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:08.738 [2024-09-28 23:37:56.829311] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:08.738 [2024-09-28 23:37:56.829318] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:08.738 [2024-09-28 23:37:56.829324] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:08.738 [2024-09-28 23:37:56.829330] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:08.738 [2024-09-28 23:37:56.829336] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:08.738 [2024-09-28 23:37:56.829342] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:08.738 [2024-09-28 23:37:56.829349] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:08.738 [2024-09-28 23:37:56.829355] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:08.738 [2024-09-28 23:37:56.829361] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:08.738 [2024-09-28 23:37:56.829368] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:08.738 [2024-09-28 23:37:56.829374] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:08.738 [2024-09-28 23:37:56.829381] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:08.738 [2024-09-28 23:37:56.829387] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:08.738 [2024-09-28 23:37:56.829393] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:08.738 [2024-09-28 23:37:56.829399] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:08.738 [2024-09-28 23:37:56.829405] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:08.738 [2024-09-28 23:37:56.829412] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:08.738 [2024-09-28 23:37:56.829418] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:08.738 [2024-09-28 23:37:56.829424] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:08.738 [2024-09-28 23:37:56.829431] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:08.738 [2024-09-28 23:37:56.829437] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:08.738 [2024-09-28 23:37:56.829443] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:08.738 [2024-09-28 23:37:56.829449] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:08.738 [2024-09-28 23:37:56.829456] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:08.738 [2024-09-28 23:37:56.829463] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:08.738 [2024-09-28 23:37:56.829470] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:08.738 [2024-09-28 23:37:56.829477] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:08.738 [2024-09-28 23:37:56.829484] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:08.738 [2024-09-28 23:37:56.829490] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:08.738 [2024-09-28 23:37:56.829497] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:08.738 [2024-09-28 23:37:56.829503] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:08.738 [2024-09-28 23:37:56.829523] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:08.738 [2024-09-28 23:37:56.829534] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:08.738 [2024-09-28 23:37:56.829548] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:08.738 [2024-09-28 23:37:56.829561] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:08.738 [2024-09-28 23:37:56.829569] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:08.739 [2024-09-28 23:37:56.829577] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:08.739 [2024-09-28 23:37:56.829584] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:08.739 [2024-09-28 23:37:56.829591] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:08.739 [2024-09-28 23:37:56.829598] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:08.739 [2024-09-28 23:37:56.829605] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:08.739 [2024-09-28 23:37:56.829612] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:08.739 [2024-09-28 
23:37:56.829620] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:08.739 [2024-09-28 23:37:56.829627] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:08.739 [2024-09-28 23:37:56.829634] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:08.739 [2024-09-28 23:37:56.829641] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:08.739 [2024-09-28 23:37:56.829648] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:08.739 [2024-09-28 23:37:56.829655] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:08.739 [2024-09-28 23:37:56.829662] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:08.739 [2024-09-28 23:37:56.829669] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:08.739 [2024-09-28 23:37:56.829676] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:08.739 [2024-09-28 23:37:56.829684] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:08.739 [2024-09-28 23:37:56.829691] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:08.739 [2024-09-28 23:37:56.829698] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:08.739 [2024-09-28 23:37:56.829705] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:08.739 [2024-09-28 23:37:56.829712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.739 [2024-09-28 23:37:56.829721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:08.739 [2024-09-28 23:37:56.829729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.622 ms 00:17:08.739 [2024-09-28 23:37:56.829736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.739 [2024-09-28 23:37:56.867653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.739 [2024-09-28 23:37:56.867706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:08.739 [2024-09-28 23:37:56.867720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.861 ms 00:17:08.739 [2024-09-28 23:37:56.867728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.739 [2024-09-28 23:37:56.867890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.739 [2024-09-28 23:37:56.867903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:08.739 [2024-09-28 23:37:56.867912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:17:08.739 [2024-09-28 23:37:56.867920] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.998 [2024-09-28 23:37:56.909444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.998 [2024-09-28 23:37:56.909489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:08.998 [2024-09-28 23:37:56.909502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.499 ms 00:17:08.998 [2024-09-28 23:37:56.909520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.998 [2024-09-28 23:37:56.910147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.998 [2024-09-28 23:37:56.910175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:08.998 [2024-09-28 23:37:56.910185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:08.998 [2024-09-28 23:37:56.910194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.998 [2024-09-28 23:37:56.910594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.998 [2024-09-28 23:37:56.910621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:08.998 [2024-09-28 23:37:56.910631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.372 ms 00:17:08.998 [2024-09-28 23:37:56.910638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.998 [2024-09-28 23:37:56.910766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.998 [2024-09-28 23:37:56.910781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:08.998 [2024-09-28 23:37:56.910790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:17:08.998 [2024-09-28 23:37:56.910798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.998 [2024-09-28 23:37:56.924879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.998 [2024-09-28 23:37:56.924916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:08.998 [2024-09-28 23:37:56.924927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.060 ms 00:17:08.998 [2024-09-28 23:37:56.924936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.998 [2024-09-28 23:37:56.937358] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:17:08.998 [2024-09-28 23:37:56.937394] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:08.998 [2024-09-28 23:37:56.937406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.998 [2024-09-28 23:37:56.937414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:08.998 [2024-09-28 23:37:56.937422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.357 ms 00:17:08.998 [2024-09-28 23:37:56.937430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.998 [2024-09-28 23:37:56.961936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.998 [2024-09-28 23:37:56.961986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:08.998 [2024-09-28 23:37:56.962005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.429 ms 00:17:08.998 [2024-09-28 23:37:56.962014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.998 [2024-09-28 
23:37:56.973254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.998 [2024-09-28 23:37:56.973293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:08.998 [2024-09-28 23:37:56.973304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.149 ms 00:17:08.998 [2024-09-28 23:37:56.973311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.998 [2024-09-28 23:37:56.984478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.998 [2024-09-28 23:37:56.984532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:08.998 [2024-09-28 23:37:56.984542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.097 ms 00:17:08.998 [2024-09-28 23:37:56.984550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.998 [2024-09-28 23:37:56.985158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.998 [2024-09-28 23:37:56.985183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:08.998 [2024-09-28 23:37:56.985192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.518 ms 00:17:08.998 [2024-09-28 23:37:56.985199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.998 [2024-09-28 23:37:57.038932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.998 [2024-09-28 23:37:57.038980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:08.998 [2024-09-28 23:37:57.038993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.709 ms 00:17:08.998 [2024-09-28 23:37:57.039001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.998 [2024-09-28 23:37:57.049192] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:08.998 [2024-09-28 23:37:57.062647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.998 [2024-09-28 23:37:57.062685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:08.998 [2024-09-28 23:37:57.062696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.537 ms 00:17:08.998 [2024-09-28 23:37:57.062703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.998 [2024-09-28 23:37:57.062783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.998 [2024-09-28 23:37:57.062794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:08.998 [2024-09-28 23:37:57.062802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:17:08.998 [2024-09-28 23:37:57.062810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.998 [2024-09-28 23:37:57.062856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.998 [2024-09-28 23:37:57.062868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:08.998 [2024-09-28 23:37:57.062876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:17:08.998 [2024-09-28 23:37:57.062883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.998 [2024-09-28 23:37:57.062906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.998 [2024-09-28 23:37:57.062914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:08.998 [2024-09-28 23:37:57.062922] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:08.998 [2024-09-28 23:37:57.062929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.998 [2024-09-28 23:37:57.062958] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:08.998 [2024-09-28 23:37:57.062966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.998 [2024-09-28 23:37:57.062976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:08.998 [2024-09-28 23:37:57.062983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:08.998 [2024-09-28 23:37:57.062990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.998 [2024-09-28 23:37:57.085945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.998 [2024-09-28 23:37:57.085992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:08.998 [2024-09-28 23:37:57.086003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.933 ms 00:17:08.998 [2024-09-28 23:37:57.086011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.998 [2024-09-28 23:37:57.086102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.998 [2024-09-28 23:37:57.086113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:08.998 [2024-09-28 23:37:57.086122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:17:08.998 [2024-09-28 23:37:57.086129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.998 [2024-09-28 23:37:57.087237] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:08.998 [2024-09-28 23:37:57.090110] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 284.703 ms, result 0 00:17:08.998 [2024-09-28 23:37:57.090737] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:08.998 [2024-09-28 23:37:57.103670] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:09.258  Copying: 4096/4096 [kB] (average 39 MBps)[2024-09-28 23:37:57.207427] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:09.258 [2024-09-28 23:37:57.216211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.258 [2024-09-28 23:37:57.216247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:09.258 [2024-09-28 23:37:57.216259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:09.258 [2024-09-28 23:37:57.216267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.258 [2024-09-28 23:37:57.216288] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:09.258 [2024-09-28 23:37:57.218868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.258 [2024-09-28 23:37:57.218898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:09.258 [2024-09-28 23:37:57.218908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.569 ms 00:17:09.258 [2024-09-28 23:37:57.218915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
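The trace_step records throughout this output always arrive as a fixed group of NOTICE entries from mngt/ftl_mngt.c — Action/Rollback (427), name (428), duration (430), status (431) — so per-step timings can be pulled straight out of a capture of this console output. A minimal sketch, assuming one console entry per line as Jenkins prints them and a hypothetical capture file named console.log:

# Rank FTL management steps by duration, pairing each 428 (name) entry
# with the 430 (duration) entry that follows it.
grep -E '428:trace_step|430:trace_step' console.log \
  | sed -E 's/.*(name|duration): //' \
  | paste - - \
  | sort -t$'\t' -k2 -rn \
  | head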
00:17:09.258 [2024-09-28 23:37:57.220900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.258 [2024-09-28 23:37:57.220937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:09.258 [2024-09-28 23:37:57.220946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.963 ms 00:17:09.258 [2024-09-28 23:37:57.220953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.258 [2024-09-28 23:37:57.224887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.258 [2024-09-28 23:37:57.224916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:09.258 [2024-09-28 23:37:57.224924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.919 ms 00:17:09.258 [2024-09-28 23:37:57.224931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.258 [2024-09-28 23:37:57.231845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.258 [2024-09-28 23:37:57.231875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:09.258 [2024-09-28 23:37:57.231889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.889 ms 00:17:09.258 [2024-09-28 23:37:57.231897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.258 [2024-09-28 23:37:57.254836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.258 [2024-09-28 23:37:57.254868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:09.258 [2024-09-28 23:37:57.254878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.887 ms 00:17:09.258 [2024-09-28 23:37:57.254885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.258 [2024-09-28 23:37:57.268835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.258 [2024-09-28 23:37:57.268867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:09.258 [2024-09-28 23:37:57.268878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.917 ms 00:17:09.258 [2024-09-28 23:37:57.268886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.258 [2024-09-28 23:37:57.269013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.258 [2024-09-28 23:37:57.269023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:09.258 [2024-09-28 23:37:57.269032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:17:09.259 [2024-09-28 23:37:57.269039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.259 [2024-09-28 23:37:57.292178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.259 [2024-09-28 23:37:57.292210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:09.259 [2024-09-28 23:37:57.292219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.120 ms 00:17:09.259 [2024-09-28 23:37:57.292227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.259 [2024-09-28 23:37:57.314706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.259 [2024-09-28 23:37:57.314736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:09.259 [2024-09-28 23:37:57.314745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.447 ms 00:17:09.259 [2024-09-28 23:37:57.314752] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.259 [2024-09-28 23:37:57.337084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.259 [2024-09-28 23:37:57.337116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:09.259 [2024-09-28 23:37:57.337126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.300 ms 00:17:09.259 [2024-09-28 23:37:57.337133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.259 [2024-09-28 23:37:57.359094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.259 [2024-09-28 23:37:57.359125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:09.259 [2024-09-28 23:37:57.359135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.903 ms 00:17:09.259 [2024-09-28 23:37:57.359142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.259 [2024-09-28 23:37:57.359174] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:09.259 [2024-09-28 23:37:57.359187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:09.259 [2024-09-28 23:37:57.359197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:09.259 [2024-09-28 23:37:57.359205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:09.259 [2024-09-28 23:37:57.359213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:09.259 [2024-09-28 23:37:57.359220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:09.259 [2024-09-28 23:37:57.359228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:09.259 [2024-09-28 23:37:57.359235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:09.259 [2024-09-28 23:37:57.359243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:09.259 [2024-09-28 23:37:57.359250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:09.259 [2024-09-28 23:37:57.359258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:09.259 [2024-09-28 23:37:57.359265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:09.259 [2024-09-28 23:37:57.359273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:09.259 [2024-09-28 23:37:57.359280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:09.259 [2024-09-28 23:37:57.359287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:09.259 [2024-09-28 23:37:57.359295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:09.259 [2024-09-28 23:37:57.359303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:09.259 [2024-09-28 23:37:57.359311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:09.259 [2024-09-28 23:37:57.359318] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:09.259 [2024-09-28 23:37:57.359325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:09.259 [2024-09-28 23:37:57.359332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:09.259 [2024-09-28 23:37:57.359339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:09.259 [2024-09-28 23:37:57.359347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:09.259 [2024-09-28 23:37:57.359355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:09.259 [2024-09-28 23:37:57.359362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:09.259 [2024-09-28 23:37:57.359369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:09.259 [2024-09-28 23:37:57.359376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:09.259 [2024-09-28 23:37:57.359384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:09.259 [2024-09-28 23:37:57.359391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:09.259 [2024-09-28 23:37:57.359398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:09.259 [2024-09-28 23:37:57.359407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:09.259 [2024-09-28 23:37:57.359415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:09.259 [2024-09-28 23:37:57.359422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:09.259 [2024-09-28 23:37:57.359429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:09.259 [2024-09-28 23:37:57.359437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:09.259 [2024-09-28 23:37:57.359444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:09.259 [2024-09-28 23:37:57.359452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:09.259 [2024-09-28 23:37:57.359459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:09.259 [2024-09-28 23:37:57.359466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:09.259 [2024-09-28 23:37:57.359473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:09.259 [2024-09-28 23:37:57.359480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:09.259 [2024-09-28 23:37:57.359487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:09.259 [2024-09-28 23:37:57.359495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:09.259 
[2024-09-28 23:37:57.359502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:09.259 [2024-09-28 23:37:57.359518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:09.259 [2024-09-28 23:37:57.359525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:09.259 [2024-09-28 23:37:57.359533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:09.259 [2024-09-28 23:37:57.359540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:09.259 [2024-09-28 23:37:57.359547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:09.259 [2024-09-28 23:37:57.359554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:09.259 [2024-09-28 23:37:57.359561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:09.259 [2024-09-28 23:37:57.359568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:09.259 [2024-09-28 23:37:57.359576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:09.259 [2024-09-28 23:37:57.359583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:09.259 [2024-09-28 23:37:57.359590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:09.259 [2024-09-28 23:37:57.359597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:09.259 [2024-09-28 23:37:57.359605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:09.259 [2024-09-28 23:37:57.359612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:09.259 [2024-09-28 23:37:57.359619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:09.259 [2024-09-28 23:37:57.359626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:09.259 [2024-09-28 23:37:57.359633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:09.259 [2024-09-28 23:37:57.359647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:09.259 [2024-09-28 23:37:57.359656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:09.259 [2024-09-28 23:37:57.359663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:09.259 [2024-09-28 23:37:57.359671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:09.259 [2024-09-28 23:37:57.359678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:09.259 [2024-09-28 23:37:57.359685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:09.259 [2024-09-28 23:37:57.359693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 
state: free 00:17:09.259 [2024-09-28 23:37:57.359700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:09.259 [2024-09-28 23:37:57.359708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:09.259 [2024-09-28 23:37:57.359715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:09.259 [2024-09-28 23:37:57.359722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:09.259 [2024-09-28 23:37:57.359729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:09.259 [2024-09-28 23:37:57.359736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:09.259 [2024-09-28 23:37:57.359744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:09.259 [2024-09-28 23:37:57.359752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:09.259 [2024-09-28 23:37:57.359759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:09.260 [2024-09-28 23:37:57.359766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:09.260 [2024-09-28 23:37:57.359774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:09.260 [2024-09-28 23:37:57.359781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:09.260 [2024-09-28 23:37:57.359788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:09.260 [2024-09-28 23:37:57.359795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:09.260 [2024-09-28 23:37:57.359803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:09.260 [2024-09-28 23:37:57.359810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:09.260 [2024-09-28 23:37:57.359817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:09.260 [2024-09-28 23:37:57.359824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:09.260 [2024-09-28 23:37:57.359832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:09.260 [2024-09-28 23:37:57.359839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:09.260 [2024-09-28 23:37:57.359846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:09.260 [2024-09-28 23:37:57.359853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:09.260 [2024-09-28 23:37:57.359860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:09.260 [2024-09-28 23:37:57.359867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:09.260 [2024-09-28 23:37:57.359874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 
0 / 261120 wr_cnt: 0 state: free 00:17:09.260 [2024-09-28 23:37:57.359882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:09.260 [2024-09-28 23:37:57.359890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:09.260 [2024-09-28 23:37:57.359897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:09.260 [2024-09-28 23:37:57.359905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:09.260 [2024-09-28 23:37:57.359912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:09.260 [2024-09-28 23:37:57.359919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:09.260 [2024-09-28 23:37:57.359927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:09.260 [2024-09-28 23:37:57.359940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:09.260 [2024-09-28 23:37:57.359956] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:09.260 [2024-09-28 23:37:57.359964] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 654be1da-fb74-4145-8224-07c374972a2a 00:17:09.260 [2024-09-28 23:37:57.359971] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:09.260 [2024-09-28 23:37:57.359979] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:09.260 [2024-09-28 23:37:57.359989] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:09.260 [2024-09-28 23:37:57.359996] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:09.260 [2024-09-28 23:37:57.360002] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:09.260 [2024-09-28 23:37:57.360010] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:09.260 [2024-09-28 23:37:57.360016] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:09.260 [2024-09-28 23:37:57.360023] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:09.260 [2024-09-28 23:37:57.360029] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:09.260 [2024-09-28 23:37:57.360036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.260 [2024-09-28 23:37:57.360043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:09.260 [2024-09-28 23:37:57.360051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.863 ms 00:17:09.260 [2024-09-28 23:37:57.360058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.260 [2024-09-28 23:37:57.372227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.260 [2024-09-28 23:37:57.372263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:09.260 [2024-09-28 23:37:57.372273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.153 ms 00:17:09.260 [2024-09-28 23:37:57.372280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.260 [2024-09-28 23:37:57.372636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.260 [2024-09-28 23:37:57.372656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize 
P2L checkpointing 00:17:09.260 [2024-09-28 23:37:57.372665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.326 ms 00:17:09.260 [2024-09-28 23:37:57.372672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.260 [2024-09-28 23:37:57.402477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:09.260 [2024-09-28 23:37:57.402531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:09.260 [2024-09-28 23:37:57.402541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:09.260 [2024-09-28 23:37:57.402549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.260 [2024-09-28 23:37:57.402618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:09.260 [2024-09-28 23:37:57.402626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:09.260 [2024-09-28 23:37:57.402633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:09.260 [2024-09-28 23:37:57.402641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.260 [2024-09-28 23:37:57.402679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:09.260 [2024-09-28 23:37:57.402691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:09.260 [2024-09-28 23:37:57.402699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:09.260 [2024-09-28 23:37:57.402706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.260 [2024-09-28 23:37:57.402722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:09.260 [2024-09-28 23:37:57.402730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:09.260 [2024-09-28 23:37:57.402738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:09.260 [2024-09-28 23:37:57.402744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.519 [2024-09-28 23:37:57.479076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:09.519 [2024-09-28 23:37:57.479122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:09.519 [2024-09-28 23:37:57.479134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:09.519 [2024-09-28 23:37:57.479141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.519 [2024-09-28 23:37:57.542218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:09.519 [2024-09-28 23:37:57.542265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:09.519 [2024-09-28 23:37:57.542275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:09.519 [2024-09-28 23:37:57.542283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.519 [2024-09-28 23:37:57.542345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:09.519 [2024-09-28 23:37:57.542354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:09.519 [2024-09-28 23:37:57.542366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:09.519 [2024-09-28 23:37:57.542373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.519 [2024-09-28 23:37:57.542400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:09.519 [2024-09-28 
23:37:57.542408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:09.519 [2024-09-28 23:37:57.542416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:09.519 [2024-09-28 23:37:57.542423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.519 [2024-09-28 23:37:57.542530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:09.519 [2024-09-28 23:37:57.542541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:09.519 [2024-09-28 23:37:57.542549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:09.519 [2024-09-28 23:37:57.542559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.519 [2024-09-28 23:37:57.542593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:09.519 [2024-09-28 23:37:57.542602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:09.519 [2024-09-28 23:37:57.542609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:09.519 [2024-09-28 23:37:57.542616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.519 [2024-09-28 23:37:57.542652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:09.519 [2024-09-28 23:37:57.542660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:09.519 [2024-09-28 23:37:57.542667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:09.519 [2024-09-28 23:37:57.542676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.519 [2024-09-28 23:37:57.542715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:09.519 [2024-09-28 23:37:57.542724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:09.519 [2024-09-28 23:37:57.542731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:09.520 [2024-09-28 23:37:57.542739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.520 [2024-09-28 23:37:57.542868] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 326.641 ms, result 0 00:17:10.455 00:17:10.455 00:17:10.455 23:37:58 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=74453 00:17:10.455 23:37:58 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 74453 00:17:10.455 23:37:58 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:17:10.455 23:37:58 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 74453 ']' 00:17:10.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:10.455 23:37:58 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:10.455 23:37:58 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:10.455 23:37:58 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:10.455 23:37:58 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:10.455 23:37:58 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:17:10.455 [2024-09-28 23:37:58.407835] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
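At this point trim.sh shuts down ftl0 cleanly, relaunches the target with FTL init logging, and restores the device from the JSON config written earlier in the test. A minimal sketch of that sequence, using the paths printed in this log; feeding ftl.json to load_config on stdin is an assumption, since xtrace does not show redirections:

# Relaunch the target, wait for /var/tmp/spdk.sock, then replay the config.
# waitforlisten is the autotest_common.sh helper invoked above.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init &
svcpid=$!
waitforlisten $svcpid
/home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config \
  < /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json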
00:17:10.455 [2024-09-28 23:37:58.407958] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74453 ] 00:17:10.455 [2024-09-28 23:37:58.558436] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:10.713 [2024-09-28 23:37:58.733398] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:11.296 23:37:59 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:11.296 23:37:59 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:17:11.296 23:37:59 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:17:11.555 [2024-09-28 23:37:59.515973] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:11.555 [2024-09-28 23:37:59.516035] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:11.555 [2024-09-28 23:37:59.681365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.555 [2024-09-28 23:37:59.681414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:11.555 [2024-09-28 23:37:59.681429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:11.555 [2024-09-28 23:37:59.681441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.555 [2024-09-28 23:37:59.684059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.555 [2024-09-28 23:37:59.684094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:11.555 [2024-09-28 23:37:59.684107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.599 ms 00:17:11.555 [2024-09-28 23:37:59.684115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.555 [2024-09-28 23:37:59.684194] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:11.555 [2024-09-28 23:37:59.684871] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:11.555 [2024-09-28 23:37:59.684909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.555 [2024-09-28 23:37:59.684916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:11.555 [2024-09-28 23:37:59.684926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.723 ms 00:17:11.555 [2024-09-28 23:37:59.684934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.555 [2024-09-28 23:37:59.686304] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:11.555 [2024-09-28 23:37:59.698625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.555 [2024-09-28 23:37:59.698667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:11.555 [2024-09-28 23:37:59.698679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.325 ms 00:17:11.555 [2024-09-28 23:37:59.698689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.555 [2024-09-28 23:37:59.698778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.555 [2024-09-28 23:37:59.698792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:11.555 [2024-09-28 23:37:59.698801] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:17:11.555 [2024-09-28 23:37:59.698810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.555 [2024-09-28 23:37:59.703398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.555 [2024-09-28 23:37:59.703436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:11.555 [2024-09-28 23:37:59.703446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.542 ms 00:17:11.555 [2024-09-28 23:37:59.703454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.555 [2024-09-28 23:37:59.703574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.555 [2024-09-28 23:37:59.703588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:11.555 [2024-09-28 23:37:59.703597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:17:11.555 [2024-09-28 23:37:59.703605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.555 [2024-09-28 23:37:59.703630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.555 [2024-09-28 23:37:59.703641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:11.555 [2024-09-28 23:37:59.703649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:11.555 [2024-09-28 23:37:59.703658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.555 [2024-09-28 23:37:59.703681] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:11.555 [2024-09-28 23:37:59.706985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.555 [2024-09-28 23:37:59.707015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:11.555 [2024-09-28 23:37:59.707026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.308 ms 00:17:11.555 [2024-09-28 23:37:59.707035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.555 [2024-09-28 23:37:59.707071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.555 [2024-09-28 23:37:59.707079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:11.555 [2024-09-28 23:37:59.707088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:11.555 [2024-09-28 23:37:59.707095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.555 [2024-09-28 23:37:59.707116] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:11.556 [2024-09-28 23:37:59.707132] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:11.556 [2024-09-28 23:37:59.707171] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:11.556 [2024-09-28 23:37:59.707188] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:11.556 [2024-09-28 23:37:59.707293] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:11.556 [2024-09-28 23:37:59.707310] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:11.556 [2024-09-28 23:37:59.707323] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:11.556 [2024-09-28 23:37:59.707333] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:11.556 [2024-09-28 23:37:59.707343] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:11.556 [2024-09-28 23:37:59.707350] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:11.556 [2024-09-28 23:37:59.707360] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:11.556 [2024-09-28 23:37:59.707367] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:11.556 [2024-09-28 23:37:59.707377] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:11.556 [2024-09-28 23:37:59.707386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.556 [2024-09-28 23:37:59.707395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:11.556 [2024-09-28 23:37:59.707403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.274 ms 00:17:11.556 [2024-09-28 23:37:59.707412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.556 [2024-09-28 23:37:59.707498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.556 [2024-09-28 23:37:59.707528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:11.556 [2024-09-28 23:37:59.707540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:17:11.556 [2024-09-28 23:37:59.707553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.556 [2024-09-28 23:37:59.707668] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:11.556 [2024-09-28 23:37:59.707682] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:11.556 [2024-09-28 23:37:59.707691] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:11.556 [2024-09-28 23:37:59.707700] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:11.556 [2024-09-28 23:37:59.707707] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:11.556 [2024-09-28 23:37:59.707716] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:11.556 [2024-09-28 23:37:59.707723] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:11.556 [2024-09-28 23:37:59.707735] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:11.556 [2024-09-28 23:37:59.707742] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:11.556 [2024-09-28 23:37:59.707751] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:11.556 [2024-09-28 23:37:59.707757] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:11.556 [2024-09-28 23:37:59.707766] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:11.556 [2024-09-28 23:37:59.707772] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:11.556 [2024-09-28 23:37:59.707780] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:11.556 [2024-09-28 23:37:59.707786] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:11.556 [2024-09-28 23:37:59.707794] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:11.556 
[2024-09-28 23:37:59.707801] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:11.556 [2024-09-28 23:37:59.707809] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:11.556 [2024-09-28 23:37:59.707820] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:11.556 [2024-09-28 23:37:59.707828] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:11.556 [2024-09-28 23:37:59.707835] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:11.556 [2024-09-28 23:37:59.707843] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:11.556 [2024-09-28 23:37:59.707850] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:11.556 [2024-09-28 23:37:59.707861] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:11.556 [2024-09-28 23:37:59.707867] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:11.556 [2024-09-28 23:37:59.707876] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:11.556 [2024-09-28 23:37:59.707883] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:11.556 [2024-09-28 23:37:59.707890] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:11.556 [2024-09-28 23:37:59.707896] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:11.556 [2024-09-28 23:37:59.707904] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:11.556 [2024-09-28 23:37:59.707910] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:11.556 [2024-09-28 23:37:59.707919] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:11.556 [2024-09-28 23:37:59.707926] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:11.556 [2024-09-28 23:37:59.707934] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:11.556 [2024-09-28 23:37:59.707940] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:11.556 [2024-09-28 23:37:59.707948] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:11.556 [2024-09-28 23:37:59.707955] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:11.556 [2024-09-28 23:37:59.707963] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:11.556 [2024-09-28 23:37:59.707969] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:11.556 [2024-09-28 23:37:59.707978] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:11.556 [2024-09-28 23:37:59.707985] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:11.556 [2024-09-28 23:37:59.707993] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:11.556 [2024-09-28 23:37:59.708000] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:11.556 [2024-09-28 23:37:59.708007] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:11.556 [2024-09-28 23:37:59.708015] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:11.556 [2024-09-28 23:37:59.708023] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:11.556 [2024-09-28 23:37:59.708030] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:11.556 [2024-09-28 23:37:59.708039] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:17:11.556 [2024-09-28 23:37:59.708045] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:11.556 [2024-09-28 23:37:59.708053] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:11.556 [2024-09-28 23:37:59.708060] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:11.556 [2024-09-28 23:37:59.708067] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:11.556 [2024-09-28 23:37:59.708074] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:11.556 [2024-09-28 23:37:59.708084] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:11.556 [2024-09-28 23:37:59.708092] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:11.556 [2024-09-28 23:37:59.708109] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:11.556 [2024-09-28 23:37:59.708117] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:11.556 [2024-09-28 23:37:59.708127] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:11.556 [2024-09-28 23:37:59.708135] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:11.556 [2024-09-28 23:37:59.708143] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:11.556 [2024-09-28 23:37:59.708150] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:11.556 [2024-09-28 23:37:59.708158] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:11.556 [2024-09-28 23:37:59.708165] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:11.556 [2024-09-28 23:37:59.708174] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:11.556 [2024-09-28 23:37:59.708181] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:11.556 [2024-09-28 23:37:59.708190] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:11.556 [2024-09-28 23:37:59.708196] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:11.556 [2024-09-28 23:37:59.708205] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:11.556 [2024-09-28 23:37:59.708212] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:11.556 [2024-09-28 23:37:59.708221] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:11.556 [2024-09-28 
23:37:59.708228] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:11.556 [2024-09-28 23:37:59.708240] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:11.556 [2024-09-28 23:37:59.708247] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:11.556 [2024-09-28 23:37:59.708256] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:11.556 [2024-09-28 23:37:59.708263] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:11.556 [2024-09-28 23:37:59.708272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.556 [2024-09-28 23:37:59.708279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:11.556 [2024-09-28 23:37:59.708287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.670 ms 00:17:11.557 [2024-09-28 23:37:59.708294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.816 [2024-09-28 23:37:59.733731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.816 [2024-09-28 23:37:59.733768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:11.816 [2024-09-28 23:37:59.733779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.378 ms 00:17:11.816 [2024-09-28 23:37:59.733787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.816 [2024-09-28 23:37:59.733903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.816 [2024-09-28 23:37:59.733912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:11.816 [2024-09-28 23:37:59.733921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:17:11.816 [2024-09-28 23:37:59.733929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.816 [2024-09-28 23:37:59.772112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.816 [2024-09-28 23:37:59.772151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:11.816 [2024-09-28 23:37:59.772166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.158 ms 00:17:11.816 [2024-09-28 23:37:59.772174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.816 [2024-09-28 23:37:59.772245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.816 [2024-09-28 23:37:59.772256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:11.816 [2024-09-28 23:37:59.772266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:11.816 [2024-09-28 23:37:59.772275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.816 [2024-09-28 23:37:59.772616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.816 [2024-09-28 23:37:59.772638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:11.816 [2024-09-28 23:37:59.772649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.317 ms 00:17:11.816 [2024-09-28 23:37:59.772656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:17:11.816 [2024-09-28 23:37:59.772780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.816 [2024-09-28 23:37:59.772795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:11.816 [2024-09-28 23:37:59.772805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:17:11.816 [2024-09-28 23:37:59.772812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.816 [2024-09-28 23:37:59.789277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.816 [2024-09-28 23:37:59.789324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:11.816 [2024-09-28 23:37:59.789335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.424 ms 00:17:11.816 [2024-09-28 23:37:59.789345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.816 [2024-09-28 23:37:59.801806] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:17:11.816 [2024-09-28 23:37:59.801842] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:11.816 [2024-09-28 23:37:59.801856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.816 [2024-09-28 23:37:59.801865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:11.816 [2024-09-28 23:37:59.801875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.411 ms 00:17:11.816 [2024-09-28 23:37:59.801883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.816 [2024-09-28 23:37:59.826147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.816 [2024-09-28 23:37:59.826184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:11.816 [2024-09-28 23:37:59.826196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.191 ms 00:17:11.816 [2024-09-28 23:37:59.826209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.816 [2024-09-28 23:37:59.837513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.816 [2024-09-28 23:37:59.837547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:11.816 [2024-09-28 23:37:59.837559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.228 ms 00:17:11.816 [2024-09-28 23:37:59.837566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.816 [2024-09-28 23:37:59.849089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.816 [2024-09-28 23:37:59.849121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:11.816 [2024-09-28 23:37:59.849132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.458 ms 00:17:11.816 [2024-09-28 23:37:59.849139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.816 [2024-09-28 23:37:59.849784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.816 [2024-09-28 23:37:59.849810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:11.816 [2024-09-28 23:37:59.849820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.554 ms 00:17:11.817 [2024-09-28 23:37:59.849829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.817 [2024-09-28 
23:37:59.903737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.817 [2024-09-28 23:37:59.903790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:11.817 [2024-09-28 23:37:59.903805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.882 ms 00:17:11.817 [2024-09-28 23:37:59.903815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.817 [2024-09-28 23:37:59.914246] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:11.817 [2024-09-28 23:37:59.928023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.817 [2024-09-28 23:37:59.928066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:11.817 [2024-09-28 23:37:59.928077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.115 ms 00:17:11.817 [2024-09-28 23:37:59.928087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.817 [2024-09-28 23:37:59.928163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.817 [2024-09-28 23:37:59.928175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:11.817 [2024-09-28 23:37:59.928183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:11.817 [2024-09-28 23:37:59.928192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.817 [2024-09-28 23:37:59.928239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.817 [2024-09-28 23:37:59.928249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:11.817 [2024-09-28 23:37:59.928257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:17:11.817 [2024-09-28 23:37:59.928266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.817 [2024-09-28 23:37:59.928288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.817 [2024-09-28 23:37:59.928297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:11.817 [2024-09-28 23:37:59.928306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:11.817 [2024-09-28 23:37:59.928317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.817 [2024-09-28 23:37:59.928346] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:11.817 [2024-09-28 23:37:59.928360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.817 [2024-09-28 23:37:59.928367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:11.817 [2024-09-28 23:37:59.928376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:17:11.817 [2024-09-28 23:37:59.928383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.817 [2024-09-28 23:37:59.951684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.817 [2024-09-28 23:37:59.951720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:11.817 [2024-09-28 23:37:59.951732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.276 ms 00:17:11.817 [2024-09-28 23:37:59.951743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.817 [2024-09-28 23:37:59.951841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.817 [2024-09-28 23:37:59.951857] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:11.817 [2024-09-28 23:37:59.951868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:17:11.817 [2024-09-28 23:37:59.951875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.817 [2024-09-28 23:37:59.952675] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:11.817 [2024-09-28 23:37:59.955708] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 271.015 ms, result 0 00:17:11.817 [2024-09-28 23:37:59.956730] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:11.817 Some configs were skipped because the RPC state that can call them passed over. 00:17:12.076 23:37:59 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:17:12.076 [2024-09-28 23:38:00.187210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.076 [2024-09-28 23:38:00.187267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:12.076 [2024-09-28 23:38:00.187280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.731 ms 00:17:12.076 [2024-09-28 23:38:00.187290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.076 [2024-09-28 23:38:00.187323] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.849 ms, result 0 00:17:12.076 true 00:17:12.076 23:38:00 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:17:12.334 [2024-09-28 23:38:00.387157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.334 [2024-09-28 23:38:00.387204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:12.334 [2024-09-28 23:38:00.387217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.460 ms 00:17:12.334 [2024-09-28 23:38:00.387224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.334 [2024-09-28 23:38:00.387260] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.569 ms, result 0 00:17:12.334 true 00:17:12.334 23:38:00 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 74453 00:17:12.334 23:38:00 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 74453 ']' 00:17:12.334 23:38:00 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 74453 00:17:12.334 23:38:00 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname 00:17:12.334 23:38:00 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:12.334 23:38:00 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74453 00:17:12.334 killing process with pid 74453 00:17:12.335 23:38:00 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:12.335 23:38:00 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:12.335 23:38:00 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74453' 00:17:12.335 23:38:00 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 74453 00:17:12.335 23:38:00 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 74453 00:17:13.271 [2024-09-28 23:38:01.088801] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.271 [2024-09-28 23:38:01.088853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:13.271 [2024-09-28 23:38:01.088863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:13.271 [2024-09-28 23:38:01.088870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.271 [2024-09-28 23:38:01.088888] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:13.271 [2024-09-28 23:38:01.091018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.271 [2024-09-28 23:38:01.091046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:13.271 [2024-09-28 23:38:01.091058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.117 ms 00:17:13.271 [2024-09-28 23:38:01.091064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.271 [2024-09-28 23:38:01.091282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.271 [2024-09-28 23:38:01.091299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:13.271 [2024-09-28 23:38:01.091308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.199 ms 00:17:13.271 [2024-09-28 23:38:01.091314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.271 [2024-09-28 23:38:01.094581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.271 [2024-09-28 23:38:01.094608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:13.271 [2024-09-28 23:38:01.094617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.250 ms 00:17:13.271 [2024-09-28 23:38:01.094623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.272 [2024-09-28 23:38:01.099823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.272 [2024-09-28 23:38:01.099850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:13.272 [2024-09-28 23:38:01.099862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.170 ms 00:17:13.272 [2024-09-28 23:38:01.099870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.272 [2024-09-28 23:38:01.107145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.272 [2024-09-28 23:38:01.107174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:13.272 [2024-09-28 23:38:01.107184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.232 ms 00:17:13.272 [2024-09-28 23:38:01.107190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.272 [2024-09-28 23:38:01.113580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.272 [2024-09-28 23:38:01.113609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:13.272 [2024-09-28 23:38:01.113619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.356 ms 00:17:13.272 [2024-09-28 23:38:01.113633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.272 [2024-09-28 23:38:01.113744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.272 [2024-09-28 23:38:01.113756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:13.272 [2024-09-28 23:38:01.113764] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:17:13.272 [2024-09-28 23:38:01.113772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.272 [2024-09-28 23:38:01.121662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.272 [2024-09-28 23:38:01.121688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:13.272 [2024-09-28 23:38:01.121696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.871 ms 00:17:13.272 [2024-09-28 23:38:01.121702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.272 [2024-09-28 23:38:01.129259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.272 [2024-09-28 23:38:01.129288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:13.272 [2024-09-28 23:38:01.129299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.526 ms 00:17:13.272 [2024-09-28 23:38:01.129305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.272 [2024-09-28 23:38:01.136484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.272 [2024-09-28 23:38:01.136521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:13.272 [2024-09-28 23:38:01.136529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.148 ms 00:17:13.272 [2024-09-28 23:38:01.136535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.272 [2024-09-28 23:38:01.143615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.272 [2024-09-28 23:38:01.143643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:13.272 [2024-09-28 23:38:01.143652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.026 ms 00:17:13.272 [2024-09-28 23:38:01.143657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.272 [2024-09-28 23:38:01.143686] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:13.272 [2024-09-28 23:38:01.143698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:13.272 [2024-09-28 23:38:01.143707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:13.272 [2024-09-28 23:38:01.143713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:13.272 [2024-09-28 23:38:01.143720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:13.272 [2024-09-28 23:38:01.143726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:13.272 [2024-09-28 23:38:01.143735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:13.272 [2024-09-28 23:38:01.143740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:13.272 [2024-09-28 23:38:01.143748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:13.272 [2024-09-28 23:38:01.143753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:13.272 [2024-09-28 23:38:01.143760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:13.272 [2024-09-28 23:38:01.143766] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:13.272 [2024-09-28 23:38:01.143773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:13.272 [2024-09-28 23:38:01.143779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:13.272 [2024-09-28 23:38:01.143787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:13.272 [2024-09-28 23:38:01.143793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:13.272 [2024-09-28 23:38:01.143800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:13.272 [2024-09-28 23:38:01.143805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:13.272 [2024-09-28 23:38:01.143812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:13.272 [2024-09-28 23:38:01.143818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:13.272 [2024-09-28 23:38:01.143825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:13.272 [2024-09-28 23:38:01.143830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:13.272 [2024-09-28 23:38:01.143839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:13.272 [2024-09-28 23:38:01.143844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:13.272 [2024-09-28 23:38:01.143851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:13.272 [2024-09-28 23:38:01.143856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:13.272 [2024-09-28 23:38:01.143863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:13.272 [2024-09-28 23:38:01.143869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:13.272 [2024-09-28 23:38:01.143876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:13.272 [2024-09-28 23:38:01.143882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:13.272 [2024-09-28 23:38:01.143888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:13.272 [2024-09-28 23:38:01.143894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:13.272 [2024-09-28 23:38:01.143901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:13.272 [2024-09-28 23:38:01.143907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:13.272 [2024-09-28 23:38:01.143915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:13.272 [2024-09-28 23:38:01.143921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:13.272 
[2024-09-28 23:38:01.143928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:13.272 [2024-09-28 23:38:01.143933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:13.272 [2024-09-28 23:38:01.143942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:13.272 [2024-09-28 23:38:01.143947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:13.272 [2024-09-28 23:38:01.143955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:13.272 [2024-09-28 23:38:01.143961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:13.272 [2024-09-28 23:38:01.143968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:13.272 [2024-09-28 23:38:01.143973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:13.272 [2024-09-28 23:38:01.143980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:13.272 [2024-09-28 23:38:01.143986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:13.272 [2024-09-28 23:38:01.143993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:13.272 [2024-09-28 23:38:01.143998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:13.272 [2024-09-28 23:38:01.144005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:13.272 [2024-09-28 23:38:01.144011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:13.272 [2024-09-28 23:38:01.144017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:13.272 [2024-09-28 23:38:01.144023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:13.272 [2024-09-28 23:38:01.144030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:13.272 [2024-09-28 23:38:01.144036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:13.272 [2024-09-28 23:38:01.144044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:13.272 [2024-09-28 23:38:01.144049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:13.272 [2024-09-28 23:38:01.144056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:13.273 [2024-09-28 23:38:01.144063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:13.273 [2024-09-28 23:38:01.144069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:13.273 [2024-09-28 23:38:01.144075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:13.273 [2024-09-28 23:38:01.144082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 
state: free 00:17:13.273 [2024-09-28 23:38:01.144087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:13.273 [2024-09-28 23:38:01.144094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:13.273 [2024-09-28 23:38:01.144099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:13.273 [2024-09-28 23:38:01.144106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:13.273 [2024-09-28 23:38:01.144112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:13.273 [2024-09-28 23:38:01.144119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:13.273 [2024-09-28 23:38:01.144125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:13.273 [2024-09-28 23:38:01.144133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:13.273 [2024-09-28 23:38:01.144139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:13.273 [2024-09-28 23:38:01.144147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:13.273 [2024-09-28 23:38:01.144152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:13.273 [2024-09-28 23:38:01.144159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:13.273 [2024-09-28 23:38:01.144165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:13.273 [2024-09-28 23:38:01.144172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:13.273 [2024-09-28 23:38:01.144178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:13.273 [2024-09-28 23:38:01.144184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:13.273 [2024-09-28 23:38:01.144190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:13.273 [2024-09-28 23:38:01.144197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:13.273 [2024-09-28 23:38:01.144202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:13.273 [2024-09-28 23:38:01.144209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:13.273 [2024-09-28 23:38:01.144215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:13.273 [2024-09-28 23:38:01.144222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:13.273 [2024-09-28 23:38:01.144227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:13.273 [2024-09-28 23:38:01.144234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:13.273 [2024-09-28 23:38:01.144239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 
0 / 261120 wr_cnt: 0 state: free 00:17:13.273 [2024-09-28 23:38:01.144247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:13.273 [2024-09-28 23:38:01.144252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:13.273 [2024-09-28 23:38:01.144259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:13.273 [2024-09-28 23:38:01.144265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:13.273 [2024-09-28 23:38:01.144271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:13.273 [2024-09-28 23:38:01.144277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:13.273 [2024-09-28 23:38:01.144284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:13.273 [2024-09-28 23:38:01.144289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:13.273 [2024-09-28 23:38:01.144297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:13.273 [2024-09-28 23:38:01.144303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:13.273 [2024-09-28 23:38:01.144309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:13.273 [2024-09-28 23:38:01.144316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:13.273 [2024-09-28 23:38:01.144323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:13.273 [2024-09-28 23:38:01.144329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:13.273 [2024-09-28 23:38:01.144336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:13.273 [2024-09-28 23:38:01.144348] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:13.273 [2024-09-28 23:38:01.144357] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 654be1da-fb74-4145-8224-07c374972a2a 00:17:13.273 [2024-09-28 23:38:01.144362] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:13.273 [2024-09-28 23:38:01.144369] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:13.273 [2024-09-28 23:38:01.144374] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:13.273 [2024-09-28 23:38:01.144381] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:13.273 [2024-09-28 23:38:01.144391] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:13.273 [2024-09-28 23:38:01.144398] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:13.273 [2024-09-28 23:38:01.144405] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:13.273 [2024-09-28 23:38:01.144411] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:13.273 [2024-09-28 23:38:01.144416] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:13.273 [2024-09-28 23:38:01.144422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
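
(Aside: the statistics dumped just above explain the "WAF: inf" value. Write amplification factor is media writes divided by user writes, here 960 / 0; this instance served no user I/O before shutdown, so the ratio is undefined and printed as inf. Likewise every band reporting 0 / 261120 valid blocks in state free is consistent with no user data having been written to the device.)
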
00:17:13.273 [2024-09-28 23:38:01.144428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:13.273 [2024-09-28 23:38:01.144436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.737 ms 00:17:13.273 [2024-09-28 23:38:01.144441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.273 [2024-09-28 23:38:01.154159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.273 [2024-09-28 23:38:01.154186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:13.273 [2024-09-28 23:38:01.154196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.700 ms 00:17:13.273 [2024-09-28 23:38:01.154202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.273 [2024-09-28 23:38:01.154527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.273 [2024-09-28 23:38:01.154552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:13.273 [2024-09-28 23:38:01.154561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.286 ms 00:17:13.273 [2024-09-28 23:38:01.154566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.273 [2024-09-28 23:38:01.185579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:13.273 [2024-09-28 23:38:01.185613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:13.273 [2024-09-28 23:38:01.185622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:13.273 [2024-09-28 23:38:01.185630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.273 [2024-09-28 23:38:01.185713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:13.273 [2024-09-28 23:38:01.185721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:13.273 [2024-09-28 23:38:01.185728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:13.273 [2024-09-28 23:38:01.185734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.273 [2024-09-28 23:38:01.185770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:13.273 [2024-09-28 23:38:01.185777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:13.273 [2024-09-28 23:38:01.185786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:13.273 [2024-09-28 23:38:01.185792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.273 [2024-09-28 23:38:01.185808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:13.273 [2024-09-28 23:38:01.185814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:13.273 [2024-09-28 23:38:01.185820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:13.273 [2024-09-28 23:38:01.185826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.273 [2024-09-28 23:38:01.246240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:13.273 [2024-09-28 23:38:01.246279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:13.273 [2024-09-28 23:38:01.246289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:13.273 [2024-09-28 23:38:01.246297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.273 [2024-09-28 
23:38:01.295303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:13.273 [2024-09-28 23:38:01.295345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:13.273 [2024-09-28 23:38:01.295356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:13.273 [2024-09-28 23:38:01.295363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.273 [2024-09-28 23:38:01.296338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:13.273 [2024-09-28 23:38:01.296366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:13.273 [2024-09-28 23:38:01.296377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:13.273 [2024-09-28 23:38:01.296383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.273 [2024-09-28 23:38:01.296409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:13.273 [2024-09-28 23:38:01.296417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:13.274 [2024-09-28 23:38:01.296425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:13.274 [2024-09-28 23:38:01.296430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.274 [2024-09-28 23:38:01.296505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:13.274 [2024-09-28 23:38:01.296529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:13.274 [2024-09-28 23:38:01.296537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:13.274 [2024-09-28 23:38:01.296543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.274 [2024-09-28 23:38:01.296573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:13.274 [2024-09-28 23:38:01.296580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:13.274 [2024-09-28 23:38:01.296589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:13.274 [2024-09-28 23:38:01.296594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.274 [2024-09-28 23:38:01.296625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:13.274 [2024-09-28 23:38:01.296631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:13.274 [2024-09-28 23:38:01.296640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:13.274 [2024-09-28 23:38:01.296645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.274 [2024-09-28 23:38:01.296679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:13.274 [2024-09-28 23:38:01.296688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:13.274 [2024-09-28 23:38:01.296695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:13.274 [2024-09-28 23:38:01.296701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.274 [2024-09-28 23:38:01.296803] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 207.987 ms, result 0 00:17:14.209 23:38:02 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:14.209 [2024-09-28 23:38:02.123619] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:17:14.209 [2024-09-28 23:38:02.123910] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74506 ] 00:17:14.209 [2024-09-28 23:38:02.274196] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:14.467 [2024-09-28 23:38:02.450834] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:14.727 [2024-09-28 23:38:02.699071] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:14.727 [2024-09-28 23:38:02.699133] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:14.727 [2024-09-28 23:38:02.853390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.727 [2024-09-28 23:38:02.853438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:14.727 [2024-09-28 23:38:02.853453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:14.727 [2024-09-28 23:38:02.853461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.727 [2024-09-28 23:38:02.856040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.727 [2024-09-28 23:38:02.856077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:14.727 [2024-09-28 23:38:02.856086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.562 ms 00:17:14.727 [2024-09-28 23:38:02.856096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.727 [2024-09-28 23:38:02.856162] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:14.727 [2024-09-28 23:38:02.856815] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:14.727 [2024-09-28 23:38:02.856841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.727 [2024-09-28 23:38:02.856851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:14.727 [2024-09-28 23:38:02.856859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.685 ms 00:17:14.727 [2024-09-28 23:38:02.856866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.727 [2024-09-28 23:38:02.858195] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:14.727 [2024-09-28 23:38:02.870282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.727 [2024-09-28 23:38:02.870322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:14.727 [2024-09-28 23:38:02.870334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.088 ms 00:17:14.727 [2024-09-28 23:38:02.870343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.727 [2024-09-28 23:38:02.870428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.727 [2024-09-28 23:38:02.870439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:14.727 [2024-09-28 23:38:02.870450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:17:14.727 [2024-09-28 
23:38:02.870457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.727 [2024-09-28 23:38:02.875052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.727 [2024-09-28 23:38:02.875082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:14.727 [2024-09-28 23:38:02.875092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.555 ms 00:17:14.727 [2024-09-28 23:38:02.875099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.727 [2024-09-28 23:38:02.875181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.727 [2024-09-28 23:38:02.875192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:14.727 [2024-09-28 23:38:02.875200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:17:14.727 [2024-09-28 23:38:02.875207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.727 [2024-09-28 23:38:02.875231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.727 [2024-09-28 23:38:02.875239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:14.727 [2024-09-28 23:38:02.875247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:14.727 [2024-09-28 23:38:02.875254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.727 [2024-09-28 23:38:02.875275] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:14.727 [2024-09-28 23:38:02.878533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.727 [2024-09-28 23:38:02.878561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:14.727 [2024-09-28 23:38:02.878569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.265 ms 00:17:14.727 [2024-09-28 23:38:02.878577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.727 [2024-09-28 23:38:02.878610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.727 [2024-09-28 23:38:02.878621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:14.727 [2024-09-28 23:38:02.878628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:14.727 [2024-09-28 23:38:02.878636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.727 [2024-09-28 23:38:02.878652] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:14.727 [2024-09-28 23:38:02.878668] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:14.727 [2024-09-28 23:38:02.878701] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:14.727 [2024-09-28 23:38:02.878716] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:14.727 [2024-09-28 23:38:02.878820] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:14.727 [2024-09-28 23:38:02.878837] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:14.727 [2024-09-28 23:38:02.878848] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
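The layout dump that follows reports the geometry this FTL instance came up with: 23592960 L2P entries at an address size of 4 bytes, a 103424.00 MiB base device, and a 90.00 MiB l2p region in the NV cache layout. A quick sanity check ties these figures together (a bash sketch with values copied from this run; the 4 KiB logical block size is an assumption about this setup, not something the log states):

    l2p_entries=23592960   # "L2P entries: 23592960" from the layout dump below
    l2p_addr_size=4        # "L2P address size: 4" (bytes per entry)
    block_kib=4            # assumed logical block size in KiB (not logged)

    # L2P table footprint: 23592960 entries * 4 B = 90 MiB, matching the
    # 90.00 MiB "Region l2p" in the NV cache layout dump.
    echo "l2p region: $(( l2p_entries * l2p_addr_size / 1024 / 1024 )) MiB"

    # Addressable user space: 23592960 blocks * 4 KiB = 92160 MiB (90 GiB),
    # carved out of the 103424.00 MiB base device; the remainder holds
    # metadata regions such as the valid map and band metadata.
    echo "user space: $(( l2p_entries * block_kib / 1024 )) MiB"
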
00:17:14.727 [2024-09-28 23:38:02.878858] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:14.727 [2024-09-28 23:38:02.878867] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:14.727 [2024-09-28 23:38:02.878875] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:14.727 [2024-09-28 23:38:02.878882] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:14.727 [2024-09-28 23:38:02.878889] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:14.727 [2024-09-28 23:38:02.878896] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:14.727 [2024-09-28 23:38:02.878903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.727 [2024-09-28 23:38:02.878912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:14.727 [2024-09-28 23:38:02.878920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.253 ms 00:17:14.727 [2024-09-28 23:38:02.878927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.727 [2024-09-28 23:38:02.879013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.727 [2024-09-28 23:38:02.879030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:14.727 [2024-09-28 23:38:02.879037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:17:14.727 [2024-09-28 23:38:02.879044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.727 [2024-09-28 23:38:02.879142] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:14.727 [2024-09-28 23:38:02.879152] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:14.727 [2024-09-28 23:38:02.879162] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:14.727 [2024-09-28 23:38:02.879169] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:14.727 [2024-09-28 23:38:02.879178] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:14.727 [2024-09-28 23:38:02.879184] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:14.727 [2024-09-28 23:38:02.879190] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:14.727 [2024-09-28 23:38:02.879197] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:14.727 [2024-09-28 23:38:02.879204] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:14.727 [2024-09-28 23:38:02.879210] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:14.727 [2024-09-28 23:38:02.879217] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:14.727 [2024-09-28 23:38:02.879229] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:14.727 [2024-09-28 23:38:02.879235] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:14.727 [2024-09-28 23:38:02.879241] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:14.727 [2024-09-28 23:38:02.879248] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:14.727 [2024-09-28 23:38:02.879254] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:14.727 [2024-09-28 23:38:02.879260] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:17:14.727 [2024-09-28 23:38:02.879267] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:14.727 [2024-09-28 23:38:02.879274] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:14.727 [2024-09-28 23:38:02.879280] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:14.727 [2024-09-28 23:38:02.879287] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:14.727 [2024-09-28 23:38:02.879294] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:14.728 [2024-09-28 23:38:02.879301] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:14.728 [2024-09-28 23:38:02.879308] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:14.728 [2024-09-28 23:38:02.879314] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:14.728 [2024-09-28 23:38:02.879320] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:14.728 [2024-09-28 23:38:02.879326] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:14.728 [2024-09-28 23:38:02.879332] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:14.728 [2024-09-28 23:38:02.879338] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:14.728 [2024-09-28 23:38:02.879345] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:14.728 [2024-09-28 23:38:02.879351] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:14.728 [2024-09-28 23:38:02.879357] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:14.728 [2024-09-28 23:38:02.879363] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:14.728 [2024-09-28 23:38:02.879370] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:14.728 [2024-09-28 23:38:02.879376] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:14.728 [2024-09-28 23:38:02.879382] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:14.728 [2024-09-28 23:38:02.879388] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:14.728 [2024-09-28 23:38:02.879395] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:14.728 [2024-09-28 23:38:02.879402] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:14.728 [2024-09-28 23:38:02.879408] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:14.728 [2024-09-28 23:38:02.879414] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:14.728 [2024-09-28 23:38:02.879421] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:14.728 [2024-09-28 23:38:02.879427] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:14.728 [2024-09-28 23:38:02.879433] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:14.728 [2024-09-28 23:38:02.879440] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:14.728 [2024-09-28 23:38:02.879447] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:14.728 [2024-09-28 23:38:02.879454] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:14.728 [2024-09-28 23:38:02.879462] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:14.728 [2024-09-28 23:38:02.879469] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:14.728 [2024-09-28 23:38:02.879475] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:14.728 [2024-09-28 23:38:02.879481] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:14.728 [2024-09-28 23:38:02.879487] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:14.728 [2024-09-28 23:38:02.879494] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:14.728 [2024-09-28 23:38:02.879503] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:14.728 [2024-09-28 23:38:02.879525] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:14.728 [2024-09-28 23:38:02.879534] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:14.728 [2024-09-28 23:38:02.879541] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:14.728 [2024-09-28 23:38:02.879548] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:14.728 [2024-09-28 23:38:02.879555] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:14.728 [2024-09-28 23:38:02.879562] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:14.728 [2024-09-28 23:38:02.879569] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:14.728 [2024-09-28 23:38:02.879576] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:14.728 [2024-09-28 23:38:02.879582] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:14.728 [2024-09-28 23:38:02.879589] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:14.728 [2024-09-28 23:38:02.879596] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:14.728 [2024-09-28 23:38:02.879604] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:14.728 [2024-09-28 23:38:02.879610] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:14.728 [2024-09-28 23:38:02.879617] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:14.728 [2024-09-28 23:38:02.879625] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:14.728 [2024-09-28 23:38:02.879632] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:14.728 [2024-09-28 23:38:02.879640] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:14.728 [2024-09-28 23:38:02.879647] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:14.728 [2024-09-28 23:38:02.879654] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:14.728 [2024-09-28 23:38:02.879661] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:14.728 [2024-09-28 23:38:02.879668] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:14.728 [2024-09-28 23:38:02.879676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.728 [2024-09-28 23:38:02.879684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:14.728 [2024-09-28 23:38:02.879692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.602 ms 00:17:14.728 [2024-09-28 23:38:02.879698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.988 [2024-09-28 23:38:02.913825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.988 [2024-09-28 23:38:02.913875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:14.988 [2024-09-28 23:38:02.913889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.063 ms 00:17:14.988 [2024-09-28 23:38:02.913899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.988 [2024-09-28 23:38:02.914058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.988 [2024-09-28 23:38:02.914085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:14.988 [2024-09-28 23:38:02.914096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:17:14.988 [2024-09-28 23:38:02.914109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.988 [2024-09-28 23:38:02.943980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.988 [2024-09-28 23:38:02.944013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:14.988 [2024-09-28 23:38:02.944022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.844 ms 00:17:14.988 [2024-09-28 23:38:02.944030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.988 [2024-09-28 23:38:02.944088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.988 [2024-09-28 23:38:02.944097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:14.988 [2024-09-28 23:38:02.944106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:14.988 [2024-09-28 23:38:02.944113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.988 [2024-09-28 23:38:02.944418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.988 [2024-09-28 23:38:02.944439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:14.988 [2024-09-28 23:38:02.944449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.285 ms 00:17:14.988 [2024-09-28 23:38:02.944456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.988 [2024-09-28 23:38:02.944594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:17:14.988 [2024-09-28 23:38:02.944609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:14.988 [2024-09-28 23:38:02.944617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.116 ms 00:17:14.988 [2024-09-28 23:38:02.944625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.988 [2024-09-28 23:38:02.957081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.988 [2024-09-28 23:38:02.957113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:14.988 [2024-09-28 23:38:02.957122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.415 ms 00:17:14.988 [2024-09-28 23:38:02.957129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.988 [2024-09-28 23:38:02.969458] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:17:14.988 [2024-09-28 23:38:02.969493] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:14.988 [2024-09-28 23:38:02.969503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.988 [2024-09-28 23:38:02.969519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:14.988 [2024-09-28 23:38:02.969527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.283 ms 00:17:14.988 [2024-09-28 23:38:02.969533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.988 [2024-09-28 23:38:02.996154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.988 [2024-09-28 23:38:02.996186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:14.988 [2024-09-28 23:38:02.996200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.554 ms 00:17:14.988 [2024-09-28 23:38:02.996209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.988 [2024-09-28 23:38:03.007754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.988 [2024-09-28 23:38:03.007784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:14.988 [2024-09-28 23:38:03.007793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.492 ms 00:17:14.988 [2024-09-28 23:38:03.007800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.988 [2024-09-28 23:38:03.019019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.988 [2024-09-28 23:38:03.019048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:14.988 [2024-09-28 23:38:03.019057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.158 ms 00:17:14.988 [2024-09-28 23:38:03.019063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.988 [2024-09-28 23:38:03.019690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.988 [2024-09-28 23:38:03.019714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:14.988 [2024-09-28 23:38:03.019724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.541 ms 00:17:14.988 [2024-09-28 23:38:03.019731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.988 [2024-09-28 23:38:03.073936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.988 [2024-09-28 
23:38:03.073982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:14.988 [2024-09-28 23:38:03.073995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.182 ms 00:17:14.988 [2024-09-28 23:38:03.074003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.988 [2024-09-28 23:38:03.084290] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:14.988 [2024-09-28 23:38:03.097439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.988 [2024-09-28 23:38:03.097475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:14.988 [2024-09-28 23:38:03.097487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.340 ms 00:17:14.988 [2024-09-28 23:38:03.097494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.988 [2024-09-28 23:38:03.097579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.988 [2024-09-28 23:38:03.097591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:14.988 [2024-09-28 23:38:03.097599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:17:14.988 [2024-09-28 23:38:03.097606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.988 [2024-09-28 23:38:03.097655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.988 [2024-09-28 23:38:03.097666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:14.988 [2024-09-28 23:38:03.097674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:17:14.988 [2024-09-28 23:38:03.097681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.988 [2024-09-28 23:38:03.097701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.988 [2024-09-28 23:38:03.097709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:14.988 [2024-09-28 23:38:03.097717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:14.988 [2024-09-28 23:38:03.097725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.988 [2024-09-28 23:38:03.097756] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:14.988 [2024-09-28 23:38:03.097772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.988 [2024-09-28 23:38:03.097781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:14.988 [2024-09-28 23:38:03.097789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:17:14.988 [2024-09-28 23:38:03.097796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.988 [2024-09-28 23:38:03.120558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.988 [2024-09-28 23:38:03.120603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:14.988 [2024-09-28 23:38:03.120614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.743 ms 00:17:14.988 [2024-09-28 23:38:03.120622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.988 [2024-09-28 23:38:03.120708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.988 [2024-09-28 23:38:03.120719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:14.988 [2024-09-28 
23:38:03.120727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:17:14.988 [2024-09-28 23:38:03.120734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.988 [2024-09-28 23:38:03.121534] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:14.988 [2024-09-28 23:38:03.124439] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 267.857 ms, result 0 00:17:14.988 [2024-09-28 23:38:03.125093] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:14.988 [2024-09-28 23:38:03.137904] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:27.003  Copying: 43/256 [MB] (43 MBps) Copying: 67/256 [MB] (23 MBps) Copying: 79/256 [MB] (12 MBps) Copying: 103/256 [MB] (23 MBps) Copying: 122/256 [MB] (19 MBps) Copying: 141/256 [MB] (18 MBps) Copying: 164/256 [MB] (23 MBps) Copying: 188/256 [MB] (24 MBps) Copying: 212/256 [MB] (24 MBps) Copying: 231/256 [MB] (18 MBps) Copying: 246/256 [MB] (15 MBps) Copying: 256/256 [MB] (average 22 MBps)[2024-09-28 23:38:15.124932] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:27.003 [2024-09-28 23:38:15.136218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.003 [2024-09-28 23:38:15.136258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:27.003 [2024-09-28 23:38:15.136271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:27.003 [2024-09-28 23:38:15.136278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.003 [2024-09-28 23:38:15.136300] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:27.003 [2024-09-28 23:38:15.139440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.003 [2024-09-28 23:38:15.139471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:27.003 [2024-09-28 23:38:15.139481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.126 ms 00:17:27.003 [2024-09-28 23:38:15.139489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.003 [2024-09-28 23:38:15.139762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.003 [2024-09-28 23:38:15.139775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:27.003 [2024-09-28 23:38:15.139782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.243 ms 00:17:27.003 [2024-09-28 23:38:15.139790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.003 [2024-09-28 23:38:15.143462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.003 [2024-09-28 23:38:15.143485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:27.003 [2024-09-28 23:38:15.143495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.658 ms 00:17:27.003 [2024-09-28 23:38:15.143503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.003 [2024-09-28 23:38:15.150453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.003 [2024-09-28 23:38:15.150481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:27.003 
[2024-09-28 23:38:15.150494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.913 ms 00:17:27.003 [2024-09-28 23:38:15.150501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.262 [2024-09-28 23:38:15.175109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.262 [2024-09-28 23:38:15.175145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:27.262 [2024-09-28 23:38:15.175156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.542 ms 00:17:27.262 [2024-09-28 23:38:15.175163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.262 [2024-09-28 23:38:15.189042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.262 [2024-09-28 23:38:15.189077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:27.262 [2024-09-28 23:38:15.189088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.843 ms 00:17:27.262 [2024-09-28 23:38:15.189095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.262 [2024-09-28 23:38:15.189217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.262 [2024-09-28 23:38:15.189227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:27.262 [2024-09-28 23:38:15.189236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:17:27.262 [2024-09-28 23:38:15.189243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.262 [2024-09-28 23:38:15.212746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.262 [2024-09-28 23:38:15.212779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:27.262 [2024-09-28 23:38:15.212789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.484 ms 00:17:27.262 [2024-09-28 23:38:15.212796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.262 [2024-09-28 23:38:15.235901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.262 [2024-09-28 23:38:15.235934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:27.262 [2024-09-28 23:38:15.235943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.070 ms 00:17:27.262 [2024-09-28 23:38:15.235950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.262 [2024-09-28 23:38:15.258445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.262 [2024-09-28 23:38:15.258479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:27.262 [2024-09-28 23:38:15.258489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.461 ms 00:17:27.262 [2024-09-28 23:38:15.258496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.262 [2024-09-28 23:38:15.281382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.262 [2024-09-28 23:38:15.281415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:27.262 [2024-09-28 23:38:15.281425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.817 ms 00:17:27.262 [2024-09-28 23:38:15.281432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.262 [2024-09-28 23:38:15.281465] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:27.262 [2024-09-28 23:38:15.281479] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:27.262 [2024-09-28 23:38:15.281489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:27.262 [2024-09-28 23:38:15.281497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:27.262 [2024-09-28 23:38:15.281504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:27.262 [2024-09-28 23:38:15.281521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:27.262 [2024-09-28 23:38:15.281528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:27.262 [2024-09-28 23:38:15.281536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:27.262 [2024-09-28 23:38:15.281544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:27.262 [2024-09-28 23:38:15.281551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:27.262 [2024-09-28 23:38:15.281559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:27.262 [2024-09-28 23:38:15.281567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:27.262 [2024-09-28 23:38:15.281577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:27.262 [2024-09-28 23:38:15.281585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:27.262 [2024-09-28 23:38:15.281592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:27.262 [2024-09-28 23:38:15.281599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:27.262 [2024-09-28 23:38:15.281607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:27.262 [2024-09-28 23:38:15.281614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:27.262 [2024-09-28 23:38:15.281621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:27.262 [2024-09-28 23:38:15.281629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:27.262 [2024-09-28 23:38:15.281636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:27.262 [2024-09-28 23:38:15.281643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:27.262 [2024-09-28 23:38:15.281651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:27.262 [2024-09-28 23:38:15.281658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:27.262 [2024-09-28 23:38:15.281665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:27.262 [2024-09-28 23:38:15.281672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:27.262 [2024-09-28 
23:38:15.281679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:27.262 [2024-09-28 23:38:15.281686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:27.262 [2024-09-28 23:38:15.281693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:27.262 [2024-09-28 23:38:15.281700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:27.262 [2024-09-28 23:38:15.281708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:27.262 [2024-09-28 23:38:15.281716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:27.262 [2024-09-28 23:38:15.281723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:27.262 [2024-09-28 23:38:15.281730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:27.262 [2024-09-28 23:38:15.281737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:27.262 [2024-09-28 23:38:15.281744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:27.262 [2024-09-28 23:38:15.281751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:27.262 [2024-09-28 23:38:15.281759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:27.262 [2024-09-28 23:38:15.281765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:27.262 [2024-09-28 23:38:15.281772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:27.262 [2024-09-28 23:38:15.281780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:27.262 [2024-09-28 23:38:15.281787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:27.262 [2024-09-28 23:38:15.281794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:27.262 [2024-09-28 23:38:15.281802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:27.262 [2024-09-28 23:38:15.281809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:27.262 [2024-09-28 23:38:15.281816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:27.262 [2024-09-28 23:38:15.281825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:27.263 [2024-09-28 23:38:15.281832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:27.263 [2024-09-28 23:38:15.281838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:27.263 [2024-09-28 23:38:15.281845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:27.263 [2024-09-28 23:38:15.281852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 
00:17:27.263 [2024-09-28 23:38:15.281860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:27.263 [2024-09-28 23:38:15.281867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:27.263 [2024-09-28 23:38:15.281874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:27.263 [2024-09-28 23:38:15.281881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:27.263 [2024-09-28 23:38:15.281888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:27.263 [2024-09-28 23:38:15.281895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:27.263 [2024-09-28 23:38:15.281902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:27.263 [2024-09-28 23:38:15.281909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:27.263 [2024-09-28 23:38:15.281916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:27.263 [2024-09-28 23:38:15.281923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:27.263 [2024-09-28 23:38:15.281930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:27.263 [2024-09-28 23:38:15.281938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:27.263 [2024-09-28 23:38:15.281946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:27.263 [2024-09-28 23:38:15.281953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:27.263 [2024-09-28 23:38:15.281960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:27.263 [2024-09-28 23:38:15.281968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:27.263 [2024-09-28 23:38:15.281975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:27.263 [2024-09-28 23:38:15.281982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:27.263 [2024-09-28 23:38:15.281989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:27.263 [2024-09-28 23:38:15.281996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:27.263 [2024-09-28 23:38:15.282003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:27.263 [2024-09-28 23:38:15.282011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:27.263 [2024-09-28 23:38:15.282018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:27.263 [2024-09-28 23:38:15.282025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:27.263 [2024-09-28 23:38:15.282033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 
wr_cnt: 0 state: free 00:17:27.263 [2024-09-28 23:38:15.282040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:27.263 [2024-09-28 23:38:15.282047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:27.263 [2024-09-28 23:38:15.282054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:27.263 [2024-09-28 23:38:15.282061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:27.263 [2024-09-28 23:38:15.282068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:27.263 [2024-09-28 23:38:15.282075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:27.263 [2024-09-28 23:38:15.282083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:27.263 [2024-09-28 23:38:15.282090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:27.263 [2024-09-28 23:38:15.282097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:27.263 [2024-09-28 23:38:15.282104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:27.263 [2024-09-28 23:38:15.282112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:27.263 [2024-09-28 23:38:15.282119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:27.263 [2024-09-28 23:38:15.282126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:27.263 [2024-09-28 23:38:15.282133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:27.263 [2024-09-28 23:38:15.282140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:27.263 [2024-09-28 23:38:15.282148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:27.263 [2024-09-28 23:38:15.282155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:27.263 [2024-09-28 23:38:15.282162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:27.263 [2024-09-28 23:38:15.282170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:27.263 [2024-09-28 23:38:15.282178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:27.263 [2024-09-28 23:38:15.282185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:27.263 [2024-09-28 23:38:15.282192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:27.263 [2024-09-28 23:38:15.282199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:27.263 [2024-09-28 23:38:15.282207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:27.263 [2024-09-28 23:38:15.282221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:27.263 [2024-09-28 23:38:15.282237] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:27.263 [2024-09-28 23:38:15.282244] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 654be1da-fb74-4145-8224-07c374972a2a 00:17:27.263 [2024-09-28 23:38:15.282252] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:27.263 [2024-09-28 23:38:15.282259] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:27.263 [2024-09-28 23:38:15.282266] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:27.263 [2024-09-28 23:38:15.282276] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:27.263 [2024-09-28 23:38:15.282282] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:27.263 [2024-09-28 23:38:15.282290] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:27.263 [2024-09-28 23:38:15.282297] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:27.263 [2024-09-28 23:38:15.282303] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:27.263 [2024-09-28 23:38:15.282309] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:27.263 [2024-09-28 23:38:15.282316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.263 [2024-09-28 23:38:15.282323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:27.263 [2024-09-28 23:38:15.282331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.851 ms 00:17:27.263 [2024-09-28 23:38:15.282338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.263 [2024-09-28 23:38:15.294648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.263 [2024-09-28 23:38:15.294679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:27.263 [2024-09-28 23:38:15.294689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.268 ms 00:17:27.263 [2024-09-28 23:38:15.294698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.263 [2024-09-28 23:38:15.295047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.263 [2024-09-28 23:38:15.295057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:27.263 [2024-09-28 23:38:15.295065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.319 ms 00:17:27.263 [2024-09-28 23:38:15.295072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.263 [2024-09-28 23:38:15.325695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:27.263 [2024-09-28 23:38:15.325730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:27.263 [2024-09-28 23:38:15.325740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:27.263 [2024-09-28 23:38:15.325747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.263 [2024-09-28 23:38:15.325818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:27.263 [2024-09-28 23:38:15.325827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:27.263 [2024-09-28 23:38:15.325835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:27.263 [2024-09-28 23:38:15.325842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:17:27.263 [2024-09-28 23:38:15.325879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:27.263 [2024-09-28 23:38:15.325890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:27.263 [2024-09-28 23:38:15.325897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:27.263 [2024-09-28 23:38:15.325904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.263 [2024-09-28 23:38:15.325920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:27.263 [2024-09-28 23:38:15.325927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:27.263 [2024-09-28 23:38:15.325934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:27.263 [2024-09-28 23:38:15.325941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.263 [2024-09-28 23:38:15.401424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:27.263 [2024-09-28 23:38:15.401468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:27.263 [2024-09-28 23:38:15.401477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:27.263 [2024-09-28 23:38:15.401485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.521 [2024-09-28 23:38:15.463078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:27.521 [2024-09-28 23:38:15.463118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:27.521 [2024-09-28 23:38:15.463128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:27.521 [2024-09-28 23:38:15.463136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.521 [2024-09-28 23:38:15.463186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:27.521 [2024-09-28 23:38:15.463196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:27.521 [2024-09-28 23:38:15.463208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:27.521 [2024-09-28 23:38:15.463215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.521 [2024-09-28 23:38:15.463243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:27.521 [2024-09-28 23:38:15.463251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:27.521 [2024-09-28 23:38:15.463258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:27.521 [2024-09-28 23:38:15.463266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.521 [2024-09-28 23:38:15.463347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:27.521 [2024-09-28 23:38:15.463357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:27.521 [2024-09-28 23:38:15.463365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:27.521 [2024-09-28 23:38:15.463374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.521 [2024-09-28 23:38:15.463406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:27.521 [2024-09-28 23:38:15.463415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:27.521 [2024-09-28 23:38:15.463423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:27.521 
[2024-09-28 23:38:15.463430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.521 [2024-09-28 23:38:15.463465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:27.521 [2024-09-28 23:38:15.463473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:27.521 [2024-09-28 23:38:15.463481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:27.521 [2024-09-28 23:38:15.463491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.521 [2024-09-28 23:38:15.463550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:27.521 [2024-09-28 23:38:15.463560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:27.521 [2024-09-28 23:38:15.463568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:27.521 [2024-09-28 23:38:15.463575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.521 [2024-09-28 23:38:15.463707] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 327.481 ms, result 0 00:17:28.086 00:17:28.086 00:17:28.344 23:38:16 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:17:28.911 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:17:28.911 23:38:16 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:17:28.911 23:38:16 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:17:28.911 23:38:16 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:17:28.911 23:38:16 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:28.911 23:38:16 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:17:28.911 23:38:16 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:17:28.911 23:38:16 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 74453 00:17:28.911 23:38:16 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 74453 ']' 00:17:28.911 23:38:16 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 74453 00:17:28.911 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (74453) - No such process 00:17:28.911 Process with pid 74453 is not found 00:17:28.911 23:38:16 ftl.ftl_trim -- common/autotest_common.sh@977 -- # echo 'Process with pid 74453 is not found' 00:17:28.911 ************************************ 00:17:28.911 END TEST ftl_trim 00:17:28.911 ************************************ 00:17:28.911 00:17:28.911 real 0m55.807s 00:17:28.911 user 1m21.684s 00:17:28.911 sys 0m4.649s 00:17:28.911 23:38:16 ftl.ftl_trim -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:28.911 23:38:16 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:17:28.911 23:38:16 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:17:28.911 23:38:16 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:17:28.911 23:38:16 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:28.911 23:38:16 ftl -- common/autotest_common.sh@10 -- # set +x 00:17:28.911 ************************************ 00:17:28.911 START TEST ftl_restore 00:17:28.911 ************************************ 00:17:28.911 23:38:16 ftl.ftl_restore -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 
-c 0000:00:10.0 0000:00:11.0 00:17:28.911 * Looking for test storage... 00:17:28.911 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:28.911 23:38:17 ftl.ftl_restore -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:28.911 23:38:17 ftl.ftl_restore -- common/autotest_common.sh@1681 -- # lcov --version 00:17:28.911 23:38:17 ftl.ftl_restore -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:29.170 23:38:17 ftl.ftl_restore -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:29.170 23:38:17 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:29.170 23:38:17 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:29.170 23:38:17 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:29.170 23:38:17 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:17:29.170 23:38:17 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:17:29.170 23:38:17 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:17:29.170 23:38:17 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:17:29.170 23:38:17 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:17:29.170 23:38:17 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:17:29.170 23:38:17 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:17:29.170 23:38:17 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:29.170 23:38:17 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:17:29.170 23:38:17 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:17:29.170 23:38:17 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:29.170 23:38:17 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:29.170 23:38:17 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:17:29.170 23:38:17 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:17:29.170 23:38:17 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:29.170 23:38:17 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:17:29.170 23:38:17 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:17:29.170 23:38:17 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:17:29.170 23:38:17 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:17:29.170 23:38:17 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:29.170 23:38:17 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:17:29.170 23:38:17 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:17:29.170 23:38:17 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:29.170 23:38:17 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:29.170 23:38:17 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:17:29.170 23:38:17 ftl.ftl_restore -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:29.170 23:38:17 ftl.ftl_restore -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:29.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:29.170 --rc genhtml_branch_coverage=1 00:17:29.170 --rc genhtml_function_coverage=1 00:17:29.170 --rc genhtml_legend=1 00:17:29.170 --rc geninfo_all_blocks=1 00:17:29.170 --rc geninfo_unexecuted_blocks=1 00:17:29.170 00:17:29.170 ' 00:17:29.170 23:38:17 ftl.ftl_restore -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:29.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:29.170 --rc 
genhtml_branch_coverage=1 00:17:29.170 --rc genhtml_function_coverage=1 00:17:29.170 --rc genhtml_legend=1 00:17:29.170 --rc geninfo_all_blocks=1 00:17:29.170 --rc geninfo_unexecuted_blocks=1 00:17:29.170 00:17:29.170 ' 00:17:29.170 23:38:17 ftl.ftl_restore -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:29.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:29.170 --rc genhtml_branch_coverage=1 00:17:29.170 --rc genhtml_function_coverage=1 00:17:29.170 --rc genhtml_legend=1 00:17:29.170 --rc geninfo_all_blocks=1 00:17:29.170 --rc geninfo_unexecuted_blocks=1 00:17:29.170 00:17:29.170 ' 00:17:29.170 23:38:17 ftl.ftl_restore -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:29.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:29.170 --rc genhtml_branch_coverage=1 00:17:29.170 --rc genhtml_function_coverage=1 00:17:29.170 --rc genhtml_legend=1 00:17:29.170 --rc geninfo_all_blocks=1 00:17:29.170 --rc geninfo_unexecuted_blocks=1 00:17:29.170 00:17:29.170 ' 00:17:29.170 23:38:17 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:29.170 23:38:17 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:17:29.170 23:38:17 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:29.170 23:38:17 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:29.170 23:38:17 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:17:29.170 23:38:17 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:29.170 23:38:17 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:29.170 23:38:17 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:29.170 23:38:17 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:29.170 23:38:17 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:29.170 23:38:17 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:29.170 23:38:17 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:29.170 23:38:17 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:29.170 23:38:17 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:29.170 23:38:17 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:29.170 23:38:17 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:29.170 23:38:17 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:29.170 23:38:17 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:29.170 23:38:17 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:29.170 23:38:17 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:29.170 23:38:17 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:29.170 23:38:17 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:29.170 23:38:17 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:29.170 23:38:17 ftl.ftl_restore -- ftl/common.sh@22 -- 
# export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:29.170 23:38:17 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:29.170 23:38:17 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:29.170 23:38:17 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:29.170 23:38:17 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:29.170 23:38:17 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:29.170 23:38:17 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:29.170 23:38:17 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:17:29.170 23:38:17 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.txVGOMrLZF 00:17:29.170 23:38:17 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:17:29.170 23:38:17 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:17:29.170 23:38:17 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:17:29.170 23:38:17 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:17:29.170 23:38:17 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:17:29.170 23:38:17 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:17:29.170 23:38:17 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:17:29.170 23:38:17 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:17:29.170 23:38:17 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=74726 00:17:29.170 23:38:17 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 74726 00:17:29.170 23:38:17 ftl.ftl_restore -- common/autotest_common.sh@831 -- # '[' -z 74726 ']' 00:17:29.170 23:38:17 ftl.ftl_restore -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:29.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:29.170 23:38:17 ftl.ftl_restore -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:29.170 23:38:17 ftl.ftl_restore -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:29.170 23:38:17 ftl.ftl_restore -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:29.170 23:38:17 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:17:29.170 23:38:17 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:29.170 [2024-09-28 23:38:17.192070] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:17:29.170 [2024-09-28 23:38:17.192190] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74726 ] 00:17:29.429 [2024-09-28 23:38:17.338753] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:29.429 [2024-09-28 23:38:17.517414] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:29.995 23:38:18 ftl.ftl_restore -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:29.995 23:38:18 ftl.ftl_restore -- common/autotest_common.sh@864 -- # return 0 00:17:29.995 23:38:18 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:17:29.995 23:38:18 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:17:29.995 23:38:18 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:17:29.995 23:38:18 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:17:29.995 23:38:18 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:17:29.995 23:38:18 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:17:30.253 23:38:18 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:17:30.253 23:38:18 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:17:30.253 23:38:18 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:17:30.253 23:38:18 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:17:30.253 23:38:18 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:30.253 23:38:18 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:17:30.253 23:38:18 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:17:30.253 23:38:18 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:17:30.512 23:38:18 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:30.512 { 00:17:30.512 "name": "nvme0n1", 00:17:30.512 "aliases": [ 00:17:30.512 "232f56fc-4190-490f-9fab-5ebb184ff95d" 00:17:30.512 ], 00:17:30.512 "product_name": "NVMe disk", 00:17:30.512 "block_size": 4096, 00:17:30.512 "num_blocks": 1310720, 00:17:30.512 "uuid": "232f56fc-4190-490f-9fab-5ebb184ff95d", 00:17:30.512 "numa_id": -1, 00:17:30.512 "assigned_rate_limits": { 00:17:30.512 "rw_ios_per_sec": 0, 00:17:30.512 "rw_mbytes_per_sec": 0, 00:17:30.512 "r_mbytes_per_sec": 0, 00:17:30.512 "w_mbytes_per_sec": 0 00:17:30.512 }, 00:17:30.512 "claimed": true, 00:17:30.512 "claim_type": "read_many_write_one", 00:17:30.512 "zoned": false, 00:17:30.512 "supported_io_types": { 00:17:30.512 "read": true, 00:17:30.512 "write": true, 00:17:30.512 "unmap": true, 00:17:30.512 "flush": true, 00:17:30.512 "reset": true, 00:17:30.512 "nvme_admin": true, 00:17:30.512 "nvme_io": true, 00:17:30.512 "nvme_io_md": false, 00:17:30.512 "write_zeroes": true, 00:17:30.512 "zcopy": false, 00:17:30.512 "get_zone_info": false, 00:17:30.512 "zone_management": false, 00:17:30.512 "zone_append": false, 00:17:30.512 "compare": true, 00:17:30.512 "compare_and_write": false, 00:17:30.512 "abort": true, 00:17:30.512 "seek_hole": false, 00:17:30.512 "seek_data": false, 00:17:30.512 "copy": true, 00:17:30.512 "nvme_iov_md": false 00:17:30.512 }, 00:17:30.512 "driver_specific": { 00:17:30.512 "nvme": [ 
00:17:30.512 { 00:17:30.512 "pci_address": "0000:00:11.0", 00:17:30.512 "trid": { 00:17:30.512 "trtype": "PCIe", 00:17:30.512 "traddr": "0000:00:11.0" 00:17:30.512 }, 00:17:30.512 "ctrlr_data": { 00:17:30.512 "cntlid": 0, 00:17:30.512 "vendor_id": "0x1b36", 00:17:30.512 "model_number": "QEMU NVMe Ctrl", 00:17:30.512 "serial_number": "12341", 00:17:30.512 "firmware_revision": "8.0.0", 00:17:30.512 "subnqn": "nqn.2019-08.org.qemu:12341", 00:17:30.512 "oacs": { 00:17:30.512 "security": 0, 00:17:30.512 "format": 1, 00:17:30.512 "firmware": 0, 00:17:30.512 "ns_manage": 1 00:17:30.512 }, 00:17:30.512 "multi_ctrlr": false, 00:17:30.512 "ana_reporting": false 00:17:30.512 }, 00:17:30.512 "vs": { 00:17:30.512 "nvme_version": "1.4" 00:17:30.512 }, 00:17:30.512 "ns_data": { 00:17:30.512 "id": 1, 00:17:30.512 "can_share": false 00:17:30.512 } 00:17:30.512 } 00:17:30.512 ], 00:17:30.512 "mp_policy": "active_passive" 00:17:30.512 } 00:17:30.512 } 00:17:30.512 ]' 00:17:30.512 23:38:18 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:30.512 23:38:18 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:17:30.512 23:38:18 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:30.512 23:38:18 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=1310720 00:17:30.512 23:38:18 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:17:30.512 23:38:18 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 5120 00:17:30.512 23:38:18 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:17:30.512 23:38:18 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:17:30.512 23:38:18 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:17:30.512 23:38:18 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:30.512 23:38:18 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:17:30.770 23:38:18 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=b000bb5f-0b3c-41d8-980f-4d958a3a2bbf 00:17:30.770 23:38:18 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:17:30.770 23:38:18 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b000bb5f-0b3c-41d8-980f-4d958a3a2bbf 00:17:31.029 23:38:18 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:17:31.029 23:38:19 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=aab713cf-273a-4575-96ad-82535596ed16 00:17:31.029 23:38:19 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u aab713cf-273a-4575-96ad-82535596ed16 00:17:31.291 23:38:19 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=9010f006-db8c-43dc-beff-1b9fc9181061 00:17:31.291 23:38:19 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:17:31.291 23:38:19 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 9010f006-db8c-43dc-beff-1b9fc9181061 00:17:31.291 23:38:19 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:17:31.291 23:38:19 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:17:31.291 23:38:19 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=9010f006-db8c-43dc-beff-1b9fc9181061 00:17:31.291 23:38:19 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:17:31.291 23:38:19 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 
9010f006-db8c-43dc-beff-1b9fc9181061 00:17:31.291 23:38:19 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=9010f006-db8c-43dc-beff-1b9fc9181061 00:17:31.291 23:38:19 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:31.291 23:38:19 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:17:31.291 23:38:19 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:17:31.291 23:38:19 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9010f006-db8c-43dc-beff-1b9fc9181061 00:17:31.548 23:38:19 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:31.548 { 00:17:31.548 "name": "9010f006-db8c-43dc-beff-1b9fc9181061", 00:17:31.548 "aliases": [ 00:17:31.548 "lvs/nvme0n1p0" 00:17:31.548 ], 00:17:31.548 "product_name": "Logical Volume", 00:17:31.548 "block_size": 4096, 00:17:31.548 "num_blocks": 26476544, 00:17:31.548 "uuid": "9010f006-db8c-43dc-beff-1b9fc9181061", 00:17:31.548 "assigned_rate_limits": { 00:17:31.548 "rw_ios_per_sec": 0, 00:17:31.548 "rw_mbytes_per_sec": 0, 00:17:31.548 "r_mbytes_per_sec": 0, 00:17:31.548 "w_mbytes_per_sec": 0 00:17:31.548 }, 00:17:31.548 "claimed": false, 00:17:31.548 "zoned": false, 00:17:31.548 "supported_io_types": { 00:17:31.548 "read": true, 00:17:31.548 "write": true, 00:17:31.548 "unmap": true, 00:17:31.548 "flush": false, 00:17:31.548 "reset": true, 00:17:31.548 "nvme_admin": false, 00:17:31.548 "nvme_io": false, 00:17:31.548 "nvme_io_md": false, 00:17:31.548 "write_zeroes": true, 00:17:31.548 "zcopy": false, 00:17:31.548 "get_zone_info": false, 00:17:31.548 "zone_management": false, 00:17:31.548 "zone_append": false, 00:17:31.548 "compare": false, 00:17:31.548 "compare_and_write": false, 00:17:31.548 "abort": false, 00:17:31.548 "seek_hole": true, 00:17:31.548 "seek_data": true, 00:17:31.548 "copy": false, 00:17:31.548 "nvme_iov_md": false 00:17:31.548 }, 00:17:31.549 "driver_specific": { 00:17:31.549 "lvol": { 00:17:31.549 "lvol_store_uuid": "aab713cf-273a-4575-96ad-82535596ed16", 00:17:31.549 "base_bdev": "nvme0n1", 00:17:31.549 "thin_provision": true, 00:17:31.549 "num_allocated_clusters": 0, 00:17:31.549 "snapshot": false, 00:17:31.549 "clone": false, 00:17:31.549 "esnap_clone": false 00:17:31.549 } 00:17:31.549 } 00:17:31.549 } 00:17:31.549 ]' 00:17:31.549 23:38:19 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:31.549 23:38:19 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:17:31.549 23:38:19 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:31.549 23:38:19 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:17:31.549 23:38:19 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:17:31.549 23:38:19 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:17:31.549 23:38:19 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:17:31.549 23:38:19 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:17:31.549 23:38:19 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:17:31.807 23:38:19 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:17:31.807 23:38:19 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:17:31.807 23:38:19 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 9010f006-db8c-43dc-beff-1b9fc9181061 00:17:31.807 23:38:19 
ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=9010f006-db8c-43dc-beff-1b9fc9181061 00:17:31.807 23:38:19 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:31.807 23:38:19 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:17:31.807 23:38:19 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:17:31.807 23:38:19 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9010f006-db8c-43dc-beff-1b9fc9181061 00:17:32.065 23:38:20 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:32.065 { 00:17:32.065 "name": "9010f006-db8c-43dc-beff-1b9fc9181061", 00:17:32.065 "aliases": [ 00:17:32.065 "lvs/nvme0n1p0" 00:17:32.065 ], 00:17:32.065 "product_name": "Logical Volume", 00:17:32.065 "block_size": 4096, 00:17:32.065 "num_blocks": 26476544, 00:17:32.065 "uuid": "9010f006-db8c-43dc-beff-1b9fc9181061", 00:17:32.065 "assigned_rate_limits": { 00:17:32.065 "rw_ios_per_sec": 0, 00:17:32.065 "rw_mbytes_per_sec": 0, 00:17:32.065 "r_mbytes_per_sec": 0, 00:17:32.065 "w_mbytes_per_sec": 0 00:17:32.065 }, 00:17:32.065 "claimed": false, 00:17:32.065 "zoned": false, 00:17:32.065 "supported_io_types": { 00:17:32.065 "read": true, 00:17:32.065 "write": true, 00:17:32.065 "unmap": true, 00:17:32.065 "flush": false, 00:17:32.065 "reset": true, 00:17:32.065 "nvme_admin": false, 00:17:32.065 "nvme_io": false, 00:17:32.065 "nvme_io_md": false, 00:17:32.065 "write_zeroes": true, 00:17:32.065 "zcopy": false, 00:17:32.065 "get_zone_info": false, 00:17:32.065 "zone_management": false, 00:17:32.065 "zone_append": false, 00:17:32.065 "compare": false, 00:17:32.065 "compare_and_write": false, 00:17:32.065 "abort": false, 00:17:32.065 "seek_hole": true, 00:17:32.065 "seek_data": true, 00:17:32.065 "copy": false, 00:17:32.065 "nvme_iov_md": false 00:17:32.065 }, 00:17:32.065 "driver_specific": { 00:17:32.065 "lvol": { 00:17:32.065 "lvol_store_uuid": "aab713cf-273a-4575-96ad-82535596ed16", 00:17:32.065 "base_bdev": "nvme0n1", 00:17:32.065 "thin_provision": true, 00:17:32.065 "num_allocated_clusters": 0, 00:17:32.065 "snapshot": false, 00:17:32.065 "clone": false, 00:17:32.065 "esnap_clone": false 00:17:32.065 } 00:17:32.065 } 00:17:32.065 } 00:17:32.065 ]' 00:17:32.065 23:38:20 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:32.065 23:38:20 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:17:32.065 23:38:20 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:32.065 23:38:20 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:17:32.065 23:38:20 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:17:32.065 23:38:20 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:17:32.065 23:38:20 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:17:32.065 23:38:20 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:17:32.323 23:38:20 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:17:32.323 23:38:20 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 9010f006-db8c-43dc-beff-1b9fc9181061 00:17:32.323 23:38:20 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=9010f006-db8c-43dc-beff-1b9fc9181061 00:17:32.323 23:38:20 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:32.323 23:38:20 ftl.ftl_restore -- 
common/autotest_common.sh@1380 -- # local bs 00:17:32.323 23:38:20 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:17:32.323 23:38:20 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9010f006-db8c-43dc-beff-1b9fc9181061 00:17:32.582 23:38:20 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:32.582 { 00:17:32.582 "name": "9010f006-db8c-43dc-beff-1b9fc9181061", 00:17:32.582 "aliases": [ 00:17:32.582 "lvs/nvme0n1p0" 00:17:32.582 ], 00:17:32.582 "product_name": "Logical Volume", 00:17:32.582 "block_size": 4096, 00:17:32.582 "num_blocks": 26476544, 00:17:32.582 "uuid": "9010f006-db8c-43dc-beff-1b9fc9181061", 00:17:32.582 "assigned_rate_limits": { 00:17:32.582 "rw_ios_per_sec": 0, 00:17:32.582 "rw_mbytes_per_sec": 0, 00:17:32.582 "r_mbytes_per_sec": 0, 00:17:32.582 "w_mbytes_per_sec": 0 00:17:32.582 }, 00:17:32.582 "claimed": false, 00:17:32.582 "zoned": false, 00:17:32.582 "supported_io_types": { 00:17:32.582 "read": true, 00:17:32.582 "write": true, 00:17:32.582 "unmap": true, 00:17:32.582 "flush": false, 00:17:32.582 "reset": true, 00:17:32.582 "nvme_admin": false, 00:17:32.582 "nvme_io": false, 00:17:32.582 "nvme_io_md": false, 00:17:32.582 "write_zeroes": true, 00:17:32.582 "zcopy": false, 00:17:32.582 "get_zone_info": false, 00:17:32.582 "zone_management": false, 00:17:32.582 "zone_append": false, 00:17:32.582 "compare": false, 00:17:32.582 "compare_and_write": false, 00:17:32.582 "abort": false, 00:17:32.582 "seek_hole": true, 00:17:32.582 "seek_data": true, 00:17:32.582 "copy": false, 00:17:32.582 "nvme_iov_md": false 00:17:32.582 }, 00:17:32.582 "driver_specific": { 00:17:32.582 "lvol": { 00:17:32.582 "lvol_store_uuid": "aab713cf-273a-4575-96ad-82535596ed16", 00:17:32.582 "base_bdev": "nvme0n1", 00:17:32.582 "thin_provision": true, 00:17:32.582 "num_allocated_clusters": 0, 00:17:32.582 "snapshot": false, 00:17:32.582 "clone": false, 00:17:32.582 "esnap_clone": false 00:17:32.582 } 00:17:32.582 } 00:17:32.582 } 00:17:32.582 ]' 00:17:32.582 23:38:20 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:32.582 23:38:20 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:17:32.582 23:38:20 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:32.582 23:38:20 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:17:32.582 23:38:20 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:17:32.582 23:38:20 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:17:32.582 23:38:20 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:17:32.582 23:38:20 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 9010f006-db8c-43dc-beff-1b9fc9181061 --l2p_dram_limit 10' 00:17:32.582 23:38:20 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:17:32.582 23:38:20 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:17:32.582 23:38:20 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:17:32.582 23:38:20 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:17:32.582 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:17:32.582 23:38:20 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 9010f006-db8c-43dc-beff-1b9fc9181061 --l2p_dram_limit 10 -c nvc0n1p0 00:17:32.582 
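Two things in the trace above are worth a note before the FTL startup log that follows. First, the "[: : integer expression expected" message from restore.sh line 54 is a benign shell error: the script compares a flag variable that is empty in this run against 1 with -eq, so the test simply fails rather than evaluating. Second, the bring-up just traced reduces to a short RPC sequence. A minimal sketch of both, assuming a running spdk_tgt and the same PCI addresses as this run (FTL_FLAG is a hypothetical stand-in for the unnamed flag at line 54; the real harness lives in test/ftl/restore.sh and ftl/common.sh):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Base device: attach the QEMU NVMe controller and build a thin lvol on it.
    $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
    $rpc bdev_lvol_create_lvstore nvme0n1 lvs
    lvol=$($rpc bdev_lvol_create nvme0n1p0 103424 -t \
           -u "$($rpc bdev_lvol_get_lvstores | jq -r '.[] | .uuid')")

    # NV cache: attach the second controller and split off a 5171 MiB partition.
    $rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
    $rpc bdev_split_create nvc0n1 -s 5171 1

    # Defaulting the flag avoids the "[: : integer expression expected" error:
    [ "${FTL_FLAG:-0}" -eq 1 ] && echo 'flag set'

    # Create ftl0 with a 10 MiB L2P DRAM limit, cached by nvc0n1p0.
    $rpc -t 240 bdev_ftl_create -b ftl0 -d "$lvol" --l2p_dram_limit 10 -c nvc0n1p0

The traced run additionally deleted a leftover lvstore first (clear_lvols); the startup notices below are the output of that final bdev_ftl_create call.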
[2024-09-28 23:38:20.744031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.582 [2024-09-28 23:38:20.744080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:32.582 [2024-09-28 23:38:20.744093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:32.582 [2024-09-28 23:38:20.744100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.582 [2024-09-28 23:38:20.744144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.582 [2024-09-28 23:38:20.744151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:32.582 [2024-09-28 23:38:20.744160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:17:32.582 [2024-09-28 23:38:20.744179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.582 [2024-09-28 23:38:20.744201] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:32.582 [2024-09-28 23:38:20.744793] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:32.582 [2024-09-28 23:38:20.744815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.582 [2024-09-28 23:38:20.744821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:32.582 [2024-09-28 23:38:20.744829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.621 ms 00:17:32.582 [2024-09-28 23:38:20.744836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.582 [2024-09-28 23:38:20.744892] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID a7631990-80f8-4fde-90d3-f206050c63ad 00:17:32.582 [2024-09-28 23:38:20.745864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.582 [2024-09-28 23:38:20.745894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:17:32.582 [2024-09-28 23:38:20.745902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:17:32.582 [2024-09-28 23:38:20.745910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.841 [2024-09-28 23:38:20.750660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.841 [2024-09-28 23:38:20.750691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:32.841 [2024-09-28 23:38:20.750699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.714 ms 00:17:32.841 [2024-09-28 23:38:20.750706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.841 [2024-09-28 23:38:20.750776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.841 [2024-09-28 23:38:20.750784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:32.841 [2024-09-28 23:38:20.750791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:17:32.841 [2024-09-28 23:38:20.750803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.841 [2024-09-28 23:38:20.750839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.841 [2024-09-28 23:38:20.750848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:32.841 [2024-09-28 23:38:20.750854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:32.841 [2024-09-28 23:38:20.750861] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.841 [2024-09-28 23:38:20.750878] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:32.841 [2024-09-28 23:38:20.753762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.841 [2024-09-28 23:38:20.753789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:32.841 [2024-09-28 23:38:20.753798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.886 ms 00:17:32.841 [2024-09-28 23:38:20.753804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.841 [2024-09-28 23:38:20.753833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.841 [2024-09-28 23:38:20.753839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:32.841 [2024-09-28 23:38:20.753847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:32.841 [2024-09-28 23:38:20.753854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.841 [2024-09-28 23:38:20.753874] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:17:32.841 [2024-09-28 23:38:20.753978] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:32.841 [2024-09-28 23:38:20.753991] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:32.841 [2024-09-28 23:38:20.753999] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:32.841 [2024-09-28 23:38:20.754010] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:32.841 [2024-09-28 23:38:20.754017] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:32.841 [2024-09-28 23:38:20.754025] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:17:32.841 [2024-09-28 23:38:20.754031] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:32.841 [2024-09-28 23:38:20.754037] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:32.841 [2024-09-28 23:38:20.754043] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:32.841 [2024-09-28 23:38:20.754051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.841 [2024-09-28 23:38:20.754062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:32.841 [2024-09-28 23:38:20.754069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.178 ms 00:17:32.841 [2024-09-28 23:38:20.754074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.841 [2024-09-28 23:38:20.754138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.841 [2024-09-28 23:38:20.754147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:32.841 [2024-09-28 23:38:20.754154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:17:32.841 [2024-09-28 23:38:20.754160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.841 [2024-09-28 23:38:20.754234] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:32.841 [2024-09-28 23:38:20.754242] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region sb 00:17:32.841 [2024-09-28 23:38:20.754249] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:32.842 [2024-09-28 23:38:20.754255] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:32.842 [2024-09-28 23:38:20.754262] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:32.842 [2024-09-28 23:38:20.754268] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:32.842 [2024-09-28 23:38:20.754274] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:17:32.842 [2024-09-28 23:38:20.754280] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:32.842 [2024-09-28 23:38:20.754286] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:17:32.842 [2024-09-28 23:38:20.754292] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:32.842 [2024-09-28 23:38:20.754298] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:32.842 [2024-09-28 23:38:20.754303] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:17:32.842 [2024-09-28 23:38:20.754309] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:32.842 [2024-09-28 23:38:20.754314] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:32.842 [2024-09-28 23:38:20.754321] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:17:32.842 [2024-09-28 23:38:20.754327] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:32.842 [2024-09-28 23:38:20.754335] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:32.842 [2024-09-28 23:38:20.754340] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:17:32.842 [2024-09-28 23:38:20.754347] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:32.842 [2024-09-28 23:38:20.754353] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:32.842 [2024-09-28 23:38:20.754367] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:17:32.842 [2024-09-28 23:38:20.754372] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:32.842 [2024-09-28 23:38:20.754380] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:32.842 [2024-09-28 23:38:20.754385] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:17:32.842 [2024-09-28 23:38:20.754392] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:32.842 [2024-09-28 23:38:20.754397] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:32.842 [2024-09-28 23:38:20.754404] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:17:32.842 [2024-09-28 23:38:20.754408] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:32.842 [2024-09-28 23:38:20.754415] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:32.842 [2024-09-28 23:38:20.754420] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:17:32.842 [2024-09-28 23:38:20.754426] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:32.842 [2024-09-28 23:38:20.754431] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:32.842 [2024-09-28 23:38:20.754439] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:17:32.842 [2024-09-28 23:38:20.754444] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:32.842 [2024-09-28 23:38:20.754450] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:32.842 [2024-09-28 23:38:20.754455] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:17:32.842 [2024-09-28 23:38:20.754461] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:32.842 [2024-09-28 23:38:20.754466] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:32.842 [2024-09-28 23:38:20.754473] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:17:32.842 [2024-09-28 23:38:20.754477] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:32.842 [2024-09-28 23:38:20.754484] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:32.842 [2024-09-28 23:38:20.754488] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:17:32.842 [2024-09-28 23:38:20.754494] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:32.842 [2024-09-28 23:38:20.754499] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:32.842 [2024-09-28 23:38:20.754506] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:32.842 [2024-09-28 23:38:20.754529] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:32.842 [2024-09-28 23:38:20.754536] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:32.842 [2024-09-28 23:38:20.754543] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:32.842 [2024-09-28 23:38:20.754552] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:32.842 [2024-09-28 23:38:20.754557] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:32.842 [2024-09-28 23:38:20.754564] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:32.842 [2024-09-28 23:38:20.754569] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:32.842 [2024-09-28 23:38:20.754576] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:32.842 [2024-09-28 23:38:20.754584] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:32.842 [2024-09-28 23:38:20.754592] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:32.842 [2024-09-28 23:38:20.754599] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:17:32.842 [2024-09-28 23:38:20.754606] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:17:32.842 [2024-09-28 23:38:20.754612] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:17:32.842 [2024-09-28 23:38:20.754619] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:17:32.842 [2024-09-28 23:38:20.754624] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:17:32.842 [2024-09-28 23:38:20.754631] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 
blk_offs:0x6120 blk_sz:0x800 00:17:32.842 [2024-09-28 23:38:20.754636] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:17:32.842 [2024-09-28 23:38:20.754642] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:17:32.842 [2024-09-28 23:38:20.754648] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:17:32.842 [2024-09-28 23:38:20.754656] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:17:32.842 [2024-09-28 23:38:20.754661] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:17:32.842 [2024-09-28 23:38:20.754668] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:17:32.842 [2024-09-28 23:38:20.754673] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:17:32.842 [2024-09-28 23:38:20.754680] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:17:32.842 [2024-09-28 23:38:20.754685] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:32.842 [2024-09-28 23:38:20.754693] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:32.842 [2024-09-28 23:38:20.754699] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:32.842 [2024-09-28 23:38:20.754706] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:32.842 [2024-09-28 23:38:20.754711] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:32.842 [2024-09-28 23:38:20.754718] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:32.842 [2024-09-28 23:38:20.754724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.842 [2024-09-28 23:38:20.754731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:32.842 [2024-09-28 23:38:20.754737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.543 ms 00:17:32.842 [2024-09-28 23:38:20.754744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.842 [2024-09-28 23:38:20.754788] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
00:17:32.842 [2024-09-28 23:38:20.754799] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:17:35.372 [2024-09-28 23:38:23.228101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.372 [2024-09-28 23:38:23.228165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:17:35.372 [2024-09-28 23:38:23.228179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2473.303 ms 00:17:35.372 [2024-09-28 23:38:23.228190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.372 [2024-09-28 23:38:23.253439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.372 [2024-09-28 23:38:23.253487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:35.372 [2024-09-28 23:38:23.253499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.915 ms 00:17:35.372 [2024-09-28 23:38:23.253530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.372 [2024-09-28 23:38:23.253658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.372 [2024-09-28 23:38:23.253670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:35.372 [2024-09-28 23:38:23.253679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:17:35.372 [2024-09-28 23:38:23.253692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.372 [2024-09-28 23:38:23.292372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.372 [2024-09-28 23:38:23.292432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:35.372 [2024-09-28 23:38:23.292454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.627 ms 00:17:35.372 [2024-09-28 23:38:23.292470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.372 [2024-09-28 23:38:23.292536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.372 [2024-09-28 23:38:23.292566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:35.372 [2024-09-28 23:38:23.292580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:35.372 [2024-09-28 23:38:23.292601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.372 [2024-09-28 23:38:23.293015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.372 [2024-09-28 23:38:23.293064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:35.372 [2024-09-28 23:38:23.293079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.337 ms 00:17:35.372 [2024-09-28 23:38:23.293096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.372 [2024-09-28 23:38:23.293253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.372 [2024-09-28 23:38:23.293268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:35.372 [2024-09-28 23:38:23.293280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.128 ms 00:17:35.372 [2024-09-28 23:38:23.293297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.372 [2024-09-28 23:38:23.309550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.372 [2024-09-28 23:38:23.309583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:35.372 [2024-09-28 
23:38:23.309593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.229 ms 00:17:35.372 [2024-09-28 23:38:23.309602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.373 [2024-09-28 23:38:23.320744] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:17:35.373 [2024-09-28 23:38:23.323309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.373 [2024-09-28 23:38:23.323340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:35.373 [2024-09-28 23:38:23.323354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.636 ms 00:17:35.373 [2024-09-28 23:38:23.323362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.373 [2024-09-28 23:38:23.390038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.373 [2024-09-28 23:38:23.390087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:17:35.373 [2024-09-28 23:38:23.390105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 66.647 ms 00:17:35.373 [2024-09-28 23:38:23.390113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.373 [2024-09-28 23:38:23.390291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.373 [2024-09-28 23:38:23.390302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:35.373 [2024-09-28 23:38:23.390314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.138 ms 00:17:35.373 [2024-09-28 23:38:23.390321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.373 [2024-09-28 23:38:23.413693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.373 [2024-09-28 23:38:23.413732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:17:35.373 [2024-09-28 23:38:23.413745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.318 ms 00:17:35.373 [2024-09-28 23:38:23.413753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.373 [2024-09-28 23:38:23.435824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.373 [2024-09-28 23:38:23.435857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:17:35.373 [2024-09-28 23:38:23.435870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.032 ms 00:17:35.373 [2024-09-28 23:38:23.435878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.373 [2024-09-28 23:38:23.436443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.373 [2024-09-28 23:38:23.436464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:35.373 [2024-09-28 23:38:23.436475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.531 ms 00:17:35.373 [2024-09-28 23:38:23.436483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.373 [2024-09-28 23:38:23.504698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.373 [2024-09-28 23:38:23.504735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:17:35.373 [2024-09-28 23:38:23.504751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 68.169 ms 00:17:35.373 [2024-09-28 23:38:23.504762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.373 [2024-09-28 
23:38:23.528482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.373 [2024-09-28 23:38:23.528526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:17:35.373 [2024-09-28 23:38:23.528540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.656 ms 00:17:35.373 [2024-09-28 23:38:23.528548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.632 [2024-09-28 23:38:23.551162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.632 [2024-09-28 23:38:23.551196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:17:35.632 [2024-09-28 23:38:23.551209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.587 ms 00:17:35.632 [2024-09-28 23:38:23.551216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.632 [2024-09-28 23:38:23.574241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.632 [2024-09-28 23:38:23.574276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:35.632 [2024-09-28 23:38:23.574289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.000 ms 00:17:35.632 [2024-09-28 23:38:23.574296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.632 [2024-09-28 23:38:23.574322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.632 [2024-09-28 23:38:23.574330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:35.632 [2024-09-28 23:38:23.574344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:35.632 [2024-09-28 23:38:23.574352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.632 [2024-09-28 23:38:23.574437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.632 [2024-09-28 23:38:23.574447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:35.632 [2024-09-28 23:38:23.574456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:17:35.632 [2024-09-28 23:38:23.574464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.632 [2024-09-28 23:38:23.575298] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2830.846 ms, result 0 00:17:35.632 { 00:17:35.632 "name": "ftl0", 00:17:35.632 "uuid": "a7631990-80f8-4fde-90d3-f206050c63ad" 00:17:35.632 } 00:17:35.632 23:38:23 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:17:35.632 23:38:23 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:17:35.891 23:38:23 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:17:35.891 23:38:23 ftl.ftl_restore -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:17:35.891 [2024-09-28 23:38:23.990971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.891 [2024-09-28 23:38:23.991024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:35.891 [2024-09-28 23:38:23.991037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:35.891 [2024-09-28 23:38:23.991047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.891 [2024-09-28 23:38:23.991070] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 
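The unload trace continues below. Just above it, restore.sh captured the bdev subsystem configuration (script lines @61 to @63) before tearing ftl0 down; that JSON envelope is what a later invocation restores from. A sketch of the idiom, assuming the same ftl.json path the trim test's cleanup removed earlier:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    cfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json

    # Wrap the bdev subsystem dump in a full {"subsystems": [...]} document so
    # spdk_tgt can consume it as a JSON config on a later start.
    {
        echo '{"subsystems": ['
        $rpc save_subsystem_config -n bdev
        echo ']}'
    } > "$cfg"

    # Unloading persists FTL state (the trace below shows L2P, NV cache
    # metadata, and superblock being persisted) so the next startup can
    # restore from it cleanly.
    $rpc bdev_ftl_unload -b ftl0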
00:17:35.891 [2024-09-28 23:38:23.993659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.891 [2024-09-28 23:38:23.993689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:35.891 [2024-09-28 23:38:23.993708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.572 ms 00:17:35.891 [2024-09-28 23:38:23.993716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.891 [2024-09-28 23:38:23.993974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.891 [2024-09-28 23:38:23.993991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:35.891 [2024-09-28 23:38:23.994001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.230 ms 00:17:35.891 [2024-09-28 23:38:23.994009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.891 [2024-09-28 23:38:23.997236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.891 [2024-09-28 23:38:23.997258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:35.891 [2024-09-28 23:38:23.997269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.210 ms 00:17:35.891 [2024-09-28 23:38:23.997280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.891 [2024-09-28 23:38:24.003513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.891 [2024-09-28 23:38:24.003543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:35.891 [2024-09-28 23:38:24.003555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.209 ms 00:17:35.891 [2024-09-28 23:38:24.003563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.891 [2024-09-28 23:38:24.027406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.891 [2024-09-28 23:38:24.027439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:35.891 [2024-09-28 23:38:24.027452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.775 ms 00:17:35.891 [2024-09-28 23:38:24.027460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.891 [2024-09-28 23:38:24.041817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.891 [2024-09-28 23:38:24.041854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:35.891 [2024-09-28 23:38:24.041867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.318 ms 00:17:35.891 [2024-09-28 23:38:24.041875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.891 [2024-09-28 23:38:24.042022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.891 [2024-09-28 23:38:24.042035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:35.891 [2024-09-28 23:38:24.042045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.108 ms 00:17:35.891 [2024-09-28 23:38:24.042052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.151 [2024-09-28 23:38:24.064810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.151 [2024-09-28 23:38:24.064843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:36.151 [2024-09-28 23:38:24.064855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.739 ms 00:17:36.151 [2024-09-28 23:38:24.064862] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.151 [2024-09-28 23:38:24.087255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.151 [2024-09-28 23:38:24.087287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:36.151 [2024-09-28 23:38:24.087299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.356 ms 00:17:36.151 [2024-09-28 23:38:24.087307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.151 [2024-09-28 23:38:24.109359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.151 [2024-09-28 23:38:24.109392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:36.151 [2024-09-28 23:38:24.109403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.015 ms 00:17:36.151 [2024-09-28 23:38:24.109411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.151 [2024-09-28 23:38:24.131700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.151 [2024-09-28 23:38:24.131731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:36.151 [2024-09-28 23:38:24.131743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.218 ms 00:17:36.151 [2024-09-28 23:38:24.131750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.151 [2024-09-28 23:38:24.131784] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:36.151 [2024-09-28 23:38:24.131798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:36.151 [2024-09-28 23:38:24.131809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:36.151 [2024-09-28 23:38:24.131818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:36.151 [2024-09-28 23:38:24.131827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:36.151 [2024-09-28 23:38:24.131834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:36.151 [2024-09-28 23:38:24.131844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:36.151 [2024-09-28 23:38:24.131851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:36.151 [2024-09-28 23:38:24.131862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:36.151 [2024-09-28 23:38:24.131870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:36.151 [2024-09-28 23:38:24.131879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:36.151 [2024-09-28 23:38:24.131886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:36.151 [2024-09-28 23:38:24.131895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:36.151 [2024-09-28 23:38:24.131903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:36.151 [2024-09-28 23:38:24.131912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:36.151 [2024-09-28 
23:38:24.131919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:36.151 [2024-09-28 23:38:24.131928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:36.151 [2024-09-28 23:38:24.131936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:36.151 [2024-09-28 23:38:24.131945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:36.151 [2024-09-28 23:38:24.131952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:36.151 [2024-09-28 23:38:24.131962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:36.151 [2024-09-28 23:38:24.131969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:36.151 [2024-09-28 23:38:24.131979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:36.151 [2024-09-28 23:38:24.131987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:36.151 [2024-09-28 23:38:24.131997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:36.151 [2024-09-28 23:38:24.132005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:36.151 [2024-09-28 23:38:24.132013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:36.151 [2024-09-28 23:38:24.132021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:36.151 [2024-09-28 23:38:24.132030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:36.151 [2024-09-28 23:38:24.132037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:36.151 [2024-09-28 23:38:24.132046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:36.151 [2024-09-28 23:38:24.132054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:36.151 [2024-09-28 23:38:24.132065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:36.151 [2024-09-28 23:38:24.132073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:36.151 [2024-09-28 23:38:24.132082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:36.151 [2024-09-28 23:38:24.132090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:36.151 [2024-09-28 23:38:24.132098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:36.151 [2024-09-28 23:38:24.132106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:36.151 [2024-09-28 23:38:24.132115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:36.151 [2024-09-28 23:38:24.132122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 
00:17:36.151 [2024-09-28 23:38:24.132133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:36.151 [2024-09-28 23:38:24.132140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:36.151 [2024-09-28 23:38:24.132149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:36.151 [2024-09-28 23:38:24.132156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:36.151 [2024-09-28 23:38:24.132165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:36.151 [2024-09-28 23:38:24.132172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:36.151 [2024-09-28 23:38:24.132181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:36.151 [2024-09-28 23:38:24.132189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:36.151 [2024-09-28 23:38:24.132200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:36.151 [2024-09-28 23:38:24.132207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:36.151 [2024-09-28 23:38:24.132217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:36.151 [2024-09-28 23:38:24.132224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:36.151 [2024-09-28 23:38:24.132233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:36.151 [2024-09-28 23:38:24.132241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:36.151 [2024-09-28 23:38:24.132250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:36.151 [2024-09-28 23:38:24.132257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:36.151 [2024-09-28 23:38:24.132267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:36.151 [2024-09-28 23:38:24.132275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:36.151 [2024-09-28 23:38:24.132283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:36.151 [2024-09-28 23:38:24.132291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:36.151 [2024-09-28 23:38:24.132300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:36.152 [2024-09-28 23:38:24.132307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:36.152 [2024-09-28 23:38:24.132322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:36.152 [2024-09-28 23:38:24.132329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:36.152 [2024-09-28 23:38:24.132338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 
wr_cnt: 0 state: free 00:17:36.152 [2024-09-28 23:38:24.132346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:36.152 [2024-09-28 23:38:24.132355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:36.152 [2024-09-28 23:38:24.132362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:36.152 [2024-09-28 23:38:24.132371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:36.152 [2024-09-28 23:38:24.132378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:36.152 [2024-09-28 23:38:24.132387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:36.152 [2024-09-28 23:38:24.132395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:36.152 [2024-09-28 23:38:24.132406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:36.152 [2024-09-28 23:38:24.132414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:36.152 [2024-09-28 23:38:24.132423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:36.152 [2024-09-28 23:38:24.132430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:36.152 [2024-09-28 23:38:24.132439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:36.152 [2024-09-28 23:38:24.132446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:36.152 [2024-09-28 23:38:24.132456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:36.152 [2024-09-28 23:38:24.132464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:36.152 [2024-09-28 23:38:24.132473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:36.152 [2024-09-28 23:38:24.132480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:36.152 [2024-09-28 23:38:24.132489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:36.152 [2024-09-28 23:38:24.132496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:36.152 [2024-09-28 23:38:24.132505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:36.152 [2024-09-28 23:38:24.132523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:36.152 [2024-09-28 23:38:24.132531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:36.152 [2024-09-28 23:38:24.132539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:36.152 [2024-09-28 23:38:24.132549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:36.152 [2024-09-28 23:38:24.132556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:36.152 [2024-09-28 23:38:24.132565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:36.152 [2024-09-28 23:38:24.132573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:36.152 [2024-09-28 23:38:24.132582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:36.152 [2024-09-28 23:38:24.132589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:36.152 [2024-09-28 23:38:24.132598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:36.152 [2024-09-28 23:38:24.132606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:36.152 [2024-09-28 23:38:24.132615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:36.152 [2024-09-28 23:38:24.132623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:36.152 [2024-09-28 23:38:24.132632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:36.152 [2024-09-28 23:38:24.132640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:36.152 [2024-09-28 23:38:24.132650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:36.152 [2024-09-28 23:38:24.132666] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:36.152 [2024-09-28 23:38:24.132677] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a7631990-80f8-4fde-90d3-f206050c63ad 00:17:36.152 [2024-09-28 23:38:24.132685] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:36.152 [2024-09-28 23:38:24.132694] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:36.152 [2024-09-28 23:38:24.132702] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:36.152 [2024-09-28 23:38:24.132710] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:36.152 [2024-09-28 23:38:24.132717] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:36.152 [2024-09-28 23:38:24.132726] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:36.152 [2024-09-28 23:38:24.132735] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:36.152 [2024-09-28 23:38:24.132743] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:36.152 [2024-09-28 23:38:24.132750] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:36.152 [2024-09-28 23:38:24.132759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.152 [2024-09-28 23:38:24.132766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:36.152 [2024-09-28 23:38:24.132776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.976 ms 00:17:36.152 [2024-09-28 23:38:24.132783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.152 [2024-09-28 23:38:24.145075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.152 [2024-09-28 23:38:24.145105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 
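The statistics block above accounts for its own "WAF: inf": the figures printed are 960 total writes against 0 user writes, and write amplification computed as total over user writes is infinite when the denominator is zero. The bands dump likewise fixes the geometry at 100 bands of 261120 blocks each. A hypothetical cross-check of both numbers (not part of the test scripts), assuming the 4 KiB FTL block size implied by the layout dumps in this log, where the 0x20-block superblock region is reported as 0.12 MiB:

  # Reproduce "WAF: inf" from the stats dump above.
  total_writes=960       # "total writes: 960"
  user_writes=0          # "user writes: 0"
  awk -v t="$total_writes" -v u="$user_writes" \
      'BEGIN { print "WAF:", (u > 0 ? t / u : "inf") }'
  # Band capacity under the assumed 4 KiB block size:
  echo $(( 261120 * 4096 / 1048576 ))   # -> 1020 (MiB per band)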
00:17:36.152 [2024-09-28 23:38:24.145117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.261 ms 00:17:36.152 [2024-09-28 23:38:24.145124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.152 [2024-09-28 23:38:24.145477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.152 [2024-09-28 23:38:24.145497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:36.152 [2024-09-28 23:38:24.145522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.314 ms 00:17:36.152 [2024-09-28 23:38:24.145530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.152 [2024-09-28 23:38:24.182297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:36.152 [2024-09-28 23:38:24.182333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:36.152 [2024-09-28 23:38:24.182344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:36.152 [2024-09-28 23:38:24.182354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.152 [2024-09-28 23:38:24.182419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:36.152 [2024-09-28 23:38:24.182428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:36.152 [2024-09-28 23:38:24.182437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:36.152 [2024-09-28 23:38:24.182444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.152 [2024-09-28 23:38:24.182543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:36.152 [2024-09-28 23:38:24.182554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:36.152 [2024-09-28 23:38:24.182563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:36.152 [2024-09-28 23:38:24.182570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.152 [2024-09-28 23:38:24.182593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:36.152 [2024-09-28 23:38:24.182600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:36.152 [2024-09-28 23:38:24.182609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:36.152 [2024-09-28 23:38:24.182616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.152 [2024-09-28 23:38:24.259597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:36.152 [2024-09-28 23:38:24.259645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:36.152 [2024-09-28 23:38:24.259657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:36.152 [2024-09-28 23:38:24.259665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.412 [2024-09-28 23:38:24.322537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:36.412 [2024-09-28 23:38:24.322581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:36.412 [2024-09-28 23:38:24.322593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:36.412 [2024-09-28 23:38:24.322602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.412 [2024-09-28 23:38:24.322676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:36.412 [2024-09-28 23:38:24.322685] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:36.412 [2024-09-28 23:38:24.322695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:36.412 [2024-09-28 23:38:24.322702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.412 [2024-09-28 23:38:24.322763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:36.412 [2024-09-28 23:38:24.322774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:36.412 [2024-09-28 23:38:24.322783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:36.412 [2024-09-28 23:38:24.322791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.412 [2024-09-28 23:38:24.322879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:36.412 [2024-09-28 23:38:24.322889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:36.412 [2024-09-28 23:38:24.322898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:36.412 [2024-09-28 23:38:24.322905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.412 [2024-09-28 23:38:24.322935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:36.412 [2024-09-28 23:38:24.322945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:36.412 [2024-09-28 23:38:24.322956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:36.412 [2024-09-28 23:38:24.322963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.412 [2024-09-28 23:38:24.322998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:36.412 [2024-09-28 23:38:24.323007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:36.412 [2024-09-28 23:38:24.323016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:36.412 [2024-09-28 23:38:24.323023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.412 [2024-09-28 23:38:24.323066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:36.412 [2024-09-28 23:38:24.323077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:36.412 [2024-09-28 23:38:24.323086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:36.412 [2024-09-28 23:38:24.323093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.412 [2024-09-28 23:38:24.323211] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 332.212 ms, result 0 00:17:36.412 true 00:17:36.412 23:38:24 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 74726 00:17:36.412 23:38:24 ftl.ftl_restore -- common/autotest_common.sh@950 -- # '[' -z 74726 ']' 00:17:36.412 23:38:24 ftl.ftl_restore -- common/autotest_common.sh@954 -- # kill -0 74726 00:17:36.412 23:38:24 ftl.ftl_restore -- common/autotest_common.sh@955 -- # uname 00:17:36.412 23:38:24 ftl.ftl_restore -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:36.412 23:38:24 ftl.ftl_restore -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74726 00:17:36.412 23:38:24 ftl.ftl_restore -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:36.412 23:38:24 ftl.ftl_restore -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:36.412 killing process with pid 
74726 00:17:36.412 23:38:24 ftl.ftl_restore -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74726' 00:17:36.412 23:38:24 ftl.ftl_restore -- common/autotest_common.sh@969 -- # kill 74726 00:17:36.412 23:38:24 ftl.ftl_restore -- common/autotest_common.sh@974 -- # wait 74726 00:17:43.010 23:38:30 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:17:47.189 262144+0 records in 00:17:47.189 262144+0 records out 00:17:47.189 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.79567 s, 283 MB/s 00:17:47.189 23:38:34 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:17:48.560 23:38:36 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:48.560 [2024-09-28 23:38:36.537424] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:17:48.560 [2024-09-28 23:38:36.537530] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74944 ] 00:17:48.560 [2024-09-28 23:38:36.682468] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:48.817 [2024-09-28 23:38:36.855469] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:49.075 [2024-09-28 23:38:37.102420] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:49.075 [2024-09-28 23:38:37.102480] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:49.334 [2024-09-28 23:38:37.255785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.334 [2024-09-28 23:38:37.255829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:49.334 [2024-09-28 23:38:37.255842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:49.335 [2024-09-28 23:38:37.255853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.335 [2024-09-28 23:38:37.255894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.335 [2024-09-28 23:38:37.255904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:49.335 [2024-09-28 23:38:37.255912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:17:49.335 [2024-09-28 23:38:37.255919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.335 [2024-09-28 23:38:37.255934] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:49.335 [2024-09-28 23:38:37.256575] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:49.335 [2024-09-28 23:38:37.256597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.335 [2024-09-28 23:38:37.256605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:49.335 [2024-09-28 23:38:37.256613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.667 ms 00:17:49.335 [2024-09-28 23:38:37.256620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.335 [2024-09-28 23:38:37.257604] mngt/ftl_mngt_md.c: 
455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:49.335 [2024-09-28 23:38:37.269578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.335 [2024-09-28 23:38:37.269611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:49.335 [2024-09-28 23:38:37.269623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.975 ms 00:17:49.335 [2024-09-28 23:38:37.269630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.335 [2024-09-28 23:38:37.269678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.335 [2024-09-28 23:38:37.269687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:49.335 [2024-09-28 23:38:37.269695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:17:49.335 [2024-09-28 23:38:37.269702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.335 [2024-09-28 23:38:37.274166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.335 [2024-09-28 23:38:37.274195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:49.335 [2024-09-28 23:38:37.274204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.408 ms 00:17:49.335 [2024-09-28 23:38:37.274211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.335 [2024-09-28 23:38:37.274277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.335 [2024-09-28 23:38:37.274285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:49.335 [2024-09-28 23:38:37.274293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:17:49.335 [2024-09-28 23:38:37.274300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.335 [2024-09-28 23:38:37.274342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.335 [2024-09-28 23:38:37.274351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:49.335 [2024-09-28 23:38:37.274358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:49.335 [2024-09-28 23:38:37.274366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.335 [2024-09-28 23:38:37.274385] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:49.335 [2024-09-28 23:38:37.277715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.335 [2024-09-28 23:38:37.277743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:49.335 [2024-09-28 23:38:37.277752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.334 ms 00:17:49.335 [2024-09-28 23:38:37.277759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.335 [2024-09-28 23:38:37.277785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.335 [2024-09-28 23:38:37.277794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:49.335 [2024-09-28 23:38:37.277801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:49.335 [2024-09-28 23:38:37.277808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.335 [2024-09-28 23:38:37.277829] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:49.335 [2024-09-28 23:38:37.277846] 
upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:49.335 [2024-09-28 23:38:37.277879] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:49.335 [2024-09-28 23:38:37.277893] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:49.335 [2024-09-28 23:38:37.277993] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:49.335 [2024-09-28 23:38:37.278004] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:49.335 [2024-09-28 23:38:37.278015] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:49.335 [2024-09-28 23:38:37.278026] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:49.335 [2024-09-28 23:38:37.278035] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:49.335 [2024-09-28 23:38:37.278043] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:17:49.335 [2024-09-28 23:38:37.278051] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:49.335 [2024-09-28 23:38:37.278058] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:49.335 [2024-09-28 23:38:37.278065] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:49.335 [2024-09-28 23:38:37.278072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.335 [2024-09-28 23:38:37.278080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:49.335 [2024-09-28 23:38:37.278087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.245 ms 00:17:49.335 [2024-09-28 23:38:37.278094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.335 [2024-09-28 23:38:37.278175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.335 [2024-09-28 23:38:37.278185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:49.335 [2024-09-28 23:38:37.278192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:17:49.335 [2024-09-28 23:38:37.278199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.335 [2024-09-28 23:38:37.278297] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:49.335 [2024-09-28 23:38:37.278314] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:49.335 [2024-09-28 23:38:37.278322] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:49.335 [2024-09-28 23:38:37.278330] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:49.335 [2024-09-28 23:38:37.278338] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:49.335 [2024-09-28 23:38:37.278344] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:49.335 [2024-09-28 23:38:37.278351] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:17:49.335 [2024-09-28 23:38:37.278357] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:49.335 [2024-09-28 23:38:37.278364] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 
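The 80.00 MiB l2p region just printed follows directly from two parameters in the layout summary above: 20971520 L2P entries at an address size of 4 bytes. A one-line check of that arithmetic:

  # 20971520 entries x 4-byte addresses = 80 MiB, matching "Region l2p ... blocks: 80.00 MiB".
  # The offsets are cumulative: sb (0.12 MiB) + l2p (80.00 MiB) places band_md at 80.12 MiB,
  # exactly as dumped above.
  echo $(( 20971520 * 4 / 1048576 ))   # -> 80 (MiB)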
00:17:49.335 [2024-09-28 23:38:37.278371] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:49.335 [2024-09-28 23:38:37.278378] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:49.335 [2024-09-28 23:38:37.278384] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:17:49.335 [2024-09-28 23:38:37.278399] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:49.335 [2024-09-28 23:38:37.278410] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:49.335 [2024-09-28 23:38:37.278418] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:17:49.335 [2024-09-28 23:38:37.278425] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:49.335 [2024-09-28 23:38:37.278431] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:49.335 [2024-09-28 23:38:37.278438] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:17:49.335 [2024-09-28 23:38:37.278444] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:49.335 [2024-09-28 23:38:37.278451] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:49.335 [2024-09-28 23:38:37.278459] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:17:49.335 [2024-09-28 23:38:37.278465] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:49.335 [2024-09-28 23:38:37.278471] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:49.335 [2024-09-28 23:38:37.278478] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:17:49.335 [2024-09-28 23:38:37.278484] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:49.335 [2024-09-28 23:38:37.278491] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:49.335 [2024-09-28 23:38:37.278497] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:17:49.335 [2024-09-28 23:38:37.278504] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:49.335 [2024-09-28 23:38:37.278521] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:49.335 [2024-09-28 23:38:37.278528] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:17:49.335 [2024-09-28 23:38:37.278534] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:49.335 [2024-09-28 23:38:37.278541] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:49.335 [2024-09-28 23:38:37.278547] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:17:49.335 [2024-09-28 23:38:37.278553] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:49.335 [2024-09-28 23:38:37.278560] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:49.335 [2024-09-28 23:38:37.278567] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:17:49.335 [2024-09-28 23:38:37.278573] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:49.335 [2024-09-28 23:38:37.278580] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:49.335 [2024-09-28 23:38:37.278586] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:17:49.335 [2024-09-28 23:38:37.278592] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:49.335 [2024-09-28 23:38:37.278599] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:49.335 [2024-09-28 23:38:37.278606] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:17:49.335 [2024-09-28 23:38:37.278612] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:49.335 [2024-09-28 23:38:37.278618] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:49.336 [2024-09-28 23:38:37.278626] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:49.336 [2024-09-28 23:38:37.278635] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:49.336 [2024-09-28 23:38:37.278643] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:49.336 [2024-09-28 23:38:37.278651] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:49.336 [2024-09-28 23:38:37.278657] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:49.336 [2024-09-28 23:38:37.278664] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:49.336 [2024-09-28 23:38:37.278670] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:49.336 [2024-09-28 23:38:37.278677] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:49.336 [2024-09-28 23:38:37.278684] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:49.336 [2024-09-28 23:38:37.278692] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:49.336 [2024-09-28 23:38:37.278700] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:49.336 [2024-09-28 23:38:37.278708] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:17:49.336 [2024-09-28 23:38:37.278715] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:17:49.336 [2024-09-28 23:38:37.278722] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:17:49.336 [2024-09-28 23:38:37.278729] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:17:49.336 [2024-09-28 23:38:37.278736] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:17:49.336 [2024-09-28 23:38:37.278743] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:17:49.336 [2024-09-28 23:38:37.278750] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:17:49.336 [2024-09-28 23:38:37.278757] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:17:49.336 [2024-09-28 23:38:37.278764] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:17:49.336 [2024-09-28 23:38:37.278771] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:17:49.336 [2024-09-28 23:38:37.278778] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:17:49.336 [2024-09-28 23:38:37.278785] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:17:49.336 [2024-09-28 23:38:37.278792] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:17:49.336 [2024-09-28 23:38:37.278799] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:17:49.336 [2024-09-28 23:38:37.278806] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:49.336 [2024-09-28 23:38:37.278815] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:49.336 [2024-09-28 23:38:37.278822] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:49.336 [2024-09-28 23:38:37.278829] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:49.336 [2024-09-28 23:38:37.278836] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:49.336 [2024-09-28 23:38:37.278844] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:49.336 [2024-09-28 23:38:37.278851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.336 [2024-09-28 23:38:37.278858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:49.336 [2024-09-28 23:38:37.278865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.621 ms 00:17:49.336 [2024-09-28 23:38:37.278872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.336 [2024-09-28 23:38:37.312605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.336 [2024-09-28 23:38:37.312645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:49.336 [2024-09-28 23:38:37.312656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.678 ms 00:17:49.336 [2024-09-28 23:38:37.312664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.336 [2024-09-28 23:38:37.312753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.336 [2024-09-28 23:38:37.312762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:49.336 [2024-09-28 23:38:37.312770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:17:49.336 [2024-09-28 23:38:37.312777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.336 [2024-09-28 23:38:37.342643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.336 [2024-09-28 23:38:37.342676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:49.336 [2024-09-28 23:38:37.342688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.813 ms 00:17:49.336 [2024-09-28 23:38:37.342696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.336 [2024-09-28 23:38:37.342725] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.336 [2024-09-28 23:38:37.342733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:49.336 [2024-09-28 23:38:37.342741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:17:49.336 [2024-09-28 23:38:37.342748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.336 [2024-09-28 23:38:37.343070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.336 [2024-09-28 23:38:37.343098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:49.336 [2024-09-28 23:38:37.343107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.281 ms 00:17:49.336 [2024-09-28 23:38:37.343117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.336 [2024-09-28 23:38:37.343233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.336 [2024-09-28 23:38:37.343248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:49.336 [2024-09-28 23:38:37.343256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:17:49.336 [2024-09-28 23:38:37.343264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.336 [2024-09-28 23:38:37.355338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.336 [2024-09-28 23:38:37.355368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:49.336 [2024-09-28 23:38:37.355378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.056 ms 00:17:49.336 [2024-09-28 23:38:37.355385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.336 [2024-09-28 23:38:37.367665] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:17:49.336 [2024-09-28 23:38:37.367698] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:49.336 [2024-09-28 23:38:37.367709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.336 [2024-09-28 23:38:37.367716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:49.336 [2024-09-28 23:38:37.367724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.223 ms 00:17:49.336 [2024-09-28 23:38:37.367732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.336 [2024-09-28 23:38:37.391513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.336 [2024-09-28 23:38:37.391546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:49.336 [2024-09-28 23:38:37.391556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.741 ms 00:17:49.336 [2024-09-28 23:38:37.391564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.336 [2024-09-28 23:38:37.403227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.336 [2024-09-28 23:38:37.403258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:49.336 [2024-09-28 23:38:37.403267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.643 ms 00:17:49.336 [2024-09-28 23:38:37.403274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.336 [2024-09-28 23:38:37.414144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.336 [2024-09-28 
23:38:37.414174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:49.336 [2024-09-28 23:38:37.414184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.842 ms 00:17:49.336 [2024-09-28 23:38:37.414191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.336 [2024-09-28 23:38:37.414801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.336 [2024-09-28 23:38:37.414826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:49.336 [2024-09-28 23:38:37.414835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.536 ms 00:17:49.336 [2024-09-28 23:38:37.414842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.336 [2024-09-28 23:38:37.469159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.336 [2024-09-28 23:38:37.469206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:49.336 [2024-09-28 23:38:37.469219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.300 ms 00:17:49.336 [2024-09-28 23:38:37.469226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.336 [2024-09-28 23:38:37.479256] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:17:49.336 [2024-09-28 23:38:37.481348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.336 [2024-09-28 23:38:37.481376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:49.336 [2024-09-28 23:38:37.481387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.082 ms 00:17:49.336 [2024-09-28 23:38:37.481396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.336 [2024-09-28 23:38:37.481476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.336 [2024-09-28 23:38:37.481487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:49.336 [2024-09-28 23:38:37.481497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:49.336 [2024-09-28 23:38:37.481505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.336 [2024-09-28 23:38:37.481581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.336 [2024-09-28 23:38:37.481592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:49.336 [2024-09-28 23:38:37.481601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:17:49.337 [2024-09-28 23:38:37.481609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.337 [2024-09-28 23:38:37.481628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.337 [2024-09-28 23:38:37.481640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:49.337 [2024-09-28 23:38:37.481647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:49.337 [2024-09-28 23:38:37.481654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.337 [2024-09-28 23:38:37.481683] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:49.337 [2024-09-28 23:38:37.481692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.337 [2024-09-28 23:38:37.481700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:49.337 
[2024-09-28 23:38:37.481707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:49.337 [2024-09-28 23:38:37.481717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.594 [2024-09-28 23:38:37.504296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.594 [2024-09-28 23:38:37.504329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:49.594 [2024-09-28 23:38:37.504340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.563 ms 00:17:49.594 [2024-09-28 23:38:37.504348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.594 [2024-09-28 23:38:37.504412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.594 [2024-09-28 23:38:37.504421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:49.594 [2024-09-28 23:38:37.504429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:17:49.594 [2024-09-28 23:38:37.504436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.594 [2024-09-28 23:38:37.505326] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 249.148 ms, result 0 00:18:17.774  Copying: 45/1024 [MB] (45 MBps) Copying: 92/1024 [MB] (46 MBps) Copying: 133/1024 [MB] (41 MBps) Copying: 160/1024 [MB] (27 MBps) Copying: 211/1024 [MB] (50 MBps) Copying: 236/1024 [MB] (25 MBps) Copying: 259/1024 [MB] (22 MBps) Copying: 282/1024 [MB] (23 MBps) Copying: 307/1024 [MB] (25 MBps) Copying: 331/1024 [MB] (23 MBps) Copying: 360/1024 [MB] (29 MBps) Copying: 386/1024 [MB] (25 MBps) Copying: 423/1024 [MB] (37 MBps) Copying: 448/1024 [MB] (25 MBps) Copying: 474/1024 [MB] (25 MBps) Copying: 497/1024 [MB] (23 MBps) Copying: 519/1024 [MB] (21 MBps) Copying: 553/1024 [MB] (33 MBps) Copying: 600/1024 [MB] (46 MBps) Copying: 646/1024 [MB] (46 MBps) Copying: 691/1024 [MB] (45 MBps) Copying: 737/1024 [MB] (45 MBps) Copying: 781/1024 [MB] (44 MBps) Copying: 827/1024 [MB] (45 MBps) Copying: 872/1024 [MB] (45 MBps) Copying: 919/1024 [MB] (46 MBps) Copying: 964/1024 [MB] (45 MBps) Copying: 1010/1024 [MB] (45 MBps) Copying: 1024/1024 [MB] (average 36 MBps)[2024-09-28 23:39:05.817295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.774 [2024-09-28 23:39:05.817344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:17.774 [2024-09-28 23:39:05.817358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:17.774 [2024-09-28 23:39:05.817366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.774 [2024-09-28 23:39:05.817389] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:17.774 [2024-09-28 23:39:05.820039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.774 [2024-09-28 23:39:05.820080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:17.774 [2024-09-28 23:39:05.820096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.635 ms 00:18:17.774 [2024-09-28 23:39:05.820110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.774 [2024-09-28 23:39:05.821423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.774 [2024-09-28 23:39:05.821473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:17.774 
[2024-09-28 23:39:05.821488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.284 ms
00:18:17.774 [2024-09-28 23:39:05.821500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:17.774 [2024-09-28 23:39:05.834123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:17.774 [2024-09-28 23:39:05.834167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:18:17.774 [2024-09-28 23:39:05.834181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.591 ms
00:18:17.774 [2024-09-28 23:39:05.834193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:17.774 [2024-09-28 23:39:05.840417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:17.774 [2024-09-28 23:39:05.840455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:18:17.774 [2024-09-28 23:39:05.840470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.150 ms
00:18:17.774 [2024-09-28 23:39:05.840484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:17.774 [2024-09-28 23:39:05.863822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:17.774 [2024-09-28 23:39:05.863858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:18:17.774 [2024-09-28 23:39:05.863872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.262 ms
00:18:17.774 [2024-09-28 23:39:05.863883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:17.774 [2024-09-28 23:39:05.877930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:17.774 [2024-09-28 23:39:05.877967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:18:17.774 [2024-09-28 23:39:05.877988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.965 ms
00:18:17.774 [2024-09-28 23:39:05.877999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:17.774 [2024-09-28 23:39:05.878150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:17.774 [2024-09-28 23:39:05.878173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:18:17.774 [2024-09-28 23:39:05.878186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.125 ms
00:18:17.774 [2024-09-28 23:39:05.878197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:17.774 [2024-09-28 23:39:05.901180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:17.774 [2024-09-28 23:39:05.901215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata
00:18:17.774 [2024-09-28 23:39:05.901229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.962 ms
00:18:17.774 [2024-09-28 23:39:05.901241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:17.774 [2024-09-28 23:39:05.923806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:17.774 [2024-09-28 23:39:05.923842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata
00:18:17.774 [2024-09-28 23:39:05.923856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.483 ms
00:18:17.774 [2024-09-28 23:39:05.923867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:18.034 [2024-09-28 23:39:05.946374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:18.034 [2024-09-28 23:39:05.946409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:18:18.034 [2024-09-28 23:39:05.946423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.427 ms
00:18:18.034 [2024-09-28 23:39:05.946434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:18.034 [2024-09-28 23:39:05.968407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:18.034 [2024-09-28 23:39:05.968443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:18:18.034 [2024-09-28 23:39:05.968457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.903 ms
00:18:18.034 [2024-09-28 23:39:05.968468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:18.034 [2024-09-28 23:39:05.968561] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:18:18.034 [2024-09-28 23:39:05.968582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free
00:18:18.034 [2024-09-28 23:39:05.968598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free
00:18:18.034 [2024-09-28 23:39:05.968617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free
00:18:18.034 [2024-09-28 23:39:05.968629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free
00:18:18.034 [2024-09-28 23:39:05.968642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free
00:18:18.034 [2024-09-28 23:39:05.968654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free
00:18:18.034 [2024-09-28 23:39:05.968667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free
00:18:18.034 [2024-09-28 23:39:05.968684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free
00:18:18.034 [2024-09-28 23:39:05.968697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free
00:18:18.034 [2024-09-28 23:39:05.968709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free
00:18:18.034 [2024-09-28 23:39:05.968722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free
00:18:18.034 [2024-09-28 23:39:05.968734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free
00:18:18.034 [2024-09-28 23:39:05.968746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free
00:18:18.034 [2024-09-28 23:39:05.968758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free
00:18:18.034 [2024-09-28 23:39:05.968770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free
00:18:18.034 [2024-09-28 23:39:05.968783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free
00:18:18.034 [2024-09-28 23:39:05.968800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free
00:18:18.034 [2024-09-28 23:39:05.968812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free
00:18:18.034 [2024-09-28 23:39:05.968825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free
00:18:18.034 [2024-09-28 23:39:05.968839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free
00:18:18.034 [2024-09-28 23:39:05.968852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free
00:18:18.034 [2024-09-28 23:39:05.968864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free
00:18:18.034 [2024-09-28 23:39:05.968877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free
00:18:18.034 [2024-09-28 23:39:05.968890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free
00:18:18.034 [2024-09-28 23:39:05.968906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free
00:18:18.034 [2024-09-28 23:39:05.968918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free
00:18:18.034 [2024-09-28 23:39:05.968931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free
00:18:18.034 [2024-09-28 23:39:05.968944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free
00:18:18.034 [2024-09-28 23:39:05.968956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free
00:18:18.034 [2024-09-28 23:39:05.968969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free
00:18:18.034 [2024-09-28 23:39:05.968981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free
00:18:18.034 [2024-09-28 23:39:05.968995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free
00:18:18.034 [2024-09-28 23:39:05.969006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free
00:18:18.034 [2024-09-28 23:39:05.969020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free
00:18:18.034 [2024-09-28 23:39:05.969032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free
00:18:18.034 [2024-09-28 23:39:05.969044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free
00:18:18.034 [2024-09-28 23:39:05.969056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free
00:18:18.034 [2024-09-28 23:39:05.969068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free
00:18:18.034 [2024-09-28 23:39:05.969081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free
00:18:18.034 [2024-09-28 23:39:05.969094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free
00:18:18.034 [2024-09-28 23:39:05.969112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free
00:18:18.034 [2024-09-28 23:39:05.969125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free
00:18:18.034 [2024-09-28 23:39:05.969138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free
00:18:18.034 [2024-09-28 23:39:05.969152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free
00:18:18.034 [2024-09-28 23:39:05.969169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free
00:18:18.034 [2024-09-28 23:39:05.969181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free
00:18:18.034 [2024-09-28 23:39:05.969197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free
00:18:18.034 [2024-09-28 23:39:05.969210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free
00:18:18.034 [2024-09-28 23:39:05.969223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free
00:18:18.034 [2024-09-28 23:39:05.969236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free
00:18:18.034 [2024-09-28 23:39:05.969248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free
00:18:18.034 [2024-09-28 23:39:05.969260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free
00:18:18.034 [2024-09-28 23:39:05.969272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free
00:18:18.034 [2024-09-28 23:39:05.969286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free
00:18:18.034 [2024-09-28 23:39:05.969298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free
00:18:18.034 [2024-09-28 23:39:05.969311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free
00:18:18.034 [2024-09-28 23:39:05.969324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free
00:18:18.034 [2024-09-28 23:39:05.969342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free
00:18:18.034 [2024-09-28 23:39:05.969354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free
00:18:18.034 [2024-09-28 23:39:05.969367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free
00:18:18.035 [2024-09-28 23:39:05.969384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free
00:18:18.035 [2024-09-28 23:39:05.969396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free
00:18:18.035 [2024-09-28 23:39:05.969409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free
00:18:18.035 [2024-09-28 23:39:05.969422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free
00:18:18.035 [2024-09-28 23:39:05.969438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free
00:18:18.035 [2024-09-28 23:39:05.969452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free
00:18:18.035 [2024-09-28 23:39:05.969465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free
00:18:18.035 [2024-09-28 23:39:05.969478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free
00:18:18.035 [2024-09-28 23:39:05.969489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free
00:18:18.035 [2024-09-28 23:39:05.969499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free
00:18:18.035 [2024-09-28 23:39:05.969521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free
00:18:18.035 [2024-09-28 23:39:05.969531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free
00:18:18.035 [2024-09-28 23:39:05.969541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free
00:18:18.035 [2024-09-28 23:39:05.969551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free
00:18:18.035 [2024-09-28 23:39:05.969562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free
00:18:18.035 [2024-09-28 23:39:05.969572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free
00:18:18.035 [2024-09-28 23:39:05.969582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free
00:18:18.035 [2024-09-28 23:39:05.969592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free
00:18:18.035 [2024-09-28 23:39:05.969602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free
00:18:18.035 [2024-09-28 23:39:05.969613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free
00:18:18.035 [2024-09-28 23:39:05.969626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free
00:18:18.035 [2024-09-28 23:39:05.969638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free
00:18:18.035 [2024-09-28 23:39:05.969645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free
00:18:18.035 [2024-09-28 23:39:05.969656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free
00:18:18.035 [2024-09-28 23:39:05.969664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free
00:18:18.035 [2024-09-28 23:39:05.969671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free
00:18:18.035 [2024-09-28 23:39:05.969678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free
00:18:18.035 [2024-09-28 23:39:05.969685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free
00:18:18.035 [2024-09-28 23:39:05.969692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free
00:18:18.035 [2024-09-28 23:39:05.969699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free
00:18:18.035 [2024-09-28 23:39:05.969706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free
00:18:18.035 [2024-09-28 23:39:05.969713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free
00:18:18.035 [2024-09-28 23:39:05.969720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free
00:18:18.035 [2024-09-28 23:39:05.969727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free
00:18:18.035 [2024-09-28 23:39:05.969734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free
00:18:18.035 [2024-09-28 23:39:05.969742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free
00:18:18.035 [2024-09-28 23:39:05.969749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free
00:18:18.035 [2024-09-28 23:39:05.969758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free
00:18:18.035 [2024-09-28 23:39:05.969766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free
00:18:18.035 [2024-09-28 23:39:05.969773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:18:18.035 [2024-09-28 23:39:05.969791] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:18:18.035 [2024-09-28 23:39:05.969799] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a7631990-80f8-4fde-90d3-f206050c63ad
00:18:18.035 [2024-09-28 23:39:05.969806] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:18:18.035 [2024-09-28 23:39:05.969813] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:18:18.035 [2024-09-28 23:39:05.969820] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:18:18.035 [2024-09-28 23:39:05.969827] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:18:18.035 [2024-09-28 23:39:05.969834] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:18:18.035 [2024-09-28 23:39:05.969841] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:18:18.035 [2024-09-28 23:39:05.969851] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:18:18.035 [2024-09-28 23:39:05.969857] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:18:18.035 [2024-09-28 23:39:05.969863] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:18:18.035 [2024-09-28 23:39:05.969877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:18.035 [2024-09-28 23:39:05.969885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:18:18.035 [2024-09-28 23:39:05.969898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.318 ms
00:18:18.035 [2024-09-28 23:39:05.969905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:18.035 [2024-09-28 23:39:05.981923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:18.035 [2024-09-28 23:39:05.981954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:18:18.035 [2024-09-28 23:39:05.981964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.000 ms
00:18:18.035 [2024-09-28 23:39:05.981972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:18.035 [2024-09-28 23:39:05.982322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:18.035 [2024-09-28 23:39:05.982344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:18:18.035 [2024-09-28 23:39:05.982353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.317 ms
00:18:18.035 [2024-09-28 23:39:05.982360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:18.035 [2024-09-28 23:39:06.010124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:18.035 [2024-09-28 23:39:06.010159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:18:18.035 [2024-09-28 23:39:06.010168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:18:18.035 [2024-09-28 23:39:06.010180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:18.035 [2024-09-28 23:39:06.010231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:18.035 [2024-09-28 23:39:06.010240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:18:18.035 [2024-09-28 23:39:06.010247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:18:18.035 [2024-09-28 23:39:06.010254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:18.035 [2024-09-28 23:39:06.010304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:18.035 [2024-09-28 23:39:06.010313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:18:18.035 [2024-09-28 23:39:06.010321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:18:18.035 [2024-09-28 23:39:06.010328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:18.035 [2024-09-28 23:39:06.010345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:18.035 [2024-09-28 23:39:06.010353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:18:18.035 [2024-09-28 23:39:06.010360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:18:18.035 [2024-09-28 23:39:06.010367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:18.035 [2024-09-28 23:39:06.086867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:18.035 [2024-09-28 23:39:06.086910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:18:18.035 [2024-09-28 23:39:06.086920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:18:18.035 [2024-09-28 23:39:06.086927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:18.035 [2024-09-28 23:39:06.150035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:18.035 [2024-09-28 23:39:06.150078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:18:18.035 [2024-09-28 23:39:06.150089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:18:18.035 [2024-09-28 23:39:06.150097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:18.035 [2024-09-28 23:39:06.150156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:18.035 [2024-09-28 23:39:06.150165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:18:18.035 [2024-09-28 23:39:06.150173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:18:18.035 [2024-09-28 23:39:06.150181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:18.035 [2024-09-28 23:39:06.150212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:18.035 [2024-09-28 23:39:06.150224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:18:18.035 [2024-09-28 23:39:06.150232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:18:18.035 [2024-09-28 23:39:06.150239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:18.035 [2024-09-28 23:39:06.150320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:18.035 [2024-09-28 23:39:06.150330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:18:18.035 [2024-09-28 23:39:06.150338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:18:18.035 [2024-09-28 23:39:06.150345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:18.035 [2024-09-28 23:39:06.150374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:18.035 [2024-09-28 23:39:06.150382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:18:18.035 [2024-09-28 23:39:06.150392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:18:18.035 [2024-09-28 23:39:06.150399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:18.036 [2024-09-28 23:39:06.150429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:18.036 [2024-09-28 23:39:06.150437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:18:18.036 [2024-09-28 23:39:06.150445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:18:18.036 [2024-09-28 23:39:06.150459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:18.036 [2024-09-28 23:39:06.150497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:18.036 [2024-09-28 23:39:06.150527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:18:18.036 [2024-09-28 23:39:06.150535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:18:18.036 [2024-09-28 23:39:06.150543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:18.036 [2024-09-28 23:39:06.150647] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 333.326 ms, result 0
00:18:19.937
00:18:19.937
00:18:19.937 23:39:08 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144
00:18:19.937 [2024-09-28 23:39:08.072910] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization...
00:18:19.937 [2024-09-28 23:39:08.073028] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75263 ]
00:18:20.195 [2024-09-28 23:39:08.222679] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:20.452 [2024-09-28 23:39:08.399761] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:18:20.712 [2024-09-28 23:39:08.648026] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:18:20.712 [2024-09-28 23:39:08.648089] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:18:20.712 [2024-09-28 23:39:08.800931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:20.712 [2024-09-28 23:39:08.800981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:18:20.712 [2024-09-28 23:39:08.800994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:18:20.712 [2024-09-28 23:39:08.801006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:20.712 [2024-09-28 23:39:08.801046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:20.712 [2024-09-28 23:39:08.801056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:18:20.712 [2024-09-28 23:39:08.801064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms
00:18:20.712 [2024-09-28 23:39:08.801072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:20.712 [2024-09-28 23:39:08.801090] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:18:20.712 [2024-09-28 23:39:08.801762] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:18:20.712 [2024-09-28 23:39:08.801784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:20.712 [2024-09-28 23:39:08.801791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:18:20.712 [2024-09-28 23:39:08.801800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.698 ms
00:18:20.712 [2024-09-28 23:39:08.801807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:20.712 [2024-09-28 23:39:08.802893] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:18:20.712 [2024-09-28 23:39:08.814711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:20.712 [2024-09-28 23:39:08.814746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block
00:18:20.712 [2024-09-28 23:39:08.814758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.820 ms
00:18:20.712 [2024-09-28 23:39:08.814766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:20.712 [2024-09-28 23:39:08.814814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:20.712 [2024-09-28 23:39:08.814823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block
00:18:20.712 [2024-09-28 23:39:08.814831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms
00:18:20.712 [2024-09-28 23:39:08.814838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:20.712 [2024-09-28 23:39:08.819423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:20.712 [2024-09-28 23:39:08.819452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:18:20.712 [2024-09-28 23:39:08.819462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.529 ms
00:18:20.712 [2024-09-28 23:39:08.819469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:20.712 [2024-09-28 23:39:08.819543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:20.712 [2024-09-28 23:39:08.819552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:18:20.712 [2024-09-28 23:39:08.819560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms
00:18:20.712 [2024-09-28 23:39:08.819567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:20.712 [2024-09-28 23:39:08.819610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:20.712 [2024-09-28 23:39:08.819620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device
00:18:20.712 [2024-09-28 23:39:08.819627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms
00:18:20.712 [2024-09-28 23:39:08.819635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:20.712 [2024-09-28 23:39:08.819655] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:18:20.712 [2024-09-28 23:39:08.822943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:20.712 [2024-09-28 23:39:08.822971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:18:20.712 [2024-09-28 23:39:08.822980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.293 ms
00:18:20.712 [2024-09-28 23:39:08.822987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:20.712 [2024-09-28 23:39:08.823014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:20.712 [2024-09-28 23:39:08.823022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands
00:18:20.712 [2024-09-28 23:39:08.823030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms
00:18:20.712 [2024-09-28 23:39:08.823036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:20.712 [2024-09-28 23:39:08.823058] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:18:20.712 [2024-09-28 23:39:08.823075] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes
00:18:20.712 [2024-09-28 23:39:08.823109] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
00:18:20.712 [2024-09-28 23:39:08.823123] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes
00:18:20.712 [2024-09-28 23:39:08.823224] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:18:20.712 [2024-09-28 23:39:08.823240] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:18:20.712 [2024-09-28 23:39:08.823251] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
00:18:20.712 [2024-09-28 23:39:08.823263] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
00:18:20.712 [2024-09-28 23:39:08.823271] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
00:18:20.712 [2024-09-28 23:39:08.823280] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520
00:18:20.712 [2024-09-28 23:39:08.823287] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
00:18:20.712 [2024-09-28 23:39:08.823294] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048
00:18:20.712 [2024-09-28 23:39:08.823301] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5
00:18:20.712 [2024-09-28 23:39:08.823308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:20.712 [2024-09-28 23:39:08.823316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout
00:18:20.712 [2024-09-28 23:39:08.823323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.252 ms
00:18:20.712 [2024-09-28 23:39:08.823330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:20.712 [2024-09-28 23:39:08.823418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:20.712 [2024-09-28 23:39:08.823429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout
00:18:20.712 [2024-09-28 23:39:08.823436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms
00:18:20.712 [2024-09-28 23:39:08.823443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:20.712 [2024-09-28 23:39:08.823553] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:18:20.712 [2024-09-28 23:39:08.823568] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb
00:18:20.712 [2024-09-28 23:39:08.823576] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:18:20.712 [2024-09-28 23:39:08.823584] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:18:20.712 [2024-09-28 23:39:08.823592] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p
00:18:20.712 [2024-09-28 23:39:08.823598] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB
00:18:20.712 [2024-09-28 23:39:08.823605] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB
00:18:20.712 [2024-09-28 23:39:08.823611] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md
00:18:20.712 [2024-09-28 23:39:08.823619] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB
00:18:20.712 [2024-09-28 23:39:08.823625] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:18:20.712 [2024-09-28 23:39:08.823632] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror
00:18:20.712 [2024-09-28 23:39:08.823639] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB
00:18:20.712 [2024-09-28 23:39:08.823645] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:18:20.712 [2024-09-28 23:39:08.823657] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md
00:18:20.712 [2024-09-28 23:39:08.823664] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB
00:18:20.712 [2024-09-28 23:39:08.823671] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:18:20.712 [2024-09-28 23:39:08.823678] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror
00:18:20.712 [2024-09-28 23:39:08.823685] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB
00:18:20.712 [2024-09-28 23:39:08.823691] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:18:20.712 [2024-09-28 23:39:08.823698] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0
00:18:20.712 [2024-09-28 23:39:08.823704] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB
00:18:20.712 [2024-09-28 23:39:08.823711] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:18:20.712 [2024-09-28 23:39:08.823717] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1
00:18:20.712 [2024-09-28 23:39:08.823723] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB
00:18:20.712 [2024-09-28 23:39:08.823729] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:18:20.712 [2024-09-28 23:39:08.823736] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2
00:18:20.712 [2024-09-28 23:39:08.823742] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB
00:18:20.712 [2024-09-28 23:39:08.823749] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:18:20.712 [2024-09-28 23:39:08.823755] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3
00:18:20.712 [2024-09-28 23:39:08.823761] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB
00:18:20.712 [2024-09-28 23:39:08.823767] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:18:20.712 [2024-09-28 23:39:08.823774] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md
00:18:20.713 [2024-09-28 23:39:08.823780] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB
00:18:20.713 [2024-09-28 23:39:08.823786] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:18:20.713 [2024-09-28 23:39:08.823792] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror
00:18:20.713 [2024-09-28 23:39:08.823798] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB
00:18:20.713 [2024-09-28 23:39:08.823805] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:18:20.713 [2024-09-28 23:39:08.823811] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log
00:18:20.713 [2024-09-28 23:39:08.823817] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB
00:18:20.713 [2024-09-28 23:39:08.823824] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:18:20.713 [2024-09-28 23:39:08.823830] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror
00:18:20.713 [2024-09-28 23:39:08.823836] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB
00:18:20.713 [2024-09-28 23:39:08.823842] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:18:20.713 [2024-09-28 23:39:08.823849] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:18:20.713 [2024-09-28 23:39:08.823856] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror
00:18:20.713 [2024-09-28 23:39:08.823865] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:18:20.713 [2024-09-28 23:39:08.823872] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:18:20.713 [2024-09-28 23:39:08.823880] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap
00:18:20.713 [2024-09-28 23:39:08.823886] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB
00:18:20.713 [2024-09-28 23:39:08.823893] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB
00:18:20.713 [2024-09-28 23:39:08.823899] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm
00:18:20.713 [2024-09-28 23:39:08.823906] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB
00:18:20.713 [2024-09-28 23:39:08.823912] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB
00:18:20.713 [2024-09-28 23:39:08.823919] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:18:20.713 [2024-09-28 23:39:08.823928] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:18:20.713 [2024-09-28 23:39:08.823936] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000
00:18:20.713 [2024-09-28 23:39:08.823944] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80
00:18:20.713 [2024-09-28 23:39:08.823951] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80
00:18:20.713 [2024-09-28 23:39:08.823957] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800
00:18:20.713 [2024-09-28 23:39:08.823965] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800
00:18:20.713 [2024-09-28 23:39:08.823971] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800
00:18:20.713 [2024-09-28 23:39:08.823978] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800
00:18:20.713 [2024-09-28 23:39:08.823985] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40
00:18:20.713 [2024-09-28 23:39:08.823993] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40
00:18:20.713 [2024-09-28 23:39:08.824000] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20
00:18:20.713 [2024-09-28 23:39:08.824007] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20
00:18:20.713 [2024-09-28 23:39:08.824014] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20
00:18:20.713 [2024-09-28 23:39:08.824021] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20
00:18:20.713 [2024-09-28 23:39:08.824027] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0
00:18:20.713 [2024-09-28 23:39:08.824034] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:18:20.713 [2024-09-28 23:39:08.824042] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:18:20.713 [2024-09-28 23:39:08.824050] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:18:20.713 [2024-09-28 23:39:08.824057] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:18:20.713 [2024-09-28 23:39:08.824064] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:18:20.713 [2024-09-28 23:39:08.824071] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:18:20.713 [2024-09-28 23:39:08.824078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:20.713 [2024-09-28 23:39:08.824086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade
00:18:20.713 [2024-09-28 23:39:08.824093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.604 ms
00:18:20.713 [2024-09-28 23:39:08.824100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:20.713 [2024-09-28 23:39:08.866455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:20.713 [2024-09-28 23:39:08.866502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:18:20.713 [2024-09-28 23:39:08.866525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.299 ms
00:18:20.713 [2024-09-28 23:39:08.866533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:20.713 [2024-09-28 23:39:08.866624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:20.713 [2024-09-28 23:39:08.866633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses
00:18:20.713 [2024-09-28 23:39:08.866642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms
00:18:20.713 [2024-09-28 23:39:08.866649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:20.971 [2024-09-28 23:39:08.896560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:20.971 [2024-09-28 23:39:08.896593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:18:20.972 [2024-09-28 23:39:08.896605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.858 ms
00:18:20.972 [2024-09-28 23:39:08.896613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:20.972 [2024-09-28 23:39:08.896642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:20.972 [2024-09-28 23:39:08.896650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:18:20.972 [2024-09-28 23:39:08.896658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms
00:18:20.972 [2024-09-28 23:39:08.896665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:20.972 [2024-09-28 23:39:08.897008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:20.972 [2024-09-28 23:39:08.897036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:18:20.972 [2024-09-28 23:39:08.897045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.288 ms
00:18:20.972 [2024-09-28 23:39:08.897057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:20.972 [2024-09-28 23:39:08.897178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:20.972 [2024-09-28 23:39:08.897193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:18:20.972 [2024-09-28 23:39:08.897202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms
00:18:20.972 [2024-09-28 23:39:08.897209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:20.972 [2024-09-28 23:39:08.909412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:20.972 [2024-09-28 23:39:08.909444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:18:20.972 [2024-09-28 23:39:08.909454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.183 ms
00:18:20.972 [2024-09-28 23:39:08.909461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:20.972 [2024-09-28 23:39:08.921785] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2
00:18:20.972 [2024-09-28 23:39:08.921818] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:18:20.972 [2024-09-28 23:39:08.921829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:20.972 [2024-09-28 23:39:08.921836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata
00:18:20.972 [2024-09-28 23:39:08.921845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.263 ms
00:18:20.972 [2024-09-28 23:39:08.921851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:20.972 [2024-09-28 23:39:08.945800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:20.972 [2024-09-28 23:39:08.945834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata
00:18:20.972 [2024-09-28 23:39:08.945845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.912 ms
00:18:20.972 [2024-09-28 23:39:08.945853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:20.972 [2024-09-28 23:39:08.957142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:20.972 [2024-09-28 23:39:08.957173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata
00:18:20.972 [2024-09-28 23:39:08.957182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.253 ms
00:18:20.972 [2024-09-28 23:39:08.957189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:20.972 [2024-09-28 23:39:08.968108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:20.972 [2024-09-28 23:39:08.968138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata
00:18:20.972 [2024-09-28 23:39:08.968147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.880 ms
00:18:20.972 [2024-09-28 23:39:08.968155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:20.972 [2024-09-28 23:39:08.968763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:20.972 [2024-09-28 23:39:08.968787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing
00:18:20.972 [2024-09-28 23:39:08.968796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.529 ms
00:18:20.972 [2024-09-28 23:39:08.968803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:20.972 [2024-09-28 23:39:09.023260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:20.972 [2024-09-28 23:39:09.023305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints
00:18:20.972 [2024-09-28 23:39:09.023317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.441 ms
00:18:20.972 [2024-09-28 23:39:09.023325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:20.972 [2024-09-28 23:39:09.033372] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB
00:18:20.972 [2024-09-28 23:39:09.035533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:20.972 [2024-09-28 23:39:09.035563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P
00:18:20.972 [2024-09-28 23:39:09.035574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.166 ms
00:18:20.972 [2024-09-28 23:39:09.035586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:20.972 [2024-09-28 23:39:09.035667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:20.972 [2024-09-28 23:39:09.035678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P
00:18:20.972 [2024-09-28 23:39:09.035688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms
00:18:20.972 [2024-09-28 23:39:09.035697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:20.972 [2024-09-28 23:39:09.035760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:20.972 [2024-09-28 23:39:09.035771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization
00:18:20.972 [2024-09-28 23:39:09.035780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms
00:18:20.972 [2024-09-28 23:39:09.035789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:20.972 [2024-09-28 23:39:09.035809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:20.972 [2024-09-28 23:39:09.035817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller
00:18:20.972 [2024-09-28 23:39:09.035825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:18:20.972 [2024-09-28 23:39:09.035832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:20.972 [2024-09-28 23:39:09.035860] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:18:20.972 [2024-09-28 23:39:09.035876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:20.972 [2024-09-28 23:39:09.035883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup
00:18:20.972 [2024-09-28 23:39:09.035893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms
00:18:20.972 [2024-09-28 23:39:09.035901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:20.972 [2024-09-28 23:39:09.059066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:20.972 [2024-09-28 23:39:09.059099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:18:20.972 [2024-09-28 23:39:09.059110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.148 ms
00:18:20.972 [2024-09-28 23:39:09.059118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:20.972 [2024-09-28 23:39:09.059187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:20.972 [2024-09-28 23:39:09.059197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:18:20.972 [2024-09-28 23:39:09.059206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms
00:18:20.972 [2024-09-28 23:39:09.059213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:20.972 [2024-09-28 23:39:09.060063] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 258.741 ms, result 0
00:18:43.004  Copying: 48/1024 [MB] (48 MBps) Copying: 97/1024 [MB] (49 MBps) Copying: 148/1024 [MB] (50 MBps) Copying: 198/1024 [MB] (50 MBps) Copying: 247/1024 [MB] (49 MBps) Copying: 295/1024 [MB] (48 MBps) Copying: 342/1024 [MB] (46 MBps) Copying: 390/1024 [MB] (48 MBps) Copying: 440/1024 [MB] (49 MBps) Copying: 489/1024 [MB] (49 MBps) Copying: 539/1024 [MB] (50 MBps) Copying: 589/1024 [MB] (49 MBps) Copying: 639/1024 [MB] (49 MBps) Copying: 689/1024 [MB] (50 MBps) Copying: 738/1024 [MB] (48 MBps) Copying: 783/1024 [MB] (45 MBps) Copying: 833/1024 [MB] (49 MBps) Copying: 883/1024 [MB] (50 MBps) Copying: 933/1024 [MB] (50 MBps) Copying: 981/1024 [MB] (47 MBps) Copying: 1024/1024 [MB] (average 49 MBps)
[2024-09-28 23:39:30.927676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:43.004 [2024-09-28 23:39:30.927737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:18:43.004 [2024-09-28 23:39:30.927751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:18:43.004 [2024-09-28 23:39:30.927763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:43.004 [2024-09-28 23:39:30.927784] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:18:43.004 [2024-09-28 23:39:30.930389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:43.004 [2024-09-28 23:39:30.930422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:18:43.004 [2024-09-28 23:39:30.930432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.590 ms
00:18:43.004 [2024-09-28 23:39:30.930440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:43.004 [2024-09-28 23:39:30.930674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:43.004 [2024-09-28 23:39:30.930691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:18:43.004 [2024-09-28 23:39:30.930700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.213 ms
00:18:43.004 [2024-09-28 23:39:30.930708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:43.004 [2024-09-28 23:39:30.934136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:43.004 [2024-09-28 23:39:30.934157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:18:43.004 [2024-09-28 23:39:30.934165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.413 ms
00:18:43.004 [2024-09-28 23:39:30.934172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:43.004 [2024-09-28 23:39:30.941505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:43.004 [2024-09-28 23:39:30.941556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:18:43.004 [2024-09-28 23:39:30.941566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.319 ms
00:18:43.004 [2024-09-28 23:39:30.941574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:43.004 [2024-09-28 23:39:30.966313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:43.004 [2024-09-28 23:39:30.966353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:18:43.004 [2024-09-28 23:39:30.966365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.674 ms
00:18:43.004 [2024-09-28 23:39:30.966372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:43.004 [2024-09-28 23:39:30.980296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:43.004 [2024-09-28 23:39:30.980338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:18:43.004 [2024-09-28 23:39:30.980350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.902 ms
00:18:43.004 [2024-09-28 23:39:30.980359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:43.004 [2024-09-28 23:39:30.980489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:43.004 [2024-09-28 23:39:30.980500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:18:43.005 [2024-09-28 23:39:30.980526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms
00:18:43.005 [2024-09-28 23:39:30.980534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:43.005 [2024-09-28 23:39:31.005958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:43.005 [2024-09-28 23:39:31.005994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata
00:18:43.005 [2024-09-28 23:39:31.006005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.409 ms
00:18:43.005 [2024-09-28 23:39:31.006013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:43.005 [2024-09-28 23:39:31.028228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:43.005 [2024-09-28 23:39:31.028264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata
00:18:43.005 [2024-09-28 23:39:31.028275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.195 ms
00:18:43.005 [2024-09-28 23:39:31.028282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:43.005 [2024-09-28 23:39:31.050327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:43.005 [2024-09-28 23:39:31.050361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:18:43.005 [2024-09-28 23:39:31.050371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.027 ms
00:18:43.005 [2024-09-28 23:39:31.050378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:43.005 [2024-09-28 23:39:31.072331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:43.005 [2024-09-28 23:39:31.072364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:18:43.005 [2024-09-28 23:39:31.072374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.912 ms
00:18:43.005 [2024-09-28 23:39:31.072381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:43.005 [2024-09-28 23:39:31.072398] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:18:43.005 [2024-09-28 23:39:31.072412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free
00:18:43.005 [2024-09-28 23:39:31.072422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free
00:18:43.005 [2024-09-28 23:39:31.072429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free
00:18:43.005 [2024-09-28 23:39:31.072437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free
00:18:43.005 [2024-09-28 23:39:31.072445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free
00:18:43.005 [2024-09-28 23:39:31.072452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free
00:18:43.005 [2024-09-28 23:39:31.072459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free
00:18:43.005 [2024-09-28 23:39:31.072466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free
00:18:43.005 [2024-09-28 23:39:31.072473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free
00:18:43.005 [2024-09-28 23:39:31.072481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free
00:18:43.005 [2024-09-28 23:39:31.072489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free
00:18:43.005 [2024-09-28 23:39:31.072496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free
00:18:43.005 [2024-09-28 23:39:31.072503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free
00:18:43.005 [2024-09-28 23:39:31.072522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free
00:18:43.005 [2024-09-28 23:39:31.072530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free
00:18:43.005 [2024-09-28 23:39:31.072537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free
00:18:43.005 [2024-09-28 23:39:31.072545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free
00:18:43.005 [2024-09-28 23:39:31.072552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free
00:18:43.005 [2024-09-28 23:39:31.072560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free
00:18:43.005 [2024-09-28 23:39:31.072567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free
00:18:43.005 [2024-09-28 23:39:31.072575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free
00:18:43.005 [2024-09-28 23:39:31.072582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free
00:18:43.005 [2024-09-28 23:39:31.072590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free
00:18:43.005 [2024-09-28 23:39:31.072598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free
00:18:43.005 [2024-09-28 23:39:31.072605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free
00:18:43.005 [2024-09-28 23:39:31.072613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free
00:18:43.005 [2024-09-28 23:39:31.072620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free
00:18:43.005 [2024-09-28 23:39:31.072627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free
00:18:43.005 [2024-09-28 23:39:31.072634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free
00:18:43.005 [2024-09-28 23:39:31.072643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free
00:18:43.005 [2024-09-28 23:39:31.072650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free
00:18:43.005 [2024-09-28 23:39:31.072658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free
00:18:43.005 [2024-09-28 23:39:31.072665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free
00:18:43.005 [2024-09-28 23:39:31.072672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free
00:18:43.005 [2024-09-28 23:39:31.072680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free
00:18:43.005 [2024-09-28 23:39:31.072687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free
00:18:43.005 [2024-09-28 23:39:31.072694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free
00:18:43.005 [2024-09-28 23:39:31.072701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free
00:18:43.005 [2024-09-28 23:39:31.072708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free
00:18:43.005 [2024-09-28 23:39:31.072715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free
00:18:43.005 [2024-09-28 23:39:31.072723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free
00:18:43.005 [2024-09-28 23:39:31.072730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free
00:18:43.005 [2024-09-28 23:39:31.072737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free
00:18:43.005 [2024-09-28 23:39:31.072744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free
00:18:43.005 [2024-09-28 23:39:31.072752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free
00:18:43.005 [2024-09-28 23:39:31.072759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free
00:18:43.005 [2024-09-28 23:39:31.072767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free
00:18:43.005 [2024-09-28 23:39:31.072774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free
00:18:43.005 [2024-09-28 23:39:31.072781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free
00:18:43.005 [2024-09-28 23:39:31.072789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free
00:18:43.005 [2024-09-28 23:39:31.072796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free
00:18:43.005 [2024-09-28 23:39:31.072803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free
00:18:43.005 [2024-09-28 23:39:31.072811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free
00:18:43.005 [2024-09-28 23:39:31.072818] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:43.005 [2024-09-28 23:39:31.072825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:43.005 [2024-09-28 23:39:31.072833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:43.005 [2024-09-28 23:39:31.072840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:43.005 [2024-09-28 23:39:31.072847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:43.005 [2024-09-28 23:39:31.072854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:43.005 [2024-09-28 23:39:31.072861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:43.005 [2024-09-28 23:39:31.072868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:43.005 [2024-09-28 23:39:31.072876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:43.005 [2024-09-28 23:39:31.072883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:43.005 [2024-09-28 23:39:31.072892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:43.005 [2024-09-28 23:39:31.072899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:43.005 [2024-09-28 23:39:31.072906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:43.005 [2024-09-28 23:39:31.072913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:43.005 [2024-09-28 23:39:31.072921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:43.005 [2024-09-28 23:39:31.072928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:43.005 [2024-09-28 23:39:31.072936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:43.005 [2024-09-28 23:39:31.072943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:43.005 [2024-09-28 23:39:31.072951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:43.005 [2024-09-28 23:39:31.072959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:43.005 [2024-09-28 23:39:31.072966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:43.005 [2024-09-28 23:39:31.072973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:43.005 [2024-09-28 23:39:31.072981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:43.006 [2024-09-28 23:39:31.072989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:43.006 [2024-09-28 23:39:31.072996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:43.006 [2024-09-28 
23:39:31.073003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:43.006 [2024-09-28 23:39:31.073010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:43.006 [2024-09-28 23:39:31.073018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:43.006 [2024-09-28 23:39:31.073026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:43.006 [2024-09-28 23:39:31.073033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:43.006 [2024-09-28 23:39:31.073041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:43.006 [2024-09-28 23:39:31.073048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:43.006 [2024-09-28 23:39:31.073056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:43.006 [2024-09-28 23:39:31.073063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:43.006 [2024-09-28 23:39:31.073070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:43.006 [2024-09-28 23:39:31.073077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:43.006 [2024-09-28 23:39:31.073085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:43.006 [2024-09-28 23:39:31.073092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:43.006 [2024-09-28 23:39:31.073099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:43.006 [2024-09-28 23:39:31.073106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:43.006 [2024-09-28 23:39:31.073114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:43.006 [2024-09-28 23:39:31.073122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:43.006 [2024-09-28 23:39:31.073129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:43.006 [2024-09-28 23:39:31.073137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:43.006 [2024-09-28 23:39:31.073144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:43.006 [2024-09-28 23:39:31.073151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:43.006 [2024-09-28 23:39:31.073158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:43.006 [2024-09-28 23:39:31.073174] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:43.006 [2024-09-28 23:39:31.073181] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a7631990-80f8-4fde-90d3-f206050c63ad 00:18:43.006 [2024-09-28 23:39:31.073189] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:43.006 [2024-09-28 23:39:31.073196] ftl_debug.c: 
00:18:43.006 [2024-09-28 23:39:31.073196] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:18:43.006 [2024-09-28 23:39:31.073204] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:18:43.006 [2024-09-28 23:39:31.073212] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:18:43.006 [2024-09-28 23:39:31.073219] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:18:43.006 [2024-09-28 23:39:31.073230] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:18:43.006 [2024-09-28 23:39:31.073237] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:18:43.006 [2024-09-28 23:39:31.073244] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:18:43.006 [2024-09-28 23:39:31.073250] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:18:43.006 [2024-09-28 23:39:31.073257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:43.006 [2024-09-28 23:39:31.073270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:18:43.006 [2024-09-28 23:39:31.073278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.860 ms
00:18:43.006 [2024-09-28 23:39:31.073285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:43.006 [2024-09-28 23:39:31.085397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:43.006 [2024-09-28 23:39:31.085428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:18:43.006 [2024-09-28 23:39:31.085438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.096 ms
00:18:43.006 [2024-09-28 23:39:31.085450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:43.006 [2024-09-28 23:39:31.085810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:43.006 [2024-09-28 23:39:31.085830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:18:43.006 [2024-09-28 23:39:31.085838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.343 ms
00:18:43.006 [2024-09-28 23:39:31.085845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:43.006 [2024-09-28 23:39:31.113537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:43.006 [2024-09-28 23:39:31.113572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:18:43.006 [2024-09-28 23:39:31.113581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:18:43.006 [2024-09-28 23:39:31.113593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:43.006 [2024-09-28 23:39:31.113647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:43.006 [2024-09-28 23:39:31.113655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:18:43.006 [2024-09-28 23:39:31.113662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:18:43.006 [2024-09-28 23:39:31.113669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:43.006 [2024-09-28 23:39:31.113717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:43.006 [2024-09-28 23:39:31.113727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:18:43.006 [2024-09-28 23:39:31.113734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:18:43.006 [2024-09-28 23:39:31.113741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:43.006 [2024-09-28 23:39:31.113759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:43.006 [2024-09-28 23:39:31.113766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:18:43.006 [2024-09-28 23:39:31.113773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:18:43.006 [2024-09-28 23:39:31.113780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:43.265 [2024-09-28 23:39:31.189105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:43.265 [2024-09-28 23:39:31.189155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:18:43.265 [2024-09-28 23:39:31.189165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:18:43.265 [2024-09-28 23:39:31.189177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:43.265 [2024-09-28 23:39:31.250500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:43.265 [2024-09-28 23:39:31.250566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:18:43.265 [2024-09-28 23:39:31.250576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:18:43.265 [2024-09-28 23:39:31.250584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:43.265 [2024-09-28 23:39:31.250649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:43.265 [2024-09-28 23:39:31.250658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:18:43.265 [2024-09-28 23:39:31.250666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:18:43.265 [2024-09-28 23:39:31.250674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:43.265 [2024-09-28 23:39:31.250709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:43.265 [2024-09-28 23:39:31.250718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:18:43.265 [2024-09-28 23:39:31.250725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:18:43.265 [2024-09-28 23:39:31.250733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:43.266 [2024-09-28 23:39:31.250816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:43.266 [2024-09-28 23:39:31.250825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:18:43.266 [2024-09-28 23:39:31.250832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:18:43.266 [2024-09-28 23:39:31.250839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:43.266 [2024-09-28 23:39:31.250864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:43.266 [2024-09-28 23:39:31.250876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:18:43.266 [2024-09-28 23:39:31.250883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:18:43.266 [2024-09-28 23:39:31.250890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:43.266 [2024-09-28 23:39:31.250922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:43.266 [2024-09-28 23:39:31.250930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:18:43.266 [2024-09-28 23:39:31.250938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:18:43.266 [2024-09-28 23:39:31.250945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:43.266 [2024-09-28 23:39:31.250983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:43.266 [2024-09-28 23:39:31.250992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:18:43.266 [2024-09-28 23:39:31.251000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:18:43.266 [2024-09-28 23:39:31.251007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:43.266 [2024-09-28 23:39:31.251108] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 323.412 ms, result 0
00:18:44.202 
00:18:44.202 
00:18:44.202 23:39:32 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:18:46.106 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK
00:18:46.106 23:39:34 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072
00:18:46.106 [2024-09-28 23:39:34.114606] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization...
00:18:46.106 [2024-09-28 23:39:34.114727] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75542 ]
00:18:46.106 [2024-09-28 23:39:34.262434] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:46.365 [2024-09-28 23:39:34.439960] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:18:46.624 [2024-09-28 23:39:34.689443] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:18:46.624 [2024-09-28 23:39:34.689522] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:18:46.883 [2024-09-28 23:39:34.842868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:46.883 [2024-09-28 23:39:34.842912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:18:46.883 [2024-09-28 23:39:34.842924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:18:46.884 [2024-09-28 23:39:34.842936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:46.884 [2024-09-28 23:39:34.842977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:46.884 [2024-09-28 23:39:34.842987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:18:46.884 [2024-09-28 23:39:34.842995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms
00:18:46.884 [2024-09-28 23:39:34.843002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:46.884 [2024-09-28 23:39:34.843018] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:18:46.884 [2024-09-28 23:39:34.843682] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:18:46.884 [2024-09-28 23:39:34.843705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:46.884 [2024-09-28 23:39:34.843713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:18:46.884 [2024-09-28 23:39:34.843721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.691 ms
00:18:46.884 [2024-09-28 23:39:34.843729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:46.884 [2024-09-28 23:39:34.844747] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:18:46.884 [2024-09-28 23:39:34.857137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:46.884 [2024-09-28 23:39:34.857176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block
00:18:46.884 [2024-09-28 23:39:34.857187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.390 ms
00:18:46.884 [2024-09-28 23:39:34.857194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:46.884 [2024-09-28 23:39:34.857246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:46.884 [2024-09-28 23:39:34.857256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block
00:18:46.884 [2024-09-28 23:39:34.857264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms
00:18:46.884 [2024-09-28 23:39:34.857270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:46.884 [2024-09-28 23:39:34.861895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:46.884 [2024-09-28 23:39:34.861928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:18:46.884 [2024-09-28 23:39:34.861937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.566 ms
00:18:46.884 [2024-09-28 23:39:34.861944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:46.884 [2024-09-28 23:39:34.862016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:46.884 [2024-09-28 23:39:34.862025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:18:46.884 [2024-09-28 23:39:34.862033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms
00:18:46.884 [2024-09-28 23:39:34.862040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:46.884 [2024-09-28 23:39:34.862080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:46.884 [2024-09-28 23:39:34.862089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device
00:18:46.884 [2024-09-28 23:39:34.862097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms
00:18:46.884 [2024-09-28 23:39:34.862104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:46.884 [2024-09-28 23:39:34.862124] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:18:46.884 [2024-09-28 23:39:34.865494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:46.884 [2024-09-28 23:39:34.865530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:18:46.884 [2024-09-28 23:39:34.865539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.374 ms
00:18:46.884 [2024-09-28 23:39:34.865546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:46.884 [2024-09-28 23:39:34.865573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:46.884 [2024-09-28 23:39:34.865581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands
00:18:46.884 [2024-09-28 23:39:34.865589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms
00:18:46.884 [2024-09-28 23:39:34.865596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:46.884 [2024-09-28 23:39:34.865617] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:18:46.884 [2024-09-28 23:39:34.865634] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes
00:18:46.884 [2024-09-28 23:39:34.865667] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
00:18:46.884 [2024-09-28 23:39:34.865681] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes
00:18:46.884 [2024-09-28 23:39:34.865783] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:18:46.884 [2024-09-28 23:39:34.865799] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:18:46.884 [2024-09-28 23:39:34.865809] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
00:18:46.884 [2024-09-28 23:39:34.865821] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
00:18:46.884 [2024-09-28 23:39:34.865830] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
00:18:46.884 [2024-09-28 23:39:34.865838] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520
00:18:46.884 [2024-09-28 23:39:34.865845] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
00:18:46.884 [2024-09-28 23:39:34.865852] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048
00:18:46.884 [2024-09-28 23:39:34.865859] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5
00:18:46.884 [2024-09-28 23:39:34.865867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:46.884 [2024-09-28 23:39:34.865874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout
00:18:46.884 [2024-09-28 23:39:34.865882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.252 ms
00:18:46.884 [2024-09-28 23:39:34.865889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:46.884 [2024-09-28 23:39:34.865970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:46.884 [2024-09-28 23:39:34.865985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout
00:18:46.884 [2024-09-28 23:39:34.865993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms
00:18:46.884 [2024-09-28 23:39:34.866000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:46.884 [2024-09-28 23:39:34.866112] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:18:46.884 [2024-09-28 23:39:34.866127] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb
00:18:46.884 [2024-09-28 23:39:34.866135] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:18:46.884 [2024-09-28 23:39:34.866143] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:18:46.884 [2024-09-28 23:39:34.866150] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p
00:18:46.884 [2024-09-28 23:39:34.866157] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB
00:18:46.884 [2024-09-28 23:39:34.866164] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB
00:18:46.884 [2024-09-28 23:39:34.866170] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md
00:18:46.884 [2024-09-28 23:39:34.866177] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB
00:18:46.884 [2024-09-28 23:39:34.866183] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:18:46.884 [2024-09-28 23:39:34.866191] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror
00:18:46.884 [2024-09-28 23:39:34.866198] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB
00:18:46.884 [2024-09-28 23:39:34.866204] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:18:46.884 [2024-09-28 23:39:34.866216] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md
00:18:46.884 [2024-09-28 23:39:34.866222] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB
00:18:46.884 [2024-09-28 23:39:34.866230] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:18:46.884 [2024-09-28 23:39:34.866236] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror
00:18:46.884 [2024-09-28 23:39:34.866243] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB
00:18:46.884 [2024-09-28 23:39:34.866249] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:18:46.884 [2024-09-28 23:39:34.866255] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0
00:18:46.884 [2024-09-28 23:39:34.866261] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB
00:18:46.884 [2024-09-28 23:39:34.866268] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:18:46.884 [2024-09-28 23:39:34.866274] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1
00:18:46.884 [2024-09-28 23:39:34.866280] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB
00:18:46.884 [2024-09-28 23:39:34.866286] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:18:46.884 [2024-09-28 23:39:34.866292] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2
00:18:46.884 [2024-09-28 23:39:34.866298] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB
00:18:46.884 [2024-09-28 23:39:34.866305] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:18:46.884 [2024-09-28 23:39:34.866311] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3
00:18:46.884 [2024-09-28 23:39:34.866317] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB
00:18:46.884 [2024-09-28 23:39:34.866323] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:18:46.884 [2024-09-28 23:39:34.866330] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md
00:18:46.884 [2024-09-28 23:39:34.866336] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB
00:18:46.884 [2024-09-28 23:39:34.866342] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:18:46.884 [2024-09-28 23:39:34.866348] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror
00:18:46.884 [2024-09-28 23:39:34.866354] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB
00:18:46.884 [2024-09-28 23:39:34.866360] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:18:46.884 [2024-09-28 23:39:34.866366] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log
00:18:46.884 [2024-09-28 23:39:34.866372] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB
00:18:46.884 [2024-09-28 23:39:34.866378] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:18:46.884 [2024-09-28 23:39:34.866385] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror
00:18:46.884 [2024-09-28 23:39:34.866391] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB
00:18:46.885 [2024-09-28 23:39:34.866397] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:18:46.885 [2024-09-28 23:39:34.866403] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:18:46.885 [2024-09-28 23:39:34.866411] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror
00:18:46.885 [2024-09-28 23:39:34.866419] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:18:46.885 [2024-09-28 23:39:34.866426] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:18:46.885 [2024-09-28 23:39:34.866435] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap
00:18:46.885 [2024-09-28 23:39:34.866442] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB
00:18:46.885 [2024-09-28 23:39:34.866448] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB
00:18:46.885 [2024-09-28 23:39:34.866455] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm
00:18:46.885 [2024-09-28 23:39:34.866461] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB
00:18:46.885 [2024-09-28 23:39:34.866468] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB
00:18:46.885 [2024-09-28 23:39:34.866476] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:18:46.885 [2024-09-28 23:39:34.866484] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:18:46.885 [2024-09-28 23:39:34.866492] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000
00:18:46.885 [2024-09-28 23:39:34.866499] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80
00:18:46.885 [2024-09-28 23:39:34.866533] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80
00:18:46.885 [2024-09-28 23:39:34.866542] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800
00:18:46.885 [2024-09-28 23:39:34.866549] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800
00:18:46.885 [2024-09-28 23:39:34.866557] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800
00:18:46.885 [2024-09-28 23:39:34.866564] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800
00:18:46.885 [2024-09-28 23:39:34.866570] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40
00:18:46.885 [2024-09-28 23:39:34.866578] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40
00:18:46.885 [2024-09-28 23:39:34.866585] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20
00:18:46.885 [2024-09-28 23:39:34.866592] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20
00:18:46.885 [2024-09-28 23:39:34.866598] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20
00:18:46.885 [2024-09-28 23:39:34.866605] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20
00:18:46.885 [2024-09-28 23:39:34.866612] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0
00:18:46.885 [2024-09-28 23:39:34.866620] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:18:46.885 [2024-09-28 23:39:34.866627] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:18:46.885 [2024-09-28 23:39:34.866635] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:18:46.885 [2024-09-28 23:39:34.866642] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:18:46.885 [2024-09-28 23:39:34.866649] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:18:46.885 [2024-09-28 23:39:34.866656] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:18:46.885 [2024-09-28 23:39:34.866663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:46.885 [2024-09-28 23:39:34.866670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade
00:18:46.885 [2024-09-28 23:39:34.866678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.619 ms
00:18:46.885 [2024-09-28 23:39:34.866685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:46.885 [2024-09-28 23:39:34.902694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:46.885 [2024-09-28 23:39:34.902746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:18:46.885 [2024-09-28 23:39:34.902762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.963 ms
00:18:46.885 [2024-09-28 23:39:34.902773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:46.885 [2024-09-28 23:39:34.902887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:46.885 [2024-09-28 23:39:34.902895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses
00:18:46.885 [2024-09-28 23:39:34.902903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms
00:18:46.885 [2024-09-28 23:39:34.902910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:46.885 [2024-09-28 23:39:34.932963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:46.885 [2024-09-28 23:39:34.933001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:18:46.885 [2024-09-28 23:39:34.933013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.997 ms
00:18:46.885 [2024-09-28 23:39:34.933021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:46.885 [2024-09-28 23:39:34.933053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:46.885 [2024-09-28 23:39:34.933062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:18:46.885 [2024-09-28 23:39:34.933070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms
00:18:46.885 [2024-09-28 23:39:34.933077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:46.885 [2024-09-28 23:39:34.933416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:46.885 [2024-09-28 23:39:34.933441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:18:46.885 [2024-09-28 23:39:34.933450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.294 ms
00:18:46.885 [2024-09-28 23:39:34.933461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:46.885 [2024-09-28 23:39:34.933591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:46.885 [2024-09-28 23:39:34.933600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:18:46.885 [2024-09-28 23:39:34.933609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.114 ms
00:18:46.885 [2024-09-28 23:39:34.933616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:46.885 [2024-09-28 23:39:34.945774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:46.885 [2024-09-28 23:39:34.945810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:18:46.885 [2024-09-28 23:39:34.945820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.139 ms
00:18:46.885 [2024-09-28 23:39:34.945827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:46.885 [2024-09-28 23:39:34.958121] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2
00:18:46.885 [2024-09-28 23:39:34.958156] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:18:46.885 [2024-09-28 23:39:34.958167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:46.885 [2024-09-28 23:39:34.958174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata
00:18:46.885 [2024-09-28 23:39:34.958183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.233 ms
00:18:46.885 [2024-09-28 23:39:34.958190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:46.885 [2024-09-28 23:39:34.982356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:46.885 [2024-09-28 23:39:34.982391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata
00:18:46.885 [2024-09-28 23:39:34.982401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.129 ms
00:18:46.885 [2024-09-28 23:39:34.982408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:46.885 [2024-09-28 23:39:34.993799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:46.885 [2024-09-28 23:39:34.993831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata
00:18:46.885 [2024-09-28 23:39:34.993840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.355 ms
00:18:46.885 [2024-09-28 23:39:34.993847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:46.885 [2024-09-28 23:39:35.005143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:46.885 [2024-09-28 23:39:35.005173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata
00:18:46.885 [2024-09-28 23:39:35.005183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.256 ms
00:18:46.885 [2024-09-28 23:39:35.005190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:46.885 [2024-09-28 23:39:35.005804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:46.885 [2024-09-28 23:39:35.005829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing
00:18:46.885 [2024-09-28 23:39:35.005838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.535 ms
00:18:46.885 [2024-09-28 23:39:35.005845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:47.145 [2024-09-28 23:39:35.060203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:47.145 [2024-09-28 23:39:35.060256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints
00:18:47.145 [2024-09-28 23:39:35.060268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.340 ms
00:18:47.145 [2024-09-28 23:39:35.060276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:47.145 [2024-09-28 23:39:35.070617] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB
00:18:47.145 [2024-09-28 23:39:35.072883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:47.145 [2024-09-28 23:39:35.072914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P
00:18:47.145 [2024-09-28 23:39:35.072930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.562 ms
00:18:47.145 [2024-09-28 23:39:35.072939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:47.145 [2024-09-28 23:39:35.073026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:47.145 [2024-09-28 23:39:35.073038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P
00:18:47.145 [2024-09-28 23:39:35.073047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms
00:18:47.145 [2024-09-28 23:39:35.073055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:47.145 [2024-09-28 23:39:35.073120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:47.145 [2024-09-28 23:39:35.073130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization
00:18:47.145 [2024-09-28 23:39:35.073140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms
00:18:47.145 [2024-09-28 23:39:35.073150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:47.145 [2024-09-28 23:39:35.073170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:47.145 [2024-09-28 23:39:35.073179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller
00:18:47.145 [2024-09-28 23:39:35.073188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
00:18:47.145 [2024-09-28 23:39:35.073196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:47.145 [2024-09-28 23:39:35.073226] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:18:47.145 [2024-09-28 23:39:35.073236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:47.145 [2024-09-28 23:39:35.073246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup
00:18:47.145 [2024-09-28 23:39:35.073254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms
00:18:47.145 [2024-09-28 23:39:35.073260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:47.145 [2024-09-28 23:39:35.095898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:47.145 [2024-09-28 23:39:35.095935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:18:47.145 [2024-09-28 23:39:35.095945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.619 ms
00:18:47.145 [2024-09-28 23:39:35.095953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:47.145 [2024-09-28 23:39:35.096020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:47.145 [2024-09-28 23:39:35.096030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:18:47.145 [2024-09-28 23:39:35.096038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms
00:18:47.145 [2024-09-28 23:39:35.096047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:47.145 [2024-09-28 23:39:35.097295] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 253.956 ms, result 0
00:19:10.079  Copying: 45/1024 [MB] (45 MBps) Copying: 96/1024 [MB] (51 MBps) Copying: 142/1024 [MB] (45 MBps) Copying: 187/1024 [MB] (45 MBps) Copying: 233/1024 [MB] (46 MBps) Copying: 278/1024 [MB] (44 MBps) Copying: 323/1024 [MB] (44 MBps) Copying: 368/1024 [MB] (45 MBps) Copying: 415/1024 [MB] (46 MBps) Copying: 464/1024 [MB] (49 MBps) Copying: 510/1024 [MB] (45 MBps) Copying: 559/1024 [MB] (49 MBps) Copying: 604/1024 [MB] (45 MBps) Copying: 652/1024 [MB] (47 MBps) Copying: 705/1024 [MB] (53 MBps) Copying: 752/1024 [MB] (47 MBps) Copying: 797/1024 [MB] (44 MBps) Copying: 843/1024 [MB] (46 MBps) Copying: 889/1024 [MB] (45 MBps) Copying: 935/1024 [MB] (46 MBps) Copying: 982/1024 [MB] (46 MBps) Copying: 1023/1024 [MB] (41 MBps) Copying: 1024/1024 [MB] (average 44 MBps)
[2024-09-28 23:39:58.045076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:10.079 [2024-09-28 23:39:58.045135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:19:10.079 [2024-09-28 23:39:58.045149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:19:10.079 [2024-09-28 23:39:58.045157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:10.079 [2024-09-28 23:39:58.046112] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:19:10.079 [2024-09-28 23:39:58.050471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:10.079 [2024-09-28 23:39:58.050514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:19:10.079 [2024-09-28 23:39:58.050529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.330 ms
00:19:10.079 [2024-09-28 23:39:58.050537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:10.079 [2024-09-28 23:39:58.063122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:10.079 [2024-09-28 23:39:58.063159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:19:10.079 [2024-09-28 23:39:58.063170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.113 ms
00:19:10.079 [2024-09-28 23:39:58.063178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:10.079 [2024-09-28 23:39:58.081937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:10.079 [2024-09-28 23:39:58.081969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:19:10.079 [2024-09-28 23:39:58.081979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.744 ms
00:19:10.079 [2024-09-28 23:39:58.081994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:10.079 [2024-09-28 23:39:58.088222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:10.079 [2024-09-28 23:39:58.088263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:19:10.079 [2024-09-28 23:39:58.088273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.202 ms
00:19:10.079 [2024-09-28 23:39:58.088281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:10.079 [2024-09-28 23:39:58.111277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:10.079 [2024-09-28 23:39:58.111313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:19:10.079 [2024-09-28 23:39:58.111323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.957 ms
00:19:10.079 [2024-09-28 23:39:58.111332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:10.079 [2024-09-28 23:39:58.125213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:10.079 [2024-09-28 23:39:58.125246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:19:10.079 [2024-09-28 23:39:58.125257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.852 ms
00:19:10.079 [2024-09-28 23:39:58.125265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:10.079 [2024-09-28 23:39:58.177552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:10.079 [2024-09-28 23:39:58.177593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:19:10.079 [2024-09-28 23:39:58.177603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 52.254 ms
00:19:10.079 [2024-09-28 23:39:58.177611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:10.079 [2024-09-28 23:39:58.200671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:10.079 [2024-09-28 23:39:58.200706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata
00:19:10.079 [2024-09-28 23:39:58.200716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.047 ms
00:19:10.079 [2024-09-28 23:39:58.200723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:10.079 [2024-09-28 23:39:58.222912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:10.079 [2024-09-28 23:39:58.222943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata
00:19:10.079 [2024-09-28 23:39:58.222953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.159 ms
00:19:10.079 [2024-09-28 23:39:58.222960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:10.079 [2024-09-28 23:39:58.245067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:10.079 [2024-09-28 23:39:58.245098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:19:10.079 [2024-09-28 23:39:58.245107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.078 ms
00:19:10.079 [2024-09-28 23:39:58.245114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:10.339 [2024-09-28 23:39:58.267176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:10.339 [2024-09-28 23:39:58.267207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:19:10.339 [2024-09-28 23:39:58.267216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.011 ms
00:19:10.339 [2024-09-28 23:39:58.267223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:10.339 [2024-09-28 23:39:58.267252] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:19:10.339 [2024-09-28 23:39:58.267266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 121856 / 261120 wr_cnt: 1 state: open
(Bands 2-68 identical: 0 / 261120 wr_cnt: 0 state: free)
00:19:10.340 [2024-09-28 23:39:58.267782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120
wr_cnt: 0 state: free 00:19:10.340 [2024-09-28 23:39:58.267789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:10.340 [2024-09-28 23:39:58.267796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:10.340 [2024-09-28 23:39:58.267803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:10.340 [2024-09-28 23:39:58.267811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:10.340 [2024-09-28 23:39:58.267818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:10.340 [2024-09-28 23:39:58.267826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:10.340 [2024-09-28 23:39:58.267833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:10.340 [2024-09-28 23:39:58.267840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:10.340 [2024-09-28 23:39:58.267847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:10.340 [2024-09-28 23:39:58.267855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:10.340 [2024-09-28 23:39:58.267862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:10.340 [2024-09-28 23:39:58.267869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:10.340 [2024-09-28 23:39:58.267876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:10.340 [2024-09-28 23:39:58.267883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:10.340 [2024-09-28 23:39:58.267890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:10.340 [2024-09-28 23:39:58.267898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:10.340 [2024-09-28 23:39:58.267905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:10.340 [2024-09-28 23:39:58.267912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:10.340 [2024-09-28 23:39:58.267919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:10.340 [2024-09-28 23:39:58.267926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:10.340 [2024-09-28 23:39:58.267933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:10.340 [2024-09-28 23:39:58.267940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:10.340 [2024-09-28 23:39:58.267947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:10.340 [2024-09-28 23:39:58.267954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:10.340 [2024-09-28 23:39:58.267961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:10.340 [2024-09-28 23:39:58.267970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:10.340 [2024-09-28 23:39:58.267977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:10.340 [2024-09-28 23:39:58.267984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:10.340 [2024-09-28 23:39:58.267992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:10.340 [2024-09-28 23:39:58.267999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:10.340 [2024-09-28 23:39:58.268006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:10.340 [2024-09-28 23:39:58.268021] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:10.340 [2024-09-28 23:39:58.268033] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a7631990-80f8-4fde-90d3-f206050c63ad 00:19:10.340 [2024-09-28 23:39:58.268040] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 121856 00:19:10.340 [2024-09-28 23:39:58.268048] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 122816 00:19:10.340 [2024-09-28 23:39:58.268054] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 121856 00:19:10.340 [2024-09-28 23:39:58.268062] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0079 00:19:10.340 [2024-09-28 23:39:58.268069] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:10.340 [2024-09-28 23:39:58.268077] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:10.340 [2024-09-28 23:39:58.268084] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:10.340 [2024-09-28 23:39:58.268090] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:10.340 [2024-09-28 23:39:58.268096] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:10.340 [2024-09-28 23:39:58.268103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.340 [2024-09-28 23:39:58.268116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:10.341 [2024-09-28 23:39:58.268124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.852 ms 00:19:10.341 [2024-09-28 23:39:58.268131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.341 [2024-09-28 23:39:58.280635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.341 [2024-09-28 23:39:58.280666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:10.341 [2024-09-28 23:39:58.280676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.486 ms 00:19:10.341 [2024-09-28 23:39:58.280683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.341 [2024-09-28 23:39:58.281019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.341 [2024-09-28 23:39:58.281038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:10.341 [2024-09-28 23:39:58.281047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.318 ms 00:19:10.341 [2024-09-28 23:39:58.281053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.341 
[2024-09-28 23:39:58.308759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:10.341 [2024-09-28 23:39:58.308794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:10.341 [2024-09-28 23:39:58.308804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:10.341 [2024-09-28 23:39:58.308812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.341 [2024-09-28 23:39:58.308864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:10.341 [2024-09-28 23:39:58.308877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:10.341 [2024-09-28 23:39:58.308885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:10.341 [2024-09-28 23:39:58.308891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.341 [2024-09-28 23:39:58.308958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:10.341 [2024-09-28 23:39:58.308968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:10.341 [2024-09-28 23:39:58.308975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:10.341 [2024-09-28 23:39:58.308982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.341 [2024-09-28 23:39:58.308996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:10.341 [2024-09-28 23:39:58.309003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:10.341 [2024-09-28 23:39:58.309010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:10.341 [2024-09-28 23:39:58.309020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.341 [2024-09-28 23:39:58.385856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:10.341 [2024-09-28 23:39:58.385905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:10.341 [2024-09-28 23:39:58.385916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:10.341 [2024-09-28 23:39:58.385924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.341 [2024-09-28 23:39:58.448618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:10.341 [2024-09-28 23:39:58.448662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:10.341 [2024-09-28 23:39:58.448676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:10.341 [2024-09-28 23:39:58.448684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.341 [2024-09-28 23:39:58.448733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:10.341 [2024-09-28 23:39:58.448742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:10.341 [2024-09-28 23:39:58.448749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:10.341 [2024-09-28 23:39:58.448756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.341 [2024-09-28 23:39:58.448803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:10.341 [2024-09-28 23:39:58.448812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:10.341 [2024-09-28 23:39:58.448819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:10.341 [2024-09-28 23:39:58.448826] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.341 [2024-09-28 23:39:58.448910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:10.341 [2024-09-28 23:39:58.448919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:10.341 [2024-09-28 23:39:58.448927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:10.341 [2024-09-28 23:39:58.448934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.341 [2024-09-28 23:39:58.448959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:10.341 [2024-09-28 23:39:58.448968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:10.341 [2024-09-28 23:39:58.448975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:10.341 [2024-09-28 23:39:58.448982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.341 [2024-09-28 23:39:58.449016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:10.341 [2024-09-28 23:39:58.449024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:10.341 [2024-09-28 23:39:58.449031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:10.341 [2024-09-28 23:39:58.449038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.341 [2024-09-28 23:39:58.449077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:10.341 [2024-09-28 23:39:58.449086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:10.341 [2024-09-28 23:39:58.449094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:10.341 [2024-09-28 23:39:58.449101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.341 [2024-09-28 23:39:58.449205] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 406.898 ms, result 0 00:19:12.873 00:19:12.873 00:19:12.873 23:40:01 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:19:13.132 [2024-09-28 23:40:01.100301] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:19:13.132 [2024-09-28 23:40:01.100428] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75806 ] 00:19:13.132 [2024-09-28 23:40:01.250790] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:13.390 [2024-09-28 23:40:01.431665] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:13.649 [2024-09-28 23:40:01.680640] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:13.649 [2024-09-28 23:40:01.680704] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:13.909 [2024-09-28 23:40:01.834094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.909 [2024-09-28 23:40:01.834150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:13.909 [2024-09-28 23:40:01.834163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:13.909 [2024-09-28 23:40:01.834174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.909 [2024-09-28 23:40:01.834216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.909 [2024-09-28 23:40:01.834226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:13.909 [2024-09-28 23:40:01.834234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:19:13.909 [2024-09-28 23:40:01.834242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.909 [2024-09-28 23:40:01.834261] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:13.909 [2024-09-28 23:40:01.834988] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:13.909 [2024-09-28 23:40:01.835013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.909 [2024-09-28 23:40:01.835022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:13.909 [2024-09-28 23:40:01.835030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.756 ms 00:19:13.909 [2024-09-28 23:40:01.835037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.909 [2024-09-28 23:40:01.836061] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:13.909 [2024-09-28 23:40:01.848179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.909 [2024-09-28 23:40:01.848214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:13.909 [2024-09-28 23:40:01.848226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.119 ms 00:19:13.909 [2024-09-28 23:40:01.848233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.909 [2024-09-28 23:40:01.848288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.909 [2024-09-28 23:40:01.848297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:13.909 [2024-09-28 23:40:01.848306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:19:13.909 [2024-09-28 23:40:01.848313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.909 [2024-09-28 23:40:01.852883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:13.909 [2024-09-28 23:40:01.852917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:13.909 [2024-09-28 23:40:01.852926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.511 ms 00:19:13.909 [2024-09-28 23:40:01.852933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.909 [2024-09-28 23:40:01.853006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.909 [2024-09-28 23:40:01.853015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:13.909 [2024-09-28 23:40:01.853023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:19:13.909 [2024-09-28 23:40:01.853030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.909 [2024-09-28 23:40:01.853071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.909 [2024-09-28 23:40:01.853080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:13.909 [2024-09-28 23:40:01.853088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:13.909 [2024-09-28 23:40:01.853095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.909 [2024-09-28 23:40:01.853115] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:13.909 [2024-09-28 23:40:01.856562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.909 [2024-09-28 23:40:01.856591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:13.909 [2024-09-28 23:40:01.856600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.453 ms 00:19:13.909 [2024-09-28 23:40:01.856607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.909 [2024-09-28 23:40:01.856636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.909 [2024-09-28 23:40:01.856644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:13.909 [2024-09-28 23:40:01.856652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:13.909 [2024-09-28 23:40:01.856659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.910 [2024-09-28 23:40:01.856680] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:13.910 [2024-09-28 23:40:01.856697] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:13.910 [2024-09-28 23:40:01.856730] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:13.910 [2024-09-28 23:40:01.856744] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:13.910 [2024-09-28 23:40:01.856845] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:13.910 [2024-09-28 23:40:01.856860] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:13.910 [2024-09-28 23:40:01.856870] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:13.910 [2024-09-28 23:40:01.856882] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:13.910 [2024-09-28 23:40:01.856891] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:13.910 [2024-09-28 23:40:01.856899] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:13.910 [2024-09-28 23:40:01.856907] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:13.910 [2024-09-28 23:40:01.856914] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:13.910 [2024-09-28 23:40:01.856921] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:13.910 [2024-09-28 23:40:01.856928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.910 [2024-09-28 23:40:01.856936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:13.910 [2024-09-28 23:40:01.856943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.249 ms 00:19:13.910 [2024-09-28 23:40:01.856950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.910 [2024-09-28 23:40:01.857033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.910 [2024-09-28 23:40:01.857042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:13.910 [2024-09-28 23:40:01.857050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:19:13.910 [2024-09-28 23:40:01.857057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.910 [2024-09-28 23:40:01.857170] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:13.910 [2024-09-28 23:40:01.857188] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:13.910 [2024-09-28 23:40:01.857196] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:13.910 [2024-09-28 23:40:01.857204] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:13.910 [2024-09-28 23:40:01.857211] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:13.910 [2024-09-28 23:40:01.857218] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:13.910 [2024-09-28 23:40:01.857225] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:13.910 [2024-09-28 23:40:01.857232] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:13.910 [2024-09-28 23:40:01.857239] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:13.910 [2024-09-28 23:40:01.857246] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:13.910 [2024-09-28 23:40:01.857252] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:13.910 [2024-09-28 23:40:01.857259] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:13.910 [2024-09-28 23:40:01.857266] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:13.910 [2024-09-28 23:40:01.857277] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:13.910 [2024-09-28 23:40:01.857284] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:13.910 [2024-09-28 23:40:01.857290] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:13.910 [2024-09-28 23:40:01.857297] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:13.910 [2024-09-28 23:40:01.857303] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:13.910 [2024-09-28 23:40:01.857309] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:13.910 [2024-09-28 23:40:01.857317] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:13.910 [2024-09-28 23:40:01.857324] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:13.910 [2024-09-28 23:40:01.857330] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:13.910 [2024-09-28 23:40:01.857338] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:13.910 [2024-09-28 23:40:01.857345] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:13.910 [2024-09-28 23:40:01.857351] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:13.910 [2024-09-28 23:40:01.857357] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:13.910 [2024-09-28 23:40:01.857363] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:13.910 [2024-09-28 23:40:01.857370] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:13.910 [2024-09-28 23:40:01.857376] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:13.910 [2024-09-28 23:40:01.857382] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:13.910 [2024-09-28 23:40:01.857388] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:13.910 [2024-09-28 23:40:01.857395] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:13.910 [2024-09-28 23:40:01.857402] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:13.910 [2024-09-28 23:40:01.857408] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:13.910 [2024-09-28 23:40:01.857414] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:13.910 [2024-09-28 23:40:01.857420] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:13.910 [2024-09-28 23:40:01.857427] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:13.910 [2024-09-28 23:40:01.857433] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:13.910 [2024-09-28 23:40:01.857439] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:19:13.910 [2024-09-28 23:40:01.857446] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:13.910 [2024-09-28 23:40:01.857452] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:13.910 [2024-09-28 23:40:01.857459] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:13.910 [2024-09-28 23:40:01.857465] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:13.910 [2024-09-28 23:40:01.857471] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:13.910 [2024-09-28 23:40:01.857478] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:13.910 [2024-09-28 23:40:01.857487] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:13.910 [2024-09-28 23:40:01.857494] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:13.910 [2024-09-28 23:40:01.857501] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:13.910 [2024-09-28 23:40:01.857520] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:13.910 [2024-09-28 23:40:01.857527] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:13.910 
[2024-09-28 23:40:01.857534] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:13.910 [2024-09-28 23:40:01.857541] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:13.910 [2024-09-28 23:40:01.857547] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:13.910 [2024-09-28 23:40:01.857556] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:13.910 [2024-09-28 23:40:01.857565] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:13.910 [2024-09-28 23:40:01.857574] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:13.910 [2024-09-28 23:40:01.857581] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:13.910 [2024-09-28 23:40:01.857588] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:13.910 [2024-09-28 23:40:01.857595] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:13.910 [2024-09-28 23:40:01.857603] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:13.910 [2024-09-28 23:40:01.857610] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:13.910 [2024-09-28 23:40:01.857617] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:13.910 [2024-09-28 23:40:01.857624] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:13.910 [2024-09-28 23:40:01.857631] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:13.910 [2024-09-28 23:40:01.857637] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:13.910 [2024-09-28 23:40:01.857644] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:13.910 [2024-09-28 23:40:01.857651] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:13.910 [2024-09-28 23:40:01.857658] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:13.910 [2024-09-28 23:40:01.857665] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:13.910 [2024-09-28 23:40:01.857672] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:13.910 [2024-09-28 23:40:01.857680] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:13.910 [2024-09-28 23:40:01.857688] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:19:13.910 [2024-09-28 23:40:01.857695] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:13.910 [2024-09-28 23:40:01.857702] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:13.910 [2024-09-28 23:40:01.857709] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:13.910 [2024-09-28 23:40:01.857717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.911 [2024-09-28 23:40:01.857724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:13.911 [2024-09-28 23:40:01.857731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.615 ms 00:19:13.911 [2024-09-28 23:40:01.857738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.911 [2024-09-28 23:40:01.893813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.911 [2024-09-28 23:40:01.894006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:13.911 [2024-09-28 23:40:01.894023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.030 ms 00:19:13.911 [2024-09-28 23:40:01.894032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.911 [2024-09-28 23:40:01.894131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.911 [2024-09-28 23:40:01.894141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:13.911 [2024-09-28 23:40:01.894149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:19:13.911 [2024-09-28 23:40:01.894156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.911 [2024-09-28 23:40:01.923966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.911 [2024-09-28 23:40:01.924004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:13.911 [2024-09-28 23:40:01.924016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.753 ms 00:19:13.911 [2024-09-28 23:40:01.924024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.911 [2024-09-28 23:40:01.924057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.911 [2024-09-28 23:40:01.924065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:13.911 [2024-09-28 23:40:01.924073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:19:13.911 [2024-09-28 23:40:01.924080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.911 [2024-09-28 23:40:01.924413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.911 [2024-09-28 23:40:01.924427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:13.911 [2024-09-28 23:40:01.924436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.285 ms 00:19:13.911 [2024-09-28 23:40:01.924447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.911 [2024-09-28 23:40:01.924590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.911 [2024-09-28 23:40:01.924600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:13.911 [2024-09-28 23:40:01.924610] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.126 ms 00:19:13.911 [2024-09-28 23:40:01.924618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.911 [2024-09-28 23:40:01.936768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.911 [2024-09-28 23:40:01.936895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:13.911 [2024-09-28 23:40:01.936911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.130 ms 00:19:13.911 [2024-09-28 23:40:01.936919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.911 [2024-09-28 23:40:01.949119] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:19:13.911 [2024-09-28 23:40:01.949152] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:13.911 [2024-09-28 23:40:01.949162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.911 [2024-09-28 23:40:01.949170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:13.911 [2024-09-28 23:40:01.949179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.130 ms 00:19:13.911 [2024-09-28 23:40:01.949186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.911 [2024-09-28 23:40:01.973266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.911 [2024-09-28 23:40:01.973316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:13.911 [2024-09-28 23:40:01.973327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.043 ms 00:19:13.911 [2024-09-28 23:40:01.973334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.911 [2024-09-28 23:40:01.984929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.911 [2024-09-28 23:40:01.984962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:13.911 [2024-09-28 23:40:01.984971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.554 ms 00:19:13.911 [2024-09-28 23:40:01.984978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.911 [2024-09-28 23:40:01.996019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.911 [2024-09-28 23:40:01.996144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:13.911 [2024-09-28 23:40:01.996161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.008 ms 00:19:13.911 [2024-09-28 23:40:01.996168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.911 [2024-09-28 23:40:01.996788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.911 [2024-09-28 23:40:01.996809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:13.911 [2024-09-28 23:40:01.996818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.540 ms 00:19:13.911 [2024-09-28 23:40:01.996826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.911 [2024-09-28 23:40:02.051854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.911 [2024-09-28 23:40:02.051908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:13.911 [2024-09-28 23:40:02.051920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 55.010 ms 00:19:13.911 [2024-09-28 23:40:02.051928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.911 [2024-09-28 23:40:02.062402] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:19:13.911 [2024-09-28 23:40:02.064909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.911 [2024-09-28 23:40:02.064939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:13.911 [2024-09-28 23:40:02.064951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.925 ms 00:19:13.911 [2024-09-28 23:40:02.064963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.911 [2024-09-28 23:40:02.065056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.911 [2024-09-28 23:40:02.065067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:13.911 [2024-09-28 23:40:02.065076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:13.911 [2024-09-28 23:40:02.065083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.911 [2024-09-28 23:40:02.066411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.911 [2024-09-28 23:40:02.066442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:13.911 [2024-09-28 23:40:02.066452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.292 ms 00:19:13.911 [2024-09-28 23:40:02.066459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.911 [2024-09-28 23:40:02.066488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.911 [2024-09-28 23:40:02.066496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:13.911 [2024-09-28 23:40:02.066504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:13.911 [2024-09-28 23:40:02.066536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.911 [2024-09-28 23:40:02.066603] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:13.911 [2024-09-28 23:40:02.066614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.911 [2024-09-28 23:40:02.066621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:13.911 [2024-09-28 23:40:02.066632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:19:13.911 [2024-09-28 23:40:02.066639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.170 [2024-09-28 23:40:02.089765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.170 [2024-09-28 23:40:02.089901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:14.170 [2024-09-28 23:40:02.089918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.106 ms 00:19:14.170 [2024-09-28 23:40:02.089926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.170 [2024-09-28 23:40:02.089997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.170 [2024-09-28 23:40:02.090008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:14.170 [2024-09-28 23:40:02.090016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:19:14.170 [2024-09-28 23:40:02.090023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:19:14.170 [2024-09-28 23:40:02.090926] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 256.385 ms, result 0 00:19:35.627 Copying: 1024/1024 [MB] (average 48 MBps)[2024-09-28 23:40:23.763102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.627 [2024-09-28 23:40:23.763166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:35.627 [2024-09-28 23:40:23.763186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:35.627 [2024-09-28 23:40:23.763199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.627 [2024-09-28 23:40:23.763229] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:35.627 [2024-09-28 23:40:23.767178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.627 [2024-09-28 23:40:23.767220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:35.627 [2024-09-28 23:40:23.767235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.927 ms 00:19:35.627 [2024-09-28 23:40:23.767252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.627 [2024-09-28 23:40:23.767603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.627 [2024-09-28 23:40:23.767626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:35.627 [2024-09-28 23:40:23.767640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.321 ms 00:19:35.627 [2024-09-28 23:40:23.767652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.627 [2024-09-28 23:40:23.773905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.627 [2024-09-28 23:40:23.773944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:35.627 [2024-09-28 23:40:23.773958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.232 ms 00:19:35.627 [2024-09-28 23:40:23.773970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.627 [2024-09-28 23:40:23.780700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.627 [2024-09-28 23:40:23.780727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:35.627 [2024-09-28 23:40:23.780737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.684 ms 00:19:35.627 [2024-09-28 23:40:23.780745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.887 [2024-09-28 23:40:23.803989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.887 [2024-09-28 23:40:23.804132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:35.887 
[2024-09-28 23:40:23.804150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.205 ms 00:19:35.887 [2024-09-28 23:40:23.804158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.887 [2024-09-28 23:40:23.817832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.887 [2024-09-28 23:40:23.817863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:35.887 [2024-09-28 23:40:23.817875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.644 ms 00:19:35.887 [2024-09-28 23:40:23.817884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.887 [2024-09-28 23:40:23.874620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.887 [2024-09-28 23:40:23.874783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:35.887 [2024-09-28 23:40:23.874810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 56.697 ms 00:19:35.887 [2024-09-28 23:40:23.874819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.887 [2024-09-28 23:40:23.898399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.887 [2024-09-28 23:40:23.898557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:35.887 [2024-09-28 23:40:23.898575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.561 ms 00:19:35.887 [2024-09-28 23:40:23.898583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.887 [2024-09-28 23:40:23.921366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.887 [2024-09-28 23:40:23.921397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:35.887 [2024-09-28 23:40:23.921407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.747 ms 00:19:35.888 [2024-09-28 23:40:23.921415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.888 [2024-09-28 23:40:23.943164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.888 [2024-09-28 23:40:23.943193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:35.888 [2024-09-28 23:40:23.943202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.719 ms 00:19:35.888 [2024-09-28 23:40:23.943210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.888 [2024-09-28 23:40:23.965490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.888 [2024-09-28 23:40:23.965620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:35.888 [2024-09-28 23:40:23.965635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.228 ms 00:19:35.888 [2024-09-28 23:40:23.965643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.888 [2024-09-28 23:40:23.965669] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:35.888 [2024-09-28 23:40:23.965682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:19:35.888 [2024-09-28 23:40:23.965692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:35.888 [2024-09-28 23:40:23.965700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:35.888 [2024-09-28 23:40:23.965708] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:35.888 [2024-09-28 23:40:23.965715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:35.888 [2024-09-28 23:40:23.965723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:35.888 [2024-09-28 23:40:23.965730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:35.888 [2024-09-28 23:40:23.965738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:35.888 [2024-09-28 23:40:23.965746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:35.888 [2024-09-28 23:40:23.965753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:35.888 [2024-09-28 23:40:23.965760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:35.888 [2024-09-28 23:40:23.965768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:35.888 [2024-09-28 23:40:23.965775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:35.888 [2024-09-28 23:40:23.965782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:35.888 [2024-09-28 23:40:23.965790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:35.888 [2024-09-28 23:40:23.965797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:35.888 [2024-09-28 23:40:23.965805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:35.888 [2024-09-28 23:40:23.965813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:35.888 [2024-09-28 23:40:23.965820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:35.888 [2024-09-28 23:40:23.965827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:35.888 [2024-09-28 23:40:23.965835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:35.888 [2024-09-28 23:40:23.965842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:35.888 [2024-09-28 23:40:23.965849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:35.888 [2024-09-28 23:40:23.965856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:35.888 [2024-09-28 23:40:23.965863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:35.888 [2024-09-28 23:40:23.965870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:35.888 [2024-09-28 23:40:23.965877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:35.888 [2024-09-28 23:40:23.965886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:35.888 [2024-09-28 
23:40:23.965893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:35.888 [2024-09-28 23:40:23.965900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:35.888 [2024-09-28 23:40:23.965908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:35.888 [2024-09-28 23:40:23.965915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:35.888 [2024-09-28 23:40:23.965924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:35.888 [2024-09-28 23:40:23.965932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:35.888 [2024-09-28 23:40:23.965939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:35.888 [2024-09-28 23:40:23.965946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:35.888 [2024-09-28 23:40:23.965953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:35.888 [2024-09-28 23:40:23.965960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:35.888 [2024-09-28 23:40:23.965968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:35.888 [2024-09-28 23:40:23.965975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:35.888 [2024-09-28 23:40:23.965982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:35.888 [2024-09-28 23:40:23.965990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:35.888 [2024-09-28 23:40:23.965997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:35.888 [2024-09-28 23:40:23.966004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:35.888 [2024-09-28 23:40:23.966011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:35.888 [2024-09-28 23:40:23.966018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:35.888 [2024-09-28 23:40:23.966025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:35.888 [2024-09-28 23:40:23.966032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:35.888 [2024-09-28 23:40:23.966039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:35.888 [2024-09-28 23:40:23.966047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:35.888 [2024-09-28 23:40:23.966054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:35.888 [2024-09-28 23:40:23.966061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:35.888 [2024-09-28 23:40:23.966068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 
00:19:35.888 [2024-09-28 23:40:23.966075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:35.888 [2024-09-28 23:40:23.966082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:35.888 [2024-09-28 23:40:23.966090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:35.888 [2024-09-28 23:40:23.966098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:35.888 [2024-09-28 23:40:23.966105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:35.888 [2024-09-28 23:40:23.966112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:35.888 [2024-09-28 23:40:23.966120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:35.888 [2024-09-28 23:40:23.966127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:35.888 [2024-09-28 23:40:23.966134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:35.888 [2024-09-28 23:40:23.966141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:35.888 [2024-09-28 23:40:23.966148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:35.888 [2024-09-28 23:40:23.966157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:35.888 [2024-09-28 23:40:23.966165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:35.888 [2024-09-28 23:40:23.966172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:35.888 [2024-09-28 23:40:23.966179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:35.888 [2024-09-28 23:40:23.966186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:35.888 [2024-09-28 23:40:23.966193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:35.888 [2024-09-28 23:40:23.966200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:35.888 [2024-09-28 23:40:23.966208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:35.889 [2024-09-28 23:40:23.966215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:35.889 [2024-09-28 23:40:23.966222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:35.889 [2024-09-28 23:40:23.966229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:35.889 [2024-09-28 23:40:23.966236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:35.889 [2024-09-28 23:40:23.966243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:35.889 [2024-09-28 23:40:23.966250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 
wr_cnt: 0 state: free 00:19:35.889 [2024-09-28 23:40:23.966257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:35.889 [2024-09-28 23:40:23.966265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:35.889 [2024-09-28 23:40:23.966272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:35.889 [2024-09-28 23:40:23.966279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:35.889 [2024-09-28 23:40:23.966286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:35.889 [2024-09-28 23:40:23.966294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:35.889 [2024-09-28 23:40:23.966301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:35.889 [2024-09-28 23:40:23.966308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:35.889 [2024-09-28 23:40:23.966315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:35.889 [2024-09-28 23:40:23.966322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:35.889 [2024-09-28 23:40:23.966330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:35.889 [2024-09-28 23:40:23.966337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:35.889 [2024-09-28 23:40:23.966344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:35.889 [2024-09-28 23:40:23.966351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:35.889 [2024-09-28 23:40:23.966358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:35.889 [2024-09-28 23:40:23.966365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:35.889 [2024-09-28 23:40:23.966372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:35.889 [2024-09-28 23:40:23.966379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:35.889 [2024-09-28 23:40:23.966387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:35.889 [2024-09-28 23:40:23.966395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:35.889 [2024-09-28 23:40:23.966402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:35.889 [2024-09-28 23:40:23.966409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:35.889 [2024-09-28 23:40:23.966424] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:35.889 [2024-09-28 23:40:23.966432] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a7631990-80f8-4fde-90d3-f206050c63ad 00:19:35.889 [2024-09-28 23:40:23.966444] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 
00:19:35.889 [2024-09-28 23:40:23.966452] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 10176 00:19:35.889 [2024-09-28 23:40:23.966459] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 9216 00:19:35.889 [2024-09-28 23:40:23.966466] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.1042 00:19:35.889 [2024-09-28 23:40:23.966473] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:35.889 [2024-09-28 23:40:23.966481] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:35.889 [2024-09-28 23:40:23.966488] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:35.889 [2024-09-28 23:40:23.966494] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:35.889 [2024-09-28 23:40:23.966500] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:35.889 [2024-09-28 23:40:23.966517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.889 [2024-09-28 23:40:23.966525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:35.889 [2024-09-28 23:40:23.966539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.849 ms 00:19:35.889 [2024-09-28 23:40:23.966546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.889 [2024-09-28 23:40:23.978921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.889 [2024-09-28 23:40:23.978952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:35.889 [2024-09-28 23:40:23.978962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.343 ms 00:19:35.889 [2024-09-28 23:40:23.978969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.889 [2024-09-28 23:40:23.979299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.889 [2024-09-28 23:40:23.979317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:35.889 [2024-09-28 23:40:23.979327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.312 ms 00:19:35.889 [2024-09-28 23:40:23.979335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.889 [2024-09-28 23:40:24.006879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:35.889 [2024-09-28 23:40:24.006912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:35.889 [2024-09-28 23:40:24.006922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:35.889 [2024-09-28 23:40:24.006930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.889 [2024-09-28 23:40:24.006984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:35.889 [2024-09-28 23:40:24.006992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:35.889 [2024-09-28 23:40:24.007000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:35.889 [2024-09-28 23:40:24.007011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.889 [2024-09-28 23:40:24.007058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:35.889 [2024-09-28 23:40:24.007067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:35.889 [2024-09-28 23:40:24.007075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:35.889 [2024-09-28 23:40:24.007082] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.889 [2024-09-28 23:40:24.007096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:35.889 [2024-09-28 23:40:24.007104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:35.889 [2024-09-28 23:40:24.007111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:35.889 [2024-09-28 23:40:24.007118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.148 [2024-09-28 23:40:24.082674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:36.148 [2024-09-28 23:40:24.082729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:36.148 [2024-09-28 23:40:24.082745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:36.148 [2024-09-28 23:40:24.082758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.148 [2024-09-28 23:40:24.144207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:36.148 [2024-09-28 23:40:24.144250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:36.148 [2024-09-28 23:40:24.144261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:36.148 [2024-09-28 23:40:24.144272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.148 [2024-09-28 23:40:24.144337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:36.148 [2024-09-28 23:40:24.144346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:36.148 [2024-09-28 23:40:24.144354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:36.148 [2024-09-28 23:40:24.144361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.148 [2024-09-28 23:40:24.144393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:36.148 [2024-09-28 23:40:24.144401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:36.148 [2024-09-28 23:40:24.144408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:36.148 [2024-09-28 23:40:24.144416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.148 [2024-09-28 23:40:24.144497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:36.148 [2024-09-28 23:40:24.144506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:36.148 [2024-09-28 23:40:24.144537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:36.148 [2024-09-28 23:40:24.144544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.148 [2024-09-28 23:40:24.144570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:36.148 [2024-09-28 23:40:24.144578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:36.148 [2024-09-28 23:40:24.144586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:36.148 [2024-09-28 23:40:24.144593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.148 [2024-09-28 23:40:24.144633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:36.148 [2024-09-28 23:40:24.144642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:36.148 [2024-09-28 23:40:24.144649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:19:36.148 [2024-09-28 23:40:24.144656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.148 [2024-09-28 23:40:24.144694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:36.148 [2024-09-28 23:40:24.144704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:36.148 [2024-09-28 23:40:24.144711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:36.148 [2024-09-28 23:40:24.144719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.149 [2024-09-28 23:40:24.144824] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 381.707 ms, result 0 00:19:37.084 00:19:37.084 00:19:37.084 23:40:24 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:19:38.987 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:19:38.987 23:40:26 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:19:38.987 23:40:26 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:19:38.987 23:40:26 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:19:38.987 23:40:27 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:19:38.987 23:40:27 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:38.987 Process with pid 74726 is not found 00:19:38.987 Remove shared memory files 00:19:38.987 23:40:27 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 74726 00:19:38.987 23:40:27 ftl.ftl_restore -- common/autotest_common.sh@950 -- # '[' -z 74726 ']' 00:19:38.987 23:40:27 ftl.ftl_restore -- common/autotest_common.sh@954 -- # kill -0 74726 00:19:38.987 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (74726) - No such process 00:19:38.987 23:40:27 ftl.ftl_restore -- common/autotest_common.sh@977 -- # echo 'Process with pid 74726 is not found' 00:19:38.987 23:40:27 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:19:38.987 23:40:27 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:19:38.987 23:40:27 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:19:38.987 23:40:27 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:19:38.987 23:40:27 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:19:38.987 23:40:27 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:19:38.987 23:40:27 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:19:38.987 ************************************ 00:19:38.987 END TEST ftl_restore 00:19:38.987 ************************************ 00:19:38.987 00:19:38.987 real 2m10.117s 00:19:38.987 user 1m59.793s 00:19:38.987 sys 0m11.178s 00:19:38.987 23:40:27 ftl.ftl_restore -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:38.987 23:40:27 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:19:38.987 23:40:27 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:19:38.987 23:40:27 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:19:38.987 23:40:27 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:38.987 23:40:27 ftl -- common/autotest_common.sh@10 -- # set +x 00:19:38.987 ************************************ 00:19:38.987 START TEST ftl_dirty_shutdown 
00:19:38.987 ************************************ 00:19:38.987 23:40:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:19:39.247 * Looking for test storage... 00:19:39.247 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:39.247 23:40:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:19:39.247 23:40:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:19:39.247 23:40:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1681 -- # lcov --version 00:19:39.247 23:40:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:19:39.247 23:40:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:39.247 23:40:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:39.247 23:40:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:39.247 23:40:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:19:39.247 23:40:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:19:39.247 23:40:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:19:39.247 23:40:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:19:39.247 23:40:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:19:39.247 23:40:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:19:39.247 23:40:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:19:39.247 23:40:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:39.247 23:40:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:19:39.247 23:40:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:19:39.247 23:40:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:39.247 23:40:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:39.247 23:40:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:19:39.247 23:40:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:19:39.247 23:40:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:39.247 23:40:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:19:39.247 23:40:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:19:39.247 23:40:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:19:39.247 23:40:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:19:39.247 23:40:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:39.247 23:40:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:19:39.247 23:40:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:19:39.247 23:40:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:39.247 23:40:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:39.247 23:40:27 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:19:39.247 23:40:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:39.247 23:40:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:19:39.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:39.247 --rc genhtml_branch_coverage=1 00:19:39.247 --rc genhtml_function_coverage=1 00:19:39.247 --rc genhtml_legend=1 00:19:39.247 --rc geninfo_all_blocks=1 00:19:39.247 --rc geninfo_unexecuted_blocks=1 00:19:39.247 00:19:39.247 ' 00:19:39.247 23:40:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:19:39.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:39.247 --rc genhtml_branch_coverage=1 00:19:39.247 --rc genhtml_function_coverage=1 00:19:39.247 --rc genhtml_legend=1 00:19:39.247 --rc geninfo_all_blocks=1 00:19:39.247 --rc geninfo_unexecuted_blocks=1 00:19:39.247 00:19:39.247 ' 00:19:39.247 23:40:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:19:39.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:39.247 --rc genhtml_branch_coverage=1 00:19:39.247 --rc genhtml_function_coverage=1 00:19:39.247 --rc genhtml_legend=1 00:19:39.247 --rc geninfo_all_blocks=1 00:19:39.247 --rc geninfo_unexecuted_blocks=1 00:19:39.247 00:19:39.247 ' 00:19:39.247 23:40:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:19:39.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:39.247 --rc genhtml_branch_coverage=1 00:19:39.247 --rc genhtml_function_coverage=1 00:19:39.247 --rc genhtml_legend=1 00:19:39.247 --rc geninfo_all_blocks=1 00:19:39.247 --rc geninfo_unexecuted_blocks=1 00:19:39.247 00:19:39.247 ' 00:19:39.247 23:40:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:39.247 23:40:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:19:39.247 23:40:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:39.247 23:40:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:39.247 23:40:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:19:39.247 23:40:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:39.247 23:40:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:39.247 23:40:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:39.247 23:40:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:39.247 23:40:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:39.247 23:40:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:39.247 23:40:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:39.247 23:40:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:39.247 23:40:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:39.247 23:40:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:39.248 23:40:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:39.248 23:40:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:39.248 23:40:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:39.248 23:40:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:39.248 23:40:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:39.248 23:40:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:39.248 23:40:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:39.248 23:40:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:39.248 23:40:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:39.248 23:40:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:39.248 23:40:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:19:39.248 23:40:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:19:39.248 23:40:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:39.248 23:40:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:39.248 23:40:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:39.248 23:40:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:39.248 23:40:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:19:39.248 23:40:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:19:39.248 23:40:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:19:39.248 23:40:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:19:39.248 23:40:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:19:39.248 23:40:27 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:19:39.248 23:40:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:19:39.248 23:40:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:19:39.248 23:40:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:19:39.248 23:40:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:19:39.248 23:40:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:19:39.248 23:40:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=76149 00:19:39.248 23:40:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 76149 00:19:39.248 23:40:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@831 -- # '[' -z 76149 ']' 00:19:39.248 23:40:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:39.248 23:40:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:19:39.248 23:40:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:39.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:39.248 23:40:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:39.248 23:40:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:39.248 23:40:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:19:39.248 [2024-09-28 23:40:27.340868] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:19:39.248 [2024-09-28 23:40:27.341161] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76149 ] 00:19:39.507 [2024-09-28 23:40:27.488370] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:39.507 [2024-09-28 23:40:27.664614] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:40.443 23:40:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:40.443 23:40:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # return 0 00:19:40.443 23:40:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:19:40.443 23:40:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:19:40.443 23:40:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:19:40.443 23:40:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:19:40.443 23:40:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:19:40.443 23:40:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:19:40.443 23:40:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:19:40.443 23:40:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:19:40.443 23:40:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:19:40.443 23:40:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:19:40.443 23:40:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:19:40.443 23:40:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:19:40.443 23:40:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:19:40.443 23:40:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:19:40.701 23:40:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:19:40.701 { 00:19:40.701 "name": "nvme0n1", 00:19:40.701 "aliases": [ 00:19:40.701 "2bf7b5bb-11b7-4012-9b54-49c4f21e0973" 00:19:40.701 ], 00:19:40.701 "product_name": "NVMe disk", 00:19:40.701 "block_size": 4096, 00:19:40.701 "num_blocks": 1310720, 00:19:40.701 "uuid": "2bf7b5bb-11b7-4012-9b54-49c4f21e0973", 00:19:40.701 "numa_id": -1, 00:19:40.701 "assigned_rate_limits": { 00:19:40.701 "rw_ios_per_sec": 0, 00:19:40.701 "rw_mbytes_per_sec": 0, 00:19:40.701 "r_mbytes_per_sec": 0, 00:19:40.701 "w_mbytes_per_sec": 0 00:19:40.701 }, 00:19:40.701 "claimed": true, 00:19:40.701 "claim_type": "read_many_write_one", 00:19:40.701 "zoned": false, 00:19:40.701 "supported_io_types": { 00:19:40.701 "read": true, 00:19:40.701 "write": true, 00:19:40.701 "unmap": true, 00:19:40.701 "flush": true, 00:19:40.701 "reset": true, 00:19:40.701 "nvme_admin": true, 00:19:40.701 "nvme_io": true, 00:19:40.701 "nvme_io_md": false, 00:19:40.701 "write_zeroes": true, 00:19:40.701 "zcopy": false, 00:19:40.701 "get_zone_info": false, 00:19:40.701 "zone_management": false, 00:19:40.701 "zone_append": false, 00:19:40.701 "compare": true, 00:19:40.701 "compare_and_write": false, 00:19:40.701 "abort": true, 00:19:40.701 "seek_hole": false, 00:19:40.701 "seek_data": false, 00:19:40.701 
"copy": true, 00:19:40.701 "nvme_iov_md": false 00:19:40.701 }, 00:19:40.701 "driver_specific": { 00:19:40.701 "nvme": [ 00:19:40.701 { 00:19:40.701 "pci_address": "0000:00:11.0", 00:19:40.701 "trid": { 00:19:40.701 "trtype": "PCIe", 00:19:40.701 "traddr": "0000:00:11.0" 00:19:40.701 }, 00:19:40.701 "ctrlr_data": { 00:19:40.701 "cntlid": 0, 00:19:40.701 "vendor_id": "0x1b36", 00:19:40.701 "model_number": "QEMU NVMe Ctrl", 00:19:40.701 "serial_number": "12341", 00:19:40.701 "firmware_revision": "8.0.0", 00:19:40.701 "subnqn": "nqn.2019-08.org.qemu:12341", 00:19:40.701 "oacs": { 00:19:40.701 "security": 0, 00:19:40.701 "format": 1, 00:19:40.701 "firmware": 0, 00:19:40.701 "ns_manage": 1 00:19:40.701 }, 00:19:40.701 "multi_ctrlr": false, 00:19:40.701 "ana_reporting": false 00:19:40.701 }, 00:19:40.701 "vs": { 00:19:40.701 "nvme_version": "1.4" 00:19:40.701 }, 00:19:40.701 "ns_data": { 00:19:40.701 "id": 1, 00:19:40.701 "can_share": false 00:19:40.701 } 00:19:40.701 } 00:19:40.701 ], 00:19:40.701 "mp_policy": "active_passive" 00:19:40.701 } 00:19:40.701 } 00:19:40.701 ]' 00:19:40.701 23:40:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:19:40.701 23:40:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:19:40.701 23:40:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:19:40.701 23:40:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=1310720 00:19:40.701 23:40:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:19:40.701 23:40:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 5120 00:19:40.701 23:40:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:19:40.701 23:40:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:19:40.701 23:40:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:19:40.701 23:40:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:19:40.701 23:40:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:40.960 23:40:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=aab713cf-273a-4575-96ad-82535596ed16 00:19:40.960 23:40:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:19:40.960 23:40:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u aab713cf-273a-4575-96ad-82535596ed16 00:19:41.218 23:40:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:19:41.218 23:40:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=b036f953-b197-47fd-ba88-1f61e199ac27 00:19:41.218 23:40:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u b036f953-b197-47fd-ba88-1f61e199ac27 00:19:41.477 23:40:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=ffbf9a46-80e7-451b-82d0-f3654e9a9588 00:19:41.477 23:40:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:19:41.477 23:40:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 ffbf9a46-80e7-451b-82d0-f3654e9a9588 00:19:41.477 23:40:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:19:41.477 23:40:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:19:41.477 23:40:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=ffbf9a46-80e7-451b-82d0-f3654e9a9588 00:19:41.477 23:40:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:19:41.477 23:40:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size ffbf9a46-80e7-451b-82d0-f3654e9a9588 00:19:41.477 23:40:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=ffbf9a46-80e7-451b-82d0-f3654e9a9588 00:19:41.477 23:40:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:19:41.477 23:40:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:19:41.477 23:40:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:19:41.477 23:40:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ffbf9a46-80e7-451b-82d0-f3654e9a9588 00:19:41.736 23:40:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:19:41.736 { 00:19:41.736 "name": "ffbf9a46-80e7-451b-82d0-f3654e9a9588", 00:19:41.736 "aliases": [ 00:19:41.736 "lvs/nvme0n1p0" 00:19:41.736 ], 00:19:41.736 "product_name": "Logical Volume", 00:19:41.736 "block_size": 4096, 00:19:41.736 "num_blocks": 26476544, 00:19:41.736 "uuid": "ffbf9a46-80e7-451b-82d0-f3654e9a9588", 00:19:41.736 "assigned_rate_limits": { 00:19:41.736 "rw_ios_per_sec": 0, 00:19:41.736 "rw_mbytes_per_sec": 0, 00:19:41.736 "r_mbytes_per_sec": 0, 00:19:41.736 "w_mbytes_per_sec": 0 00:19:41.736 }, 00:19:41.736 "claimed": false, 00:19:41.736 "zoned": false, 00:19:41.736 "supported_io_types": { 00:19:41.736 "read": true, 00:19:41.736 "write": true, 00:19:41.736 "unmap": true, 00:19:41.736 "flush": false, 00:19:41.736 "reset": true, 00:19:41.736 "nvme_admin": false, 00:19:41.736 "nvme_io": false, 00:19:41.736 "nvme_io_md": false, 00:19:41.736 "write_zeroes": true, 00:19:41.736 "zcopy": false, 00:19:41.736 "get_zone_info": false, 00:19:41.736 "zone_management": false, 00:19:41.736 "zone_append": false, 00:19:41.736 "compare": false, 00:19:41.736 "compare_and_write": false, 00:19:41.736 "abort": false, 00:19:41.736 "seek_hole": true, 00:19:41.736 "seek_data": true, 00:19:41.736 "copy": false, 00:19:41.736 "nvme_iov_md": false 00:19:41.736 }, 00:19:41.736 "driver_specific": { 00:19:41.736 "lvol": { 00:19:41.736 "lvol_store_uuid": "b036f953-b197-47fd-ba88-1f61e199ac27", 00:19:41.736 "base_bdev": "nvme0n1", 00:19:41.736 "thin_provision": true, 00:19:41.736 "num_allocated_clusters": 0, 00:19:41.736 "snapshot": false, 00:19:41.736 "clone": false, 00:19:41.736 "esnap_clone": false 00:19:41.736 } 00:19:41.736 } 00:19:41.736 } 00:19:41.736 ]' 00:19:41.736 23:40:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:19:41.736 23:40:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:19:41.736 23:40:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:19:41.736 23:40:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:19:41.736 23:40:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:19:41.736 23:40:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:19:41.736 23:40:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:19:41.736 23:40:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:19:41.736 23:40:29 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:19:41.994 23:40:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:19:41.994 23:40:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:19:41.994 23:40:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size ffbf9a46-80e7-451b-82d0-f3654e9a9588 00:19:41.994 23:40:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=ffbf9a46-80e7-451b-82d0-f3654e9a9588 00:19:41.994 23:40:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:19:41.994 23:40:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:19:41.994 23:40:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:19:41.994 23:40:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ffbf9a46-80e7-451b-82d0-f3654e9a9588 00:19:42.252 23:40:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:19:42.252 { 00:19:42.252 "name": "ffbf9a46-80e7-451b-82d0-f3654e9a9588", 00:19:42.252 "aliases": [ 00:19:42.252 "lvs/nvme0n1p0" 00:19:42.252 ], 00:19:42.252 "product_name": "Logical Volume", 00:19:42.252 "block_size": 4096, 00:19:42.252 "num_blocks": 26476544, 00:19:42.252 "uuid": "ffbf9a46-80e7-451b-82d0-f3654e9a9588", 00:19:42.252 "assigned_rate_limits": { 00:19:42.252 "rw_ios_per_sec": 0, 00:19:42.252 "rw_mbytes_per_sec": 0, 00:19:42.252 "r_mbytes_per_sec": 0, 00:19:42.252 "w_mbytes_per_sec": 0 00:19:42.252 }, 00:19:42.252 "claimed": false, 00:19:42.252 "zoned": false, 00:19:42.252 "supported_io_types": { 00:19:42.252 "read": true, 00:19:42.252 "write": true, 00:19:42.252 "unmap": true, 00:19:42.252 "flush": false, 00:19:42.252 "reset": true, 00:19:42.252 "nvme_admin": false, 00:19:42.252 "nvme_io": false, 00:19:42.252 "nvme_io_md": false, 00:19:42.252 "write_zeroes": true, 00:19:42.252 "zcopy": false, 00:19:42.252 "get_zone_info": false, 00:19:42.252 "zone_management": false, 00:19:42.252 "zone_append": false, 00:19:42.252 "compare": false, 00:19:42.252 "compare_and_write": false, 00:19:42.252 "abort": false, 00:19:42.252 "seek_hole": true, 00:19:42.252 "seek_data": true, 00:19:42.252 "copy": false, 00:19:42.252 "nvme_iov_md": false 00:19:42.252 }, 00:19:42.252 "driver_specific": { 00:19:42.252 "lvol": { 00:19:42.252 "lvol_store_uuid": "b036f953-b197-47fd-ba88-1f61e199ac27", 00:19:42.252 "base_bdev": "nvme0n1", 00:19:42.252 "thin_provision": true, 00:19:42.252 "num_allocated_clusters": 0, 00:19:42.252 "snapshot": false, 00:19:42.252 "clone": false, 00:19:42.252 "esnap_clone": false 00:19:42.252 } 00:19:42.252 } 00:19:42.252 } 00:19:42.252 ]' 00:19:42.252 23:40:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:19:42.252 23:40:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:19:42.252 23:40:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:19:42.252 23:40:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:19:42.252 23:40:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:19:42.252 23:40:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:19:42.252 23:40:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:19:42.252 23:40:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:19:42.510 23:40:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:19:42.510 23:40:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size ffbf9a46-80e7-451b-82d0-f3654e9a9588 00:19:42.510 23:40:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=ffbf9a46-80e7-451b-82d0-f3654e9a9588 00:19:42.510 23:40:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:19:42.510 23:40:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:19:42.510 23:40:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:19:42.510 23:40:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ffbf9a46-80e7-451b-82d0-f3654e9a9588 00:19:42.769 23:40:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:19:42.769 { 00:19:42.769 "name": "ffbf9a46-80e7-451b-82d0-f3654e9a9588", 00:19:42.769 "aliases": [ 00:19:42.769 "lvs/nvme0n1p0" 00:19:42.769 ], 00:19:42.769 "product_name": "Logical Volume", 00:19:42.769 "block_size": 4096, 00:19:42.769 "num_blocks": 26476544, 00:19:42.769 "uuid": "ffbf9a46-80e7-451b-82d0-f3654e9a9588", 00:19:42.769 "assigned_rate_limits": { 00:19:42.769 "rw_ios_per_sec": 0, 00:19:42.769 "rw_mbytes_per_sec": 0, 00:19:42.769 "r_mbytes_per_sec": 0, 00:19:42.769 "w_mbytes_per_sec": 0 00:19:42.769 }, 00:19:42.769 "claimed": false, 00:19:42.769 "zoned": false, 00:19:42.769 "supported_io_types": { 00:19:42.769 "read": true, 00:19:42.769 "write": true, 00:19:42.769 "unmap": true, 00:19:42.769 "flush": false, 00:19:42.769 "reset": true, 00:19:42.769 "nvme_admin": false, 00:19:42.769 "nvme_io": false, 00:19:42.769 "nvme_io_md": false, 00:19:42.769 "write_zeroes": true, 00:19:42.769 "zcopy": false, 00:19:42.769 "get_zone_info": false, 00:19:42.769 "zone_management": false, 00:19:42.769 "zone_append": false, 00:19:42.769 "compare": false, 00:19:42.769 "compare_and_write": false, 00:19:42.769 "abort": false, 00:19:42.769 "seek_hole": true, 00:19:42.769 "seek_data": true, 00:19:42.769 "copy": false, 00:19:42.769 "nvme_iov_md": false 00:19:42.769 }, 00:19:42.769 "driver_specific": { 00:19:42.769 "lvol": { 00:19:42.769 "lvol_store_uuid": "b036f953-b197-47fd-ba88-1f61e199ac27", 00:19:42.769 "base_bdev": "nvme0n1", 00:19:42.769 "thin_provision": true, 00:19:42.769 "num_allocated_clusters": 0, 00:19:42.769 "snapshot": false, 00:19:42.769 "clone": false, 00:19:42.769 "esnap_clone": false 00:19:42.769 } 00:19:42.769 } 00:19:42.769 } 00:19:42.769 ]' 00:19:42.769 23:40:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:19:42.769 23:40:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:19:42.769 23:40:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:19:42.769 23:40:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:19:42.769 23:40:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:19:42.769 23:40:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:19:42.770 23:40:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:19:42.770 23:40:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d ffbf9a46-80e7-451b-82d0-f3654e9a9588 
--l2p_dram_limit 10' 00:19:42.770 23:40:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:19:42.770 23:40:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:19:42.770 23:40:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:19:42.770 23:40:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d ffbf9a46-80e7-451b-82d0-f3654e9a9588 --l2p_dram_limit 10 -c nvc0n1p0 00:19:42.770 [2024-09-28 23:40:30.935264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.770 [2024-09-28 23:40:30.935406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:42.770 [2024-09-28 23:40:30.935458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:42.770 [2024-09-28 23:40:30.935477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.770 [2024-09-28 23:40:30.935546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.770 [2024-09-28 23:40:30.935567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:42.770 [2024-09-28 23:40:30.935586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:19:42.770 [2024-09-28 23:40:30.935655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.770 [2024-09-28 23:40:30.935694] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:42.770 [2024-09-28 23:40:30.936297] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:42.770 [2024-09-28 23:40:30.936375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.029 [2024-09-28 23:40:30.936409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:43.029 [2024-09-28 23:40:30.936429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.689 ms 00:19:43.029 [2024-09-28 23:40:30.936446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.029 [2024-09-28 23:40:30.936575] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 19962993-5207-4b0b-af8a-a1f71d7fadb9 00:19:43.029 [2024-09-28 23:40:30.937530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.029 [2024-09-28 23:40:30.937615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:19:43.029 [2024-09-28 23:40:30.937659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:19:43.029 [2024-09-28 23:40:30.937678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.029 [2024-09-28 23:40:30.942351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.029 [2024-09-28 23:40:30.942441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:43.029 [2024-09-28 23:40:30.942483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.626 ms 00:19:43.029 [2024-09-28 23:40:30.942501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.029 [2024-09-28 23:40:30.942593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.029 [2024-09-28 23:40:30.942681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:43.029 [2024-09-28 23:40:30.942701] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:19:43.029 [2024-09-28 23:40:30.942722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.029 [2024-09-28 23:40:30.942774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.029 [2024-09-28 23:40:30.942831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:43.029 [2024-09-28 23:40:30.942849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:43.029 [2024-09-28 23:40:30.942865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.029 [2024-09-28 23:40:30.942891] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:43.029 [2024-09-28 23:40:30.945802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.029 [2024-09-28 23:40:30.945824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:43.029 [2024-09-28 23:40:30.945833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.913 ms 00:19:43.029 [2024-09-28 23:40:30.945839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.029 [2024-09-28 23:40:30.945866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.029 [2024-09-28 23:40:30.945872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:43.029 [2024-09-28 23:40:30.945880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:43.029 [2024-09-28 23:40:30.945887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.029 [2024-09-28 23:40:30.945901] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:19:43.029 [2024-09-28 23:40:30.946004] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:43.029 [2024-09-28 23:40:30.946016] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:43.029 [2024-09-28 23:40:30.946024] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:43.029 [2024-09-28 23:40:30.946034] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:43.029 [2024-09-28 23:40:30.946041] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:43.029 [2024-09-28 23:40:30.946048] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:43.029 [2024-09-28 23:40:30.946054] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:43.029 [2024-09-28 23:40:30.946061] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:43.029 [2024-09-28 23:40:30.946066] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:43.029 [2024-09-28 23:40:30.946073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.029 [2024-09-28 23:40:30.946083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:43.029 [2024-09-28 23:40:30.946091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.173 ms 00:19:43.029 [2024-09-28 23:40:30.946096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.029 [2024-09-28 23:40:30.946161] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.029 [2024-09-28 23:40:30.946170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:43.029 [2024-09-28 23:40:30.946177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:19:43.029 [2024-09-28 23:40:30.946182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.029 [2024-09-28 23:40:30.946256] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:43.029 [2024-09-28 23:40:30.946263] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:43.029 [2024-09-28 23:40:30.946271] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:43.029 [2024-09-28 23:40:30.946276] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:43.029 [2024-09-28 23:40:30.946283] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:43.029 [2024-09-28 23:40:30.946288] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:43.029 [2024-09-28 23:40:30.946295] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:43.029 [2024-09-28 23:40:30.946300] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:43.029 [2024-09-28 23:40:30.946307] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:43.029 [2024-09-28 23:40:30.946311] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:43.029 [2024-09-28 23:40:30.946317] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:43.029 [2024-09-28 23:40:30.946323] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:43.029 [2024-09-28 23:40:30.946329] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:43.029 [2024-09-28 23:40:30.946334] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:43.029 [2024-09-28 23:40:30.946342] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:43.029 [2024-09-28 23:40:30.946346] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:43.029 [2024-09-28 23:40:30.946354] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:43.029 [2024-09-28 23:40:30.946359] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:43.029 [2024-09-28 23:40:30.946365] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:43.029 [2024-09-28 23:40:30.946371] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:43.029 [2024-09-28 23:40:30.946378] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:43.029 [2024-09-28 23:40:30.946383] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:43.029 [2024-09-28 23:40:30.946390] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:43.029 [2024-09-28 23:40:30.946396] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:43.029 [2024-09-28 23:40:30.946402] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:43.029 [2024-09-28 23:40:30.946407] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:43.029 [2024-09-28 23:40:30.946413] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:43.029 [2024-09-28 23:40:30.946418] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:43.029 [2024-09-28 23:40:30.946424] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:43.029 [2024-09-28 23:40:30.946429] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:43.029 [2024-09-28 23:40:30.946435] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:43.029 [2024-09-28 23:40:30.946440] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:43.029 [2024-09-28 23:40:30.946447] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:43.029 [2024-09-28 23:40:30.946452] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:43.029 [2024-09-28 23:40:30.946458] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:43.029 [2024-09-28 23:40:30.946463] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:43.029 [2024-09-28 23:40:30.946470] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:43.029 [2024-09-28 23:40:30.946475] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:43.029 [2024-09-28 23:40:30.946481] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:19:43.029 [2024-09-28 23:40:30.946485] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:43.029 [2024-09-28 23:40:30.946491] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:43.029 [2024-09-28 23:40:30.946496] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:43.029 [2024-09-28 23:40:30.946502] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:43.029 [2024-09-28 23:40:30.946515] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:43.029 [2024-09-28 23:40:30.946523] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:43.029 [2024-09-28 23:40:30.946530] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:43.029 [2024-09-28 23:40:30.946537] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:43.029 [2024-09-28 23:40:30.946543] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:43.029 [2024-09-28 23:40:30.946552] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:43.029 [2024-09-28 23:40:30.946557] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:43.029 [2024-09-28 23:40:30.946563] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:43.029 [2024-09-28 23:40:30.946569] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:43.029 [2024-09-28 23:40:30.946576] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:43.029 [2024-09-28 23:40:30.946584] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:43.029 [2024-09-28 23:40:30.946592] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:43.029 [2024-09-28 23:40:30.946599] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:43.030 [2024-09-28 23:40:30.946605] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:43.030 [2024-09-28 23:40:30.946611] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:43.030 [2024-09-28 23:40:30.946617] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:43.030 [2024-09-28 23:40:30.946632] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:43.030 [2024-09-28 23:40:30.946639] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:43.030 [2024-09-28 23:40:30.946644] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:43.030 [2024-09-28 23:40:30.946651] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:43.030 [2024-09-28 23:40:30.946656] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:43.030 [2024-09-28 23:40:30.946664] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:43.030 [2024-09-28 23:40:30.946669] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:43.030 [2024-09-28 23:40:30.946676] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:43.030 [2024-09-28 23:40:30.946682] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:43.030 [2024-09-28 23:40:30.946689] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:43.030 [2024-09-28 23:40:30.946694] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:43.030 [2024-09-28 23:40:30.946702] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:43.030 [2024-09-28 23:40:30.946708] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:43.030 [2024-09-28 23:40:30.946716] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:43.030 [2024-09-28 23:40:30.946721] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:43.030 [2024-09-28 23:40:30.946728] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:43.030 [2024-09-28 23:40:30.946734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.030 [2024-09-28 23:40:30.946741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:43.030 [2024-09-28 23:40:30.946746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.531 ms 00:19:43.030 [2024-09-28 23:40:30.946753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.030 [2024-09-28 23:40:30.946796] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:19:43.030 [2024-09-28 23:40:30.946806] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:19:45.557 [2024-09-28 23:40:33.299799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.557 [2024-09-28 23:40:33.300033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:19:45.557 [2024-09-28 23:40:33.300054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2352.993 ms 00:19:45.557 [2024-09-28 23:40:33.300065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.557 [2024-09-28 23:40:33.325209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.557 [2024-09-28 23:40:33.325256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:45.557 [2024-09-28 23:40:33.325268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.937 ms 00:19:45.557 [2024-09-28 23:40:33.325278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.557 [2024-09-28 23:40:33.325407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.557 [2024-09-28 23:40:33.325420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:45.557 [2024-09-28 23:40:33.325428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:19:45.557 [2024-09-28 23:40:33.325441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.557 [2024-09-28 23:40:33.363313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.557 [2024-09-28 23:40:33.363363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:45.557 [2024-09-28 23:40:33.363379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.837 ms 00:19:45.557 [2024-09-28 23:40:33.363391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.557 [2024-09-28 23:40:33.363436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.557 [2024-09-28 23:40:33.363447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:45.557 [2024-09-28 23:40:33.363457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:45.557 [2024-09-28 23:40:33.363473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.557 [2024-09-28 23:40:33.363863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.557 [2024-09-28 23:40:33.363883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:45.557 [2024-09-28 23:40:33.363894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.307 ms 00:19:45.557 [2024-09-28 23:40:33.363905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.557 [2024-09-28 23:40:33.364023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.557 [2024-09-28 23:40:33.364034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:45.557 [2024-09-28 23:40:33.364042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:19:45.557 [2024-09-28 23:40:33.364054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.557 [2024-09-28 23:40:33.377707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.557 [2024-09-28 23:40:33.377739] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:45.557 [2024-09-28 23:40:33.377750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.634 ms 00:19:45.557 [2024-09-28 23:40:33.377759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.557 [2024-09-28 23:40:33.389004] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:19:45.557 [2024-09-28 23:40:33.391582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.557 [2024-09-28 23:40:33.391610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:45.557 [2024-09-28 23:40:33.391624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.749 ms 00:19:45.557 [2024-09-28 23:40:33.391632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.557 [2024-09-28 23:40:33.456240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.557 [2024-09-28 23:40:33.456282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:19:45.557 [2024-09-28 23:40:33.456299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.579 ms 00:19:45.557 [2024-09-28 23:40:33.456307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.557 [2024-09-28 23:40:33.456486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.557 [2024-09-28 23:40:33.456496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:45.557 [2024-09-28 23:40:33.456526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.140 ms 00:19:45.557 [2024-09-28 23:40:33.456534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.557 [2024-09-28 23:40:33.479513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.557 [2024-09-28 23:40:33.479548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:19:45.557 [2024-09-28 23:40:33.479560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.929 ms 00:19:45.557 [2024-09-28 23:40:33.479568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.557 [2024-09-28 23:40:33.501816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.557 [2024-09-28 23:40:33.501848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:19:45.557 [2024-09-28 23:40:33.501862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.211 ms 00:19:45.557 [2024-09-28 23:40:33.501870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.557 [2024-09-28 23:40:33.502432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.557 [2024-09-28 23:40:33.502447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:45.557 [2024-09-28 23:40:33.502457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.528 ms 00:19:45.557 [2024-09-28 23:40:33.502464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.557 [2024-09-28 23:40:33.571674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.557 [2024-09-28 23:40:33.571718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:19:45.557 [2024-09-28 23:40:33.571733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 69.173 ms 00:19:45.557 [2024-09-28 23:40:33.571743] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.557 [2024-09-28 23:40:33.595364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.557 [2024-09-28 23:40:33.595527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:19:45.557 [2024-09-28 23:40:33.595548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.561 ms 00:19:45.557 [2024-09-28 23:40:33.595556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.557 [2024-09-28 23:40:33.618207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.557 [2024-09-28 23:40:33.618338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:19:45.557 [2024-09-28 23:40:33.618357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.624 ms 00:19:45.557 [2024-09-28 23:40:33.618364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.557 [2024-09-28 23:40:33.641226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.557 [2024-09-28 23:40:33.641387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:45.557 [2024-09-28 23:40:33.641406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.836 ms 00:19:45.557 [2024-09-28 23:40:33.641414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.557 [2024-09-28 23:40:33.641441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.557 [2024-09-28 23:40:33.641449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:45.557 [2024-09-28 23:40:33.641464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:45.557 [2024-09-28 23:40:33.641471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.557 [2024-09-28 23:40:33.641564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.557 [2024-09-28 23:40:33.641575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:45.557 [2024-09-28 23:40:33.641584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:19:45.557 [2024-09-28 23:40:33.641592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.557 [2024-09-28 23:40:33.642388] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2706.718 ms, result 0 00:19:45.557 { 00:19:45.557 "name": "ftl0", 00:19:45.557 "uuid": "19962993-5207-4b0b-af8a-a1f71d7fadb9" 00:19:45.557 } 00:19:45.557 23:40:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:19:45.557 23:40:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:19:45.815 23:40:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:19:45.815 23:40:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:19:45.815 23:40:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:19:46.076 /dev/nbd0 00:19:46.076 23:40:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:19:46.076 23:40:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:19:46.076 23:40:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@869 -- # local i 00:19:46.076 23:40:34 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@871 -- # (( i = 1 )) 00:19:46.076 23:40:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:19:46.076 23:40:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:19:46.076 23:40:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # break 00:19:46.076 23:40:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:19:46.076 23:40:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:19:46.076 23:40:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:19:46.076 1+0 records in 00:19:46.076 1+0 records out 00:19:46.076 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000308592 s, 13.3 MB/s 00:19:46.076 23:40:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:19:46.076 23:40:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # size=4096 00:19:46.076 23:40:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:19:46.076 23:40:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:19:46.076 23:40:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # return 0 00:19:46.076 23:40:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:19:46.076 [2024-09-28 23:40:34.165729] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:19:46.076 [2024-09-28 23:40:34.165848] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76280 ] 00:19:46.342 [2024-09-28 23:40:34.315428] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:46.342 [2024-09-28 23:40:34.491426] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:19:51.558  Copying: 196/1024 [MB] (196 MBps) Copying: 437/1024 [MB] (241 MBps) Copying: 698/1024 [MB] (261 MBps) Copying: 949/1024 [MB] (250 MBps) Copying: 1024/1024 [MB] (average 238 MBps) 00:19:51.558 00:19:51.558 23:40:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:19:54.090 23:40:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:19:54.090 [2024-09-28 23:40:41.716834] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:19:54.090 [2024-09-28 23:40:41.717429] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76356 ] 00:19:54.090 [2024-09-28 23:40:41.860498] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:54.090 [2024-09-28 23:40:42.035390] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:20:28.624  Copying: 30/1024 [MB] (30 MBps) Copying: 59/1024 [MB] (29 MBps) Copying: 88/1024 [MB] (28 MBps) Copying: 116/1024 [MB] (28 MBps) Copying: 148/1024 [MB] (31 MBps) Copying: 179/1024 [MB] (31 MBps) Copying: 215/1024 [MB] (35 MBps) Copying: 246/1024 [MB] (31 MBps) Copying: 277/1024 [MB] (30 MBps) Copying: 311/1024 [MB] (34 MBps) Copying: 346/1024 [MB] (34 MBps) Copying: 375/1024 [MB] (28 MBps) Copying: 405/1024 [MB] (30 MBps) Copying: 436/1024 [MB] (30 MBps) Copying: 465/1024 [MB] (29 MBps) Copying: 494/1024 [MB] (28 MBps) Copying: 523/1024 [MB] (29 MBps) Copying: 552/1024 [MB] (28 MBps) Copying: 582/1024 [MB] (29 MBps) Copying: 613/1024 [MB] (30 MBps) Copying: 647/1024 [MB] (34 MBps) Copying: 677/1024 [MB] (30 MBps) Copying: 707/1024 [MB] (29 MBps) Copying: 738/1024 [MB] (31 MBps) Copying: 768/1024 [MB] (30 MBps) Copying: 800/1024 [MB] (31 MBps) Copying: 829/1024 [MB] (29 MBps) Copying: 858/1024 [MB] (28 MBps) Copying: 888/1024 [MB] (29 MBps) Copying: 918/1024 [MB] (30 MBps) Copying: 947/1024 [MB] (29 MBps) Copying: 977/1024 [MB] (29 MBps) Copying: 1006/1024 [MB] (28 MBps) Copying: 1024/1024 [MB] (average 30 MBps) 00:20:28.624 00:20:28.624 23:41:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:20:28.624 23:41:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:20:28.624 23:41:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:20:28.883 [2024-09-28 23:41:16.826275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.883 [2024-09-28 23:41:16.826320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:28.883 [2024-09-28 23:41:16.826331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:28.883 [2024-09-28 23:41:16.826339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.883 [2024-09-28 23:41:16.826358] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:28.883 [2024-09-28 23:41:16.828462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.883 [2024-09-28 23:41:16.828487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:28.883 [2024-09-28 23:41:16.828497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.089 ms 00:20:28.883 [2024-09-28 23:41:16.828503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.884 [2024-09-28 23:41:16.830153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.884 [2024-09-28 23:41:16.830182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:28.884 [2024-09-28 23:41:16.830191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.620 ms 00:20:28.884 [2024-09-28 23:41:16.830197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:20:28.884 [2024-09-28 23:41:16.843362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.884 [2024-09-28 23:41:16.843391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:28.884 [2024-09-28 23:41:16.843401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.149 ms 00:20:28.884 [2024-09-28 23:41:16.843407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.884 [2024-09-28 23:41:16.848306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.884 [2024-09-28 23:41:16.848425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:28.884 [2024-09-28 23:41:16.848443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.871 ms 00:20:28.884 [2024-09-28 23:41:16.848449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.884 [2024-09-28 23:41:16.866979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.884 [2024-09-28 23:41:16.867080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:28.884 [2024-09-28 23:41:16.867133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.469 ms 00:20:28.884 [2024-09-28 23:41:16.867152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.884 [2024-09-28 23:41:16.879537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.884 [2024-09-28 23:41:16.879637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:28.884 [2024-09-28 23:41:16.879688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.345 ms 00:20:28.884 [2024-09-28 23:41:16.879706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.884 [2024-09-28 23:41:16.879820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.884 [2024-09-28 23:41:16.879840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:28.884 [2024-09-28 23:41:16.879860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:20:28.884 [2024-09-28 23:41:16.879900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.884 [2024-09-28 23:41:16.897610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.884 [2024-09-28 23:41:16.897698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:28.884 [2024-09-28 23:41:16.897738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.683 ms 00:20:28.884 [2024-09-28 23:41:16.897754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.884 [2024-09-28 23:41:16.915022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.884 [2024-09-28 23:41:16.915101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:28.884 [2024-09-28 23:41:16.915138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.233 ms 00:20:28.884 [2024-09-28 23:41:16.915155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.884 [2024-09-28 23:41:16.932099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.884 [2024-09-28 23:41:16.932187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:28.884 [2024-09-28 23:41:16.932238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.907 ms 00:20:28.884 [2024-09-28 
23:41:16.932255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.884 [2024-09-28 23:41:16.948836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.884 [2024-09-28 23:41:16.948918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:28.884 [2024-09-28 23:41:16.948957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.519 ms 00:20:28.884 [2024-09-28 23:41:16.948974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.884 [2024-09-28 23:41:16.949007] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:28.884 [2024-09-28 23:41:16.949028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:28.884 [2024-09-28 23:41:16.949053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:28.884 [2024-09-28 23:41:16.949076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:28.884 [2024-09-28 23:41:16.949099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:28.884 [2024-09-28 23:41:16.949157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:28.884 [2024-09-28 23:41:16.949182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:28.884 [2024-09-28 23:41:16.949204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:28.884 [2024-09-28 23:41:16.949228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:28.884 [2024-09-28 23:41:16.949281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:28.884 [2024-09-28 23:41:16.949307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:28.884 [2024-09-28 23:41:16.949330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:28.884 [2024-09-28 23:41:16.949353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:28.884 [2024-09-28 23:41:16.949375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:28.884 [2024-09-28 23:41:16.949426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:28.884 [2024-09-28 23:41:16.949449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:28.884 [2024-09-28 23:41:16.949472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:28.884 [2024-09-28 23:41:16.949539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:28.884 [2024-09-28 23:41:16.949565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:28.884 [2024-09-28 23:41:16.949587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:28.884 [2024-09-28 23:41:16.949630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:28.884 [2024-09-28 23:41:16.949657] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:28.884 [2024-09-28 23:41:16.949683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:28.884 [2024-09-28 23:41:16.949731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:28.884 [2024-09-28 23:41:16.949758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:28.884 [2024-09-28 23:41:16.949780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:28.884 [2024-09-28 23:41:16.949826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:28.885 [2024-09-28 23:41:16.949849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:28.885 [2024-09-28 23:41:16.949872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:28.885 [2024-09-28 23:41:16.949921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:28.885 [2024-09-28 23:41:16.949945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:28.885 [2024-09-28 23:41:16.949968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:28.885 [2024-09-28 23:41:16.950016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:28.885 [2024-09-28 23:41:16.950041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:28.885 [2024-09-28 23:41:16.950089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:28.885 [2024-09-28 23:41:16.950113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:28.885 [2024-09-28 23:41:16.950154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:28.885 [2024-09-28 23:41:16.950176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:28.885 [2024-09-28 23:41:16.950200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:28.885 [2024-09-28 23:41:16.950222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:28.885 [2024-09-28 23:41:16.950375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:28.885 [2024-09-28 23:41:16.950397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:28.885 [2024-09-28 23:41:16.950420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:28.885 [2024-09-28 23:41:16.950442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:28.885 [2024-09-28 23:41:16.950465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:28.885 [2024-09-28 23:41:16.950487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:28.885 
[2024-09-28 23:41:16.950549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:28.885 [2024-09-28 23:41:16.950572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:28.885 [2024-09-28 23:41:16.950596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:28.885 [2024-09-28 23:41:16.950618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:28.885 [2024-09-28 23:41:16.950641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:28.885 [2024-09-28 23:41:16.950663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:28.885 [2024-09-28 23:41:16.950762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:28.885 [2024-09-28 23:41:16.950789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:28.885 [2024-09-28 23:41:16.950813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:28.885 [2024-09-28 23:41:16.950835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:28.885 [2024-09-28 23:41:16.950860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:28.885 [2024-09-28 23:41:16.950912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:28.885 [2024-09-28 23:41:16.950955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:28.885 [2024-09-28 23:41:16.951005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:28.885 [2024-09-28 23:41:16.951032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:28.885 [2024-09-28 23:41:16.951054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:28.885 [2024-09-28 23:41:16.951077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:28.885 [2024-09-28 23:41:16.951125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:28.885 [2024-09-28 23:41:16.951152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:28.885 [2024-09-28 23:41:16.951175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:28.885 [2024-09-28 23:41:16.951197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:28.885 [2024-09-28 23:41:16.951219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:28.885 [2024-09-28 23:41:16.951268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:28.885 [2024-09-28 23:41:16.951356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:28.885 [2024-09-28 23:41:16.951381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 
state: free 00:20:28.885 [2024-09-28 23:41:16.951403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:28.885 [2024-09-28 23:41:16.951429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:28.885 [2024-09-28 23:41:16.951452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:28.885 [2024-09-28 23:41:16.951498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:28.885 [2024-09-28 23:41:16.951532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:28.885 [2024-09-28 23:41:16.951557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:28.885 [2024-09-28 23:41:16.951578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:28.885 [2024-09-28 23:41:16.951602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:28.885 [2024-09-28 23:41:16.951624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:28.885 [2024-09-28 23:41:16.951673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:28.885 [2024-09-28 23:41:16.951697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:28.885 [2024-09-28 23:41:16.951720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:28.885 [2024-09-28 23:41:16.951742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:28.885 [2024-09-28 23:41:16.951765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:28.885 [2024-09-28 23:41:16.951787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:28.885 [2024-09-28 23:41:16.951865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:28.885 [2024-09-28 23:41:16.951887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:28.885 [2024-09-28 23:41:16.951912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:28.885 [2024-09-28 23:41:16.951933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:28.885 [2024-09-28 23:41:16.951957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:28.886 [2024-09-28 23:41:16.952017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:28.886 [2024-09-28 23:41:16.952077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:28.886 [2024-09-28 23:41:16.952101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:28.886 [2024-09-28 23:41:16.952124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:28.886 [2024-09-28 23:41:16.952146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 
0 / 261120 wr_cnt: 0 state: free 00:20:28.886 [2024-09-28 23:41:16.952173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:28.886 [2024-09-28 23:41:16.952279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:28.886 [2024-09-28 23:41:16.952287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:28.886 [2024-09-28 23:41:16.952293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:28.886 [2024-09-28 23:41:16.952301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:28.886 [2024-09-28 23:41:16.952313] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:28.886 [2024-09-28 23:41:16.952323] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 19962993-5207-4b0b-af8a-a1f71d7fadb9 00:20:28.886 [2024-09-28 23:41:16.952329] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:28.886 [2024-09-28 23:41:16.952337] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:28.886 [2024-09-28 23:41:16.952343] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:28.886 [2024-09-28 23:41:16.952350] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:28.886 [2024-09-28 23:41:16.952355] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:28.886 [2024-09-28 23:41:16.952362] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:28.886 [2024-09-28 23:41:16.952368] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:28.886 [2024-09-28 23:41:16.952374] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:28.886 [2024-09-28 23:41:16.952378] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:28.886 [2024-09-28 23:41:16.952386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.886 [2024-09-28 23:41:16.952392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:28.886 [2024-09-28 23:41:16.952400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.380 ms 00:20:28.886 [2024-09-28 23:41:16.952405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.886 [2024-09-28 23:41:16.962197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.886 [2024-09-28 23:41:16.962278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:28.886 [2024-09-28 23:41:16.962317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.762 ms 00:20:28.886 [2024-09-28 23:41:16.962334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.886 [2024-09-28 23:41:16.962633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.886 [2024-09-28 23:41:16.962691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:28.886 [2024-09-28 23:41:16.962738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.260 ms 00:20:28.886 [2024-09-28 23:41:16.962758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.886 [2024-09-28 23:41:16.991503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:28.886 [2024-09-28 23:41:16.991605] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:28.886 [2024-09-28 23:41:16.991645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:28.886 [2024-09-28 23:41:16.991662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.886 [2024-09-28 23:41:16.991719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:28.886 [2024-09-28 23:41:16.991735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:28.886 [2024-09-28 23:41:16.991770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:28.886 [2024-09-28 23:41:16.991787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.886 [2024-09-28 23:41:16.991850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:28.886 [2024-09-28 23:41:16.991907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:28.886 [2024-09-28 23:41:16.991951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:28.886 [2024-09-28 23:41:16.991966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.886 [2024-09-28 23:41:16.991993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:28.886 [2024-09-28 23:41:16.992009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:28.886 [2024-09-28 23:41:16.992024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:28.886 [2024-09-28 23:41:16.992039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.145 [2024-09-28 23:41:17.051520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:29.145 [2024-09-28 23:41:17.051660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:29.145 [2024-09-28 23:41:17.051701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:29.145 [2024-09-28 23:41:17.051719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.145 [2024-09-28 23:41:17.100165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:29.145 [2024-09-28 23:41:17.100302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:29.145 [2024-09-28 23:41:17.100342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:29.145 [2024-09-28 23:41:17.100361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.145 [2024-09-28 23:41:17.100458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:29.145 [2024-09-28 23:41:17.100478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:29.145 [2024-09-28 23:41:17.100494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:29.145 [2024-09-28 23:41:17.100524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.145 [2024-09-28 23:41:17.100577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:29.145 [2024-09-28 23:41:17.100645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:29.145 [2024-09-28 23:41:17.100662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:29.145 [2024-09-28 23:41:17.100676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.145 [2024-09-28 23:41:17.100760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:20:29.145 [2024-09-28 23:41:17.100784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:29.145 [2024-09-28 23:41:17.100801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:29.145 [2024-09-28 23:41:17.100815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.145 [2024-09-28 23:41:17.100885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:29.145 [2024-09-28 23:41:17.100927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:29.145 [2024-09-28 23:41:17.100964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:29.145 [2024-09-28 23:41:17.100981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.145 [2024-09-28 23:41:17.101023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:29.145 [2024-09-28 23:41:17.101040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:29.145 [2024-09-28 23:41:17.101055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:29.145 [2024-09-28 23:41:17.101113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.145 [2024-09-28 23:41:17.101165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:29.145 [2024-09-28 23:41:17.101184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:29.145 [2024-09-28 23:41:17.101200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:29.145 [2024-09-28 23:41:17.101215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.145 [2024-09-28 23:41:17.101327] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 275.023 ms, result 0 00:20:29.145 true 00:20:29.145 23:41:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 76149 00:20:29.145 23:41:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid76149 00:20:29.145 23:41:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:20:29.145 [2024-09-28 23:41:17.196202] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:20:29.145 [2024-09-28 23:41:17.196539] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76736 ] 00:20:29.402 [2024-09-28 23:41:17.347647] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:29.403 [2024-09-28 23:41:17.488247] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:20:34.150  Copying: 259/1024 [MB] (259 MBps) Copying: 518/1024 [MB] (259 MBps) Copying: 777/1024 [MB] (259 MBps) Copying: 1024/1024 [MB] (average 258 MBps) 00:20:34.151 00:20:34.151 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 76149 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:20:34.151 23:41:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:34.409 [2024-09-28 23:41:22.324179] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:20:34.409 [2024-09-28 23:41:22.324290] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76789 ] 00:20:34.409 [2024-09-28 23:41:22.473020] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:34.667 [2024-09-28 23:41:22.611061] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:20:34.667 [2024-09-28 23:41:22.817233] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:34.667 [2024-09-28 23:41:22.817282] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:34.927 [2024-09-28 23:41:22.880026] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:20:34.927 [2024-09-28 23:41:22.880427] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:20:34.927 [2024-09-28 23:41:22.880647] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:20:34.927 [2024-09-28 23:41:23.060465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.927 [2024-09-28 23:41:23.060500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:34.927 [2024-09-28 23:41:23.060524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:34.927 [2024-09-28 23:41:23.060531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.927 [2024-09-28 23:41:23.060568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.927 [2024-09-28 23:41:23.060575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:34.927 [2024-09-28 23:41:23.060582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:20:34.927 [2024-09-28 23:41:23.060589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.927 [2024-09-28 23:41:23.060602] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:34.927 [2024-09-28 23:41:23.061140] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:34.927 [2024-09-28 
23:41:23.061159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.927 [2024-09-28 23:41:23.061164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:34.927 [2024-09-28 23:41:23.061171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.561 ms 00:20:34.927 [2024-09-28 23:41:23.061177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.927 [2024-09-28 23:41:23.062148] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:34.927 [2024-09-28 23:41:23.071647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.927 [2024-09-28 23:41:23.071675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:34.927 [2024-09-28 23:41:23.071683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.501 ms 00:20:34.927 [2024-09-28 23:41:23.071689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.927 [2024-09-28 23:41:23.071730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.927 [2024-09-28 23:41:23.071740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:34.927 [2024-09-28 23:41:23.071746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:20:34.927 [2024-09-28 23:41:23.071751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.927 [2024-09-28 23:41:23.075972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.927 [2024-09-28 23:41:23.075997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:34.927 [2024-09-28 23:41:23.076004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.180 ms 00:20:34.927 [2024-09-28 23:41:23.076010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.927 [2024-09-28 23:41:23.076062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.927 [2024-09-28 23:41:23.076069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:34.927 [2024-09-28 23:41:23.076076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:20:34.927 [2024-09-28 23:41:23.076081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.927 [2024-09-28 23:41:23.076112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.927 [2024-09-28 23:41:23.076119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:34.927 [2024-09-28 23:41:23.076125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:34.927 [2024-09-28 23:41:23.076131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.927 [2024-09-28 23:41:23.076144] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:34.927 [2024-09-28 23:41:23.078688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.927 [2024-09-28 23:41:23.078880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:34.927 [2024-09-28 23:41:23.078892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.548 ms 00:20:34.927 [2024-09-28 23:41:23.078898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.927 [2024-09-28 23:41:23.078928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.927 [2024-09-28 23:41:23.078935] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:34.927 [2024-09-28 23:41:23.078942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:34.927 [2024-09-28 23:41:23.078947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.927 [2024-09-28 23:41:23.078960] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:34.927 [2024-09-28 23:41:23.078974] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:34.927 [2024-09-28 23:41:23.079000] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:34.927 [2024-09-28 23:41:23.079013] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:34.927 [2024-09-28 23:41:23.079092] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:34.927 [2024-09-28 23:41:23.079100] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:34.927 [2024-09-28 23:41:23.079108] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:34.927 [2024-09-28 23:41:23.079115] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:34.927 [2024-09-28 23:41:23.079122] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:34.927 [2024-09-28 23:41:23.079128] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:34.927 [2024-09-28 23:41:23.079134] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:34.927 [2024-09-28 23:41:23.079139] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:34.927 [2024-09-28 23:41:23.079144] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:34.927 [2024-09-28 23:41:23.079150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.927 [2024-09-28 23:41:23.079158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:34.927 [2024-09-28 23:41:23.079164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.192 ms 00:20:34.927 [2024-09-28 23:41:23.079170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.927 [2024-09-28 23:41:23.079231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.927 [2024-09-28 23:41:23.079238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:34.927 [2024-09-28 23:41:23.079243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:20:34.927 [2024-09-28 23:41:23.079248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.927 [2024-09-28 23:41:23.079322] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:34.927 [2024-09-28 23:41:23.079330] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:34.927 [2024-09-28 23:41:23.079340] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:34.927 [2024-09-28 23:41:23.079346] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:34.927 [2024-09-28 23:41:23.079352] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region l2p 00:20:34.927 [2024-09-28 23:41:23.079357] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:34.927 [2024-09-28 23:41:23.079362] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:34.927 [2024-09-28 23:41:23.079367] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:34.927 [2024-09-28 23:41:23.079373] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:34.927 [2024-09-28 23:41:23.079382] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:34.927 [2024-09-28 23:41:23.079388] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:34.927 [2024-09-28 23:41:23.079393] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:34.927 [2024-09-28 23:41:23.079398] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:34.927 [2024-09-28 23:41:23.079403] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:34.928 [2024-09-28 23:41:23.079409] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:34.928 [2024-09-28 23:41:23.079414] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:34.928 [2024-09-28 23:41:23.079419] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:34.928 [2024-09-28 23:41:23.079424] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:34.928 [2024-09-28 23:41:23.079429] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:34.928 [2024-09-28 23:41:23.079434] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:34.928 [2024-09-28 23:41:23.079439] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:34.928 [2024-09-28 23:41:23.079444] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:34.928 [2024-09-28 23:41:23.079449] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:34.928 [2024-09-28 23:41:23.079454] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:34.928 [2024-09-28 23:41:23.079459] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:34.928 [2024-09-28 23:41:23.079463] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:34.928 [2024-09-28 23:41:23.079469] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:34.928 [2024-09-28 23:41:23.079473] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:34.928 [2024-09-28 23:41:23.079478] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:34.928 [2024-09-28 23:41:23.079483] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:34.928 [2024-09-28 23:41:23.079488] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:34.928 [2024-09-28 23:41:23.079493] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:34.928 [2024-09-28 23:41:23.079498] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:34.928 [2024-09-28 23:41:23.079503] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:34.928 [2024-09-28 23:41:23.079521] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:34.928 [2024-09-28 23:41:23.079527] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:34.928 [2024-09-28 23:41:23.079532] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:34.928 [2024-09-28 23:41:23.079537] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:34.928 [2024-09-28 23:41:23.079542] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:34.928 [2024-09-28 23:41:23.079547] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:34.928 [2024-09-28 23:41:23.079552] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:34.928 [2024-09-28 23:41:23.079557] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:34.928 [2024-09-28 23:41:23.079562] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:34.928 [2024-09-28 23:41:23.079567] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:34.928 [2024-09-28 23:41:23.079574] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:34.928 [2024-09-28 23:41:23.079580] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:34.928 [2024-09-28 23:41:23.079586] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:34.928 [2024-09-28 23:41:23.079598] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:34.928 [2024-09-28 23:41:23.079604] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:34.928 [2024-09-28 23:41:23.079609] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:34.928 [2024-09-28 23:41:23.079614] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:34.928 [2024-09-28 23:41:23.079620] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:34.928 [2024-09-28 23:41:23.079624] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:34.928 [2024-09-28 23:41:23.079630] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:34.928 [2024-09-28 23:41:23.079638] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:34.928 [2024-09-28 23:41:23.079644] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:34.928 [2024-09-28 23:41:23.079649] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:34.928 [2024-09-28 23:41:23.079654] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:34.928 [2024-09-28 23:41:23.079659] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:34.928 [2024-09-28 23:41:23.079665] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:34.928 [2024-09-28 23:41:23.079670] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:34.928 [2024-09-28 23:41:23.079675] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:34.928 [2024-09-28 23:41:23.079680] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 
blk_offs:0x7120 blk_sz:0x40 00:20:34.928 [2024-09-28 23:41:23.079686] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:34.928 [2024-09-28 23:41:23.079691] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:34.928 [2024-09-28 23:41:23.079697] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:34.928 [2024-09-28 23:41:23.079702] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:34.928 [2024-09-28 23:41:23.079707] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:34.928 [2024-09-28 23:41:23.079712] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:34.928 [2024-09-28 23:41:23.079717] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:34.928 [2024-09-28 23:41:23.079723] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:34.928 [2024-09-28 23:41:23.079731] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:34.928 [2024-09-28 23:41:23.079737] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:34.928 [2024-09-28 23:41:23.079742] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:34.928 [2024-09-28 23:41:23.079747] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:34.928 [2024-09-28 23:41:23.079753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.928 [2024-09-28 23:41:23.079758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:34.928 [2024-09-28 23:41:23.079764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.482 ms 00:20:34.928 [2024-09-28 23:41:23.079770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.187 [2024-09-28 23:41:23.117648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.187 [2024-09-28 23:41:23.117679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:35.187 [2024-09-28 23:41:23.117688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.846 ms 00:20:35.187 [2024-09-28 23:41:23.117693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.187 [2024-09-28 23:41:23.117762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.187 [2024-09-28 23:41:23.117768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:35.187 [2024-09-28 23:41:23.117775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:20:35.187 [2024-09-28 23:41:23.117781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.187 [2024-09-28 23:41:23.141235] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:20:35.187 [2024-09-28 23:41:23.141261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:35.187 [2024-09-28 23:41:23.141269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.410 ms 00:20:35.187 [2024-09-28 23:41:23.141275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.187 [2024-09-28 23:41:23.141298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.187 [2024-09-28 23:41:23.141305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:35.187 [2024-09-28 23:41:23.141311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:20:35.187 [2024-09-28 23:41:23.141317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.187 [2024-09-28 23:41:23.141632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.187 [2024-09-28 23:41:23.141644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:35.187 [2024-09-28 23:41:23.141651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.280 ms 00:20:35.187 [2024-09-28 23:41:23.141657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.187 [2024-09-28 23:41:23.141754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.187 [2024-09-28 23:41:23.141763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:35.187 [2024-09-28 23:41:23.141769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:20:35.187 [2024-09-28 23:41:23.141776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.187 [2024-09-28 23:41:23.151567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.187 [2024-09-28 23:41:23.151680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:35.187 [2024-09-28 23:41:23.151692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.775 ms 00:20:35.187 [2024-09-28 23:41:23.151698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.187 [2024-09-28 23:41:23.161433] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:35.187 [2024-09-28 23:41:23.161462] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:35.187 [2024-09-28 23:41:23.161470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.187 [2024-09-28 23:41:23.161477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:35.187 [2024-09-28 23:41:23.161483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.703 ms 00:20:35.187 [2024-09-28 23:41:23.161488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.187 [2024-09-28 23:41:23.180016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.187 [2024-09-28 23:41:23.180046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:35.187 [2024-09-28 23:41:23.180058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.488 ms 00:20:35.187 [2024-09-28 23:41:23.180065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.187 [2024-09-28 23:41:23.188857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.187 [2024-09-28 23:41:23.188882] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:35.187 [2024-09-28 23:41:23.188889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.763 ms 00:20:35.187 [2024-09-28 23:41:23.188895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.187 [2024-09-28 23:41:23.197459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.187 [2024-09-28 23:41:23.197483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:35.187 [2024-09-28 23:41:23.197491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.540 ms 00:20:35.187 [2024-09-28 23:41:23.197496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.187 [2024-09-28 23:41:23.197942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.187 [2024-09-28 23:41:23.197965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:35.187 [2024-09-28 23:41:23.197973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.381 ms 00:20:35.187 [2024-09-28 23:41:23.197978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.187 [2024-09-28 23:41:23.241321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.187 [2024-09-28 23:41:23.241358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:35.187 [2024-09-28 23:41:23.241368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.328 ms 00:20:35.187 [2024-09-28 23:41:23.241374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.187 [2024-09-28 23:41:23.249130] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:35.187 [2024-09-28 23:41:23.250818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.187 [2024-09-28 23:41:23.250841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:35.187 [2024-09-28 23:41:23.250849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.410 ms 00:20:35.187 [2024-09-28 23:41:23.250856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.187 [2024-09-28 23:41:23.250904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.187 [2024-09-28 23:41:23.250913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:35.187 [2024-09-28 23:41:23.250920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:35.187 [2024-09-28 23:41:23.250926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.187 [2024-09-28 23:41:23.250968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.187 [2024-09-28 23:41:23.250975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:35.187 [2024-09-28 23:41:23.250982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:20:35.187 [2024-09-28 23:41:23.250988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.187 [2024-09-28 23:41:23.251003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.187 [2024-09-28 23:41:23.251009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:35.187 [2024-09-28 23:41:23.251015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:35.187 [2024-09-28 23:41:23.251021] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.187 [2024-09-28 23:41:23.251046] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:35.187 [2024-09-28 23:41:23.251054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.187 [2024-09-28 23:41:23.251061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:35.187 [2024-09-28 23:41:23.251068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:35.187 [2024-09-28 23:41:23.251073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.187 [2024-09-28 23:41:23.268699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.187 [2024-09-28 23:41:23.268724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:35.187 [2024-09-28 23:41:23.268732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.613 ms 00:20:35.187 [2024-09-28 23:41:23.268738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.187 [2024-09-28 23:41:23.268793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.187 [2024-09-28 23:41:23.268801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:35.187 [2024-09-28 23:41:23.268807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:20:35.187 [2024-09-28 23:41:23.268813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.187 [2024-09-28 23:41:23.269547] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 208.741 ms, result 0 00:21:00.439  Copying: 31/1024 [MB] (31 MBps) Copying: 52/1024 [MB] (21 MBps) Copying: 80/1024 [MB] (28 MBps) Copying: 126/1024 [MB] (45 MBps) Copying: 170/1024 [MB] (44 MBps) Copying: 213/1024 [MB] (42 MBps) Copying: 257/1024 [MB] (43 MBps) Copying: 300/1024 [MB] (43 MBps) Copying: 335/1024 [MB] (35 MBps) Copying: 379/1024 [MB] (43 MBps) Copying: 422/1024 [MB] (43 MBps) Copying: 467/1024 [MB] (44 MBps) Copying: 516/1024 [MB] (49 MBps) Copying: 561/1024 [MB] (44 MBps) Copying: 612/1024 [MB] (51 MBps) Copying: 655/1024 [MB] (42 MBps) Copying: 700/1024 [MB] (45 MBps) Copying: 745/1024 [MB] (44 MBps) Copying: 790/1024 [MB] (45 MBps) Copying: 834/1024 [MB] (43 MBps) Copying: 879/1024 [MB] (45 MBps) Copying: 925/1024 [MB] (45 MBps) Copying: 969/1024 [MB] (43 MBps) Copying: 1013/1024 [MB] (43 MBps) Copying: 1023/1024 [MB] (10 MBps) Copying: 1024/1024 [MB] (average 40 MBps)[2024-09-28 23:41:48.571828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.439 [2024-09-28 23:41:48.571877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:00.439 [2024-09-28 23:41:48.571891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:00.439 [2024-09-28 23:41:48.571900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.439 [2024-09-28 23:41:48.573093] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:00.439 [2024-09-28 23:41:48.577941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.439 [2024-09-28 23:41:48.577976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:00.439 [2024-09-28 23:41:48.577988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.789 ms 
00:21:00.439 [2024-09-28 23:41:48.577997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.439 [2024-09-28 23:41:48.589968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.439 [2024-09-28 23:41:48.590010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:00.439 [2024-09-28 23:41:48.590022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.915 ms 00:21:00.439 [2024-09-28 23:41:48.590031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.699 [2024-09-28 23:41:48.608614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.699 [2024-09-28 23:41:48.608651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:00.699 [2024-09-28 23:41:48.608661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.567 ms 00:21:00.699 [2024-09-28 23:41:48.608669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.699 [2024-09-28 23:41:48.614824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.699 [2024-09-28 23:41:48.614850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:00.699 [2024-09-28 23:41:48.614860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.129 ms 00:21:00.699 [2024-09-28 23:41:48.614867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.699 [2024-09-28 23:41:48.637995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.699 [2024-09-28 23:41:48.638133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:00.699 [2024-09-28 23:41:48.638149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.090 ms 00:21:00.699 [2024-09-28 23:41:48.638157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.699 [2024-09-28 23:41:48.652030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.699 [2024-09-28 23:41:48.652060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:00.699 [2024-09-28 23:41:48.652075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.845 ms 00:21:00.699 [2024-09-28 23:41:48.652083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.699 [2024-09-28 23:41:48.707816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.699 [2024-09-28 23:41:48.707957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:00.699 [2024-09-28 23:41:48.707973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.700 ms 00:21:00.699 [2024-09-28 23:41:48.707981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.699 [2024-09-28 23:41:48.731245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.699 [2024-09-28 23:41:48.731368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:00.699 [2024-09-28 23:41:48.731383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.247 ms 00:21:00.699 [2024-09-28 23:41:48.731391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.699 [2024-09-28 23:41:48.754313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.699 [2024-09-28 23:41:48.754466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:00.699 [2024-09-28 23:41:48.754480] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.895 ms 00:21:00.699 [2024-09-28 23:41:48.754488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.699 [2024-09-28 23:41:48.777022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.699 [2024-09-28 23:41:48.777135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:00.699 [2024-09-28 23:41:48.777149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.497 ms 00:21:00.699 [2024-09-28 23:41:48.777157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.699 [2024-09-28 23:41:48.798800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.699 [2024-09-28 23:41:48.798829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:00.699 [2024-09-28 23:41:48.798839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.581 ms 00:21:00.699 [2024-09-28 23:41:48.798846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.699 [2024-09-28 23:41:48.798875] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:00.699 [2024-09-28 23:41:48.798888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 125952 / 261120 wr_cnt: 1 state: open 00:21:00.699 [2024-09-28 23:41:48.798897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:00.699 [2024-09-28 23:41:48.798905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:00.699 [2024-09-28 23:41:48.798912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:00.699 [2024-09-28 23:41:48.798920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:00.699 [2024-09-28 23:41:48.798927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:00.699 [2024-09-28 23:41:48.798934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:00.699 [2024-09-28 23:41:48.798942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:00.699 [2024-09-28 23:41:48.798950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:00.699 [2024-09-28 23:41:48.798957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:00.699 [2024-09-28 23:41:48.798964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:00.699 [2024-09-28 23:41:48.798971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:00.699 [2024-09-28 23:41:48.798979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:00.699 [2024-09-28 23:41:48.798986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:00.699 [2024-09-28 23:41:48.798994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:00.699 [2024-09-28 23:41:48.799001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:00.699 [2024-09-28 23:41:48.799008] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:00.699 [2024-09-28 23:41:48.799016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:00.699 [2024-09-28 23:41:48.799023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:00.699 [2024-09-28 23:41:48.799030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:00.699 [2024-09-28 23:41:48.799038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:00.700 [2024-09-28 23:41:48.799045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:00.700 [2024-09-28 23:41:48.799052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:00.700 [2024-09-28 23:41:48.799060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:00.700 [2024-09-28 23:41:48.799067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:00.700 [2024-09-28 23:41:48.799074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:00.700 [2024-09-28 23:41:48.799082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:00.700 [2024-09-28 23:41:48.799089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:00.700 [2024-09-28 23:41:48.799097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:00.700 [2024-09-28 23:41:48.799104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:00.700 [2024-09-28 23:41:48.799112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:00.700 [2024-09-28 23:41:48.799119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:00.700 [2024-09-28 23:41:48.799127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:00.700 [2024-09-28 23:41:48.799134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:00.700 [2024-09-28 23:41:48.799141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:00.700 [2024-09-28 23:41:48.799149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:00.700 [2024-09-28 23:41:48.799156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:00.700 [2024-09-28 23:41:48.799163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:00.700 [2024-09-28 23:41:48.799171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:00.700 [2024-09-28 23:41:48.799178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:00.700 [2024-09-28 23:41:48.799185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:00.700 [2024-09-28 
23:41:48.799192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:00.700 [2024-09-28 23:41:48.799200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:00.700 [2024-09-28 23:41:48.799207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:00.700 [2024-09-28 23:41:48.799214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:00.700 [2024-09-28 23:41:48.799222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:00.700 [2024-09-28 23:41:48.799229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:00.700 [2024-09-28 23:41:48.799236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:00.700 [2024-09-28 23:41:48.799244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:00.700 [2024-09-28 23:41:48.799252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:00.700 [2024-09-28 23:41:48.799259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:00.700 [2024-09-28 23:41:48.799267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:00.700 [2024-09-28 23:41:48.799274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:00.700 [2024-09-28 23:41:48.799281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:00.700 [2024-09-28 23:41:48.799288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:00.700 [2024-09-28 23:41:48.799296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:00.700 [2024-09-28 23:41:48.799304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:00.700 [2024-09-28 23:41:48.799312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:00.700 [2024-09-28 23:41:48.799319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:00.700 [2024-09-28 23:41:48.799326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:00.700 [2024-09-28 23:41:48.799333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:00.700 [2024-09-28 23:41:48.799340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:00.700 [2024-09-28 23:41:48.799348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:00.700 [2024-09-28 23:41:48.799356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:00.700 [2024-09-28 23:41:48.799363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:00.700 [2024-09-28 23:41:48.799370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 
00:21:00.700 [2024-09-28 23:41:48.799378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:00.700 [2024-09-28 23:41:48.799385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:00.700 [2024-09-28 23:41:48.799393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:00.700 [2024-09-28 23:41:48.799400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:00.700 [2024-09-28 23:41:48.799407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:00.700 [2024-09-28 23:41:48.799415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:00.700 [2024-09-28 23:41:48.799422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:00.700 [2024-09-28 23:41:48.799430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:00.700 [2024-09-28 23:41:48.799437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:00.700 [2024-09-28 23:41:48.799444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:00.700 [2024-09-28 23:41:48.799452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:00.700 [2024-09-28 23:41:48.799459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:00.700 [2024-09-28 23:41:48.799466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:00.700 [2024-09-28 23:41:48.799473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:00.700 [2024-09-28 23:41:48.799480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:00.700 [2024-09-28 23:41:48.799487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:00.700 [2024-09-28 23:41:48.799494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:00.700 [2024-09-28 23:41:48.799501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:00.700 [2024-09-28 23:41:48.799522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:00.700 [2024-09-28 23:41:48.799529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:00.700 [2024-09-28 23:41:48.799537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:00.700 [2024-09-28 23:41:48.799556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:00.700 [2024-09-28 23:41:48.799563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:00.700 [2024-09-28 23:41:48.799570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:00.700 [2024-09-28 23:41:48.799578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 
wr_cnt: 0 state: free 00:21:00.700 [2024-09-28 23:41:48.799586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:00.700 [2024-09-28 23:41:48.799593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:00.700 [2024-09-28 23:41:48.799600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:00.700 [2024-09-28 23:41:48.799612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:00.700 [2024-09-28 23:41:48.799620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:00.700 [2024-09-28 23:41:48.799628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:00.700 [2024-09-28 23:41:48.799635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:00.700 [2024-09-28 23:41:48.799642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:00.700 [2024-09-28 23:41:48.799650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:00.700 [2024-09-28 23:41:48.799665] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:00.700 [2024-09-28 23:41:48.799673] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 19962993-5207-4b0b-af8a-a1f71d7fadb9 00:21:00.700 [2024-09-28 23:41:48.799681] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 125952 00:21:00.700 [2024-09-28 23:41:48.799689] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 126912 00:21:00.700 [2024-09-28 23:41:48.799695] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 125952 00:21:00.700 [2024-09-28 23:41:48.799703] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0076 00:21:00.701 [2024-09-28 23:41:48.799710] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:00.701 [2024-09-28 23:41:48.799717] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:00.701 [2024-09-28 23:41:48.799730] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:00.701 [2024-09-28 23:41:48.799737] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:00.701 [2024-09-28 23:41:48.799743] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:00.701 [2024-09-28 23:41:48.799750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.701 [2024-09-28 23:41:48.799760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:00.701 [2024-09-28 23:41:48.799784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.877 ms 00:21:00.701 [2024-09-28 23:41:48.799791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.701 [2024-09-28 23:41:48.811997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.701 [2024-09-28 23:41:48.812027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:00.701 [2024-09-28 23:41:48.812037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.191 ms 00:21:00.701 [2024-09-28 23:41:48.812045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.701 [2024-09-28 23:41:48.812378] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.701 [2024-09-28 23:41:48.812393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:00.701 [2024-09-28 23:41:48.812401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.317 ms 00:21:00.701 [2024-09-28 23:41:48.812408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.701 [2024-09-28 23:41:48.840224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:00.701 [2024-09-28 23:41:48.840256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:00.701 [2024-09-28 23:41:48.840266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:00.701 [2024-09-28 23:41:48.840276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.701 [2024-09-28 23:41:48.840330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:00.701 [2024-09-28 23:41:48.840338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:00.701 [2024-09-28 23:41:48.840345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:00.701 [2024-09-28 23:41:48.840353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.701 [2024-09-28 23:41:48.840404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:00.701 [2024-09-28 23:41:48.840413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:00.701 [2024-09-28 23:41:48.840420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:00.701 [2024-09-28 23:41:48.840428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.701 [2024-09-28 23:41:48.840444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:00.701 [2024-09-28 23:41:48.840451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:00.701 [2024-09-28 23:41:48.840459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:00.701 [2024-09-28 23:41:48.840465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.960 [2024-09-28 23:41:48.915070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:00.960 [2024-09-28 23:41:48.915252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:00.960 [2024-09-28 23:41:48.915270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:00.960 [2024-09-28 23:41:48.915279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.960 [2024-09-28 23:41:48.977212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:00.960 [2024-09-28 23:41:48.977359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:00.960 [2024-09-28 23:41:48.977407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:00.960 [2024-09-28 23:41:48.977429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.960 [2024-09-28 23:41:48.977524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:00.960 [2024-09-28 23:41:48.977550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:00.960 [2024-09-28 23:41:48.977570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:00.960 [2024-09-28 23:41:48.977588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:21:00.960 [2024-09-28 23:41:48.977630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:00.960 [2024-09-28 23:41:48.977655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:00.960 [2024-09-28 23:41:48.977675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:00.960 [2024-09-28 23:41:48.977743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.960 [2024-09-28 23:41:48.977848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:00.960 [2024-09-28 23:41:48.977871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:00.960 [2024-09-28 23:41:48.977891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:00.960 [2024-09-28 23:41:48.978001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.960 [2024-09-28 23:41:48.978044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:00.960 [2024-09-28 23:41:48.978066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:00.960 [2024-09-28 23:41:48.978126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:00.960 [2024-09-28 23:41:48.978147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.960 [2024-09-28 23:41:48.978191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:00.960 [2024-09-28 23:41:48.978248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:00.960 [2024-09-28 23:41:48.978266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:00.960 [2024-09-28 23:41:48.978373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.960 [2024-09-28 23:41:48.978445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:00.960 [2024-09-28 23:41:48.978473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:00.960 [2024-09-28 23:41:48.978552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:00.960 [2024-09-28 23:41:48.978576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.960 [2024-09-28 23:41:48.978698] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 407.927 ms, result 0 00:21:03.504 00:21:03.504 00:21:03.504 23:41:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:21:04.914 23:41:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:04.914 [2024-09-28 23:41:53.036399] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
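(Note: with the write-back finished and the device cleanly shut down again, dirty_shutdown.sh lines 90-93 move on to verifying the recovered data; a sketch, same paths and stand-in variable as above:

  md5sum $SPDK_DIR/test/ftl/testfile2             # checksum the reference file
  $SPDK_DIR/build/bin/spdk_dd --ib=ftl0 \
      --of=$SPDK_DIR/test/ftl/testfile \
      --count=262144 \
      --json=$SPDK_DIR/test/ftl/config/ftl.json   # read the data back through the recovered FTL instance

The checksum comparison itself presumably happens later in the script; it is not part of this excerpt.)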
00:21:04.914 [2024-09-28 23:41:53.036644] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77103 ] 00:21:05.173 [2024-09-28 23:41:53.181817] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:05.435 [2024-09-28 23:41:53.357460] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:21:05.695 [2024-09-28 23:41:53.607065] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:05.695 [2024-09-28 23:41:53.607307] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:05.695 [2024-09-28 23:41:53.761182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.695 [2024-09-28 23:41:53.761348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:05.695 [2024-09-28 23:41:53.761412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:05.695 [2024-09-28 23:41:53.761442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.695 [2024-09-28 23:41:53.761503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.695 [2024-09-28 23:41:53.761539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:05.695 [2024-09-28 23:41:53.761559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:21:05.695 [2024-09-28 23:41:53.761578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.695 [2024-09-28 23:41:53.761609] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:05.695 [2024-09-28 23:41:53.762441] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:05.695 [2024-09-28 23:41:53.762568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.695 [2024-09-28 23:41:53.762623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:05.695 [2024-09-28 23:41:53.762682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.963 ms 00:21:05.695 [2024-09-28 23:41:53.762705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.695 [2024-09-28 23:41:53.763794] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:05.695 [2024-09-28 23:41:53.775977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.695 [2024-09-28 23:41:53.776010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:05.695 [2024-09-28 23:41:53.776021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.184 ms 00:21:05.695 [2024-09-28 23:41:53.776029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.695 [2024-09-28 23:41:53.776077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.695 [2024-09-28 23:41:53.776086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:05.695 [2024-09-28 23:41:53.776094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:21:05.695 [2024-09-28 23:41:53.776101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.695 [2024-09-28 23:41:53.780848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:21:05.695 [2024-09-28 23:41:53.780979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:05.695 [2024-09-28 23:41:53.780993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.693 ms 00:21:05.695 [2024-09-28 23:41:53.781001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.695 [2024-09-28 23:41:53.781068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.695 [2024-09-28 23:41:53.781077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:05.695 [2024-09-28 23:41:53.781086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:21:05.695 [2024-09-28 23:41:53.781093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.695 [2024-09-28 23:41:53.781135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.695 [2024-09-28 23:41:53.781144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:05.695 [2024-09-28 23:41:53.781152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:05.695 [2024-09-28 23:41:53.781159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.695 [2024-09-28 23:41:53.781178] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:05.695 [2024-09-28 23:41:53.784408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.695 [2024-09-28 23:41:53.784532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:05.695 [2024-09-28 23:41:53.784548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.235 ms 00:21:05.695 [2024-09-28 23:41:53.784556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.695 [2024-09-28 23:41:53.784584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.695 [2024-09-28 23:41:53.784592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:05.695 [2024-09-28 23:41:53.784599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:21:05.695 [2024-09-28 23:41:53.784607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.695 [2024-09-28 23:41:53.784628] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:05.695 [2024-09-28 23:41:53.784645] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:05.695 [2024-09-28 23:41:53.784679] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:05.695 [2024-09-28 23:41:53.784692] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:05.695 [2024-09-28 23:41:53.784793] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:05.695 [2024-09-28 23:41:53.784803] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:05.695 [2024-09-28 23:41:53.784813] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:05.695 [2024-09-28 23:41:53.784824] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:05.695 [2024-09-28 23:41:53.784833] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:05.695 [2024-09-28 23:41:53.784841] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:05.695 [2024-09-28 23:41:53.784849] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:05.695 [2024-09-28 23:41:53.784856] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:05.695 [2024-09-28 23:41:53.784862] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:05.695 [2024-09-28 23:41:53.784870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.695 [2024-09-28 23:41:53.784877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:05.695 [2024-09-28 23:41:53.784885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.243 ms 00:21:05.695 [2024-09-28 23:41:53.784892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.695 [2024-09-28 23:41:53.784973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.695 [2024-09-28 23:41:53.784983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:05.695 [2024-09-28 23:41:53.784991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:21:05.695 [2024-09-28 23:41:53.784998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.695 [2024-09-28 23:41:53.785098] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:05.695 [2024-09-28 23:41:53.785107] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:05.695 [2024-09-28 23:41:53.785116] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:05.695 [2024-09-28 23:41:53.785123] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:05.695 [2024-09-28 23:41:53.785131] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:05.695 [2024-09-28 23:41:53.785137] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:05.695 [2024-09-28 23:41:53.785143] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:05.695 [2024-09-28 23:41:53.785150] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:05.695 [2024-09-28 23:41:53.785157] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:05.695 [2024-09-28 23:41:53.785164] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:05.695 [2024-09-28 23:41:53.785172] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:05.695 [2024-09-28 23:41:53.785178] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:05.695 [2024-09-28 23:41:53.785184] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:05.695 [2024-09-28 23:41:53.785195] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:05.695 [2024-09-28 23:41:53.785202] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:05.695 [2024-09-28 23:41:53.785208] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:05.695 [2024-09-28 23:41:53.785216] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:05.695 [2024-09-28 23:41:53.785224] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:05.695 [2024-09-28 23:41:53.785230] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:05.695 [2024-09-28 23:41:53.785237] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:05.696 [2024-09-28 23:41:53.785243] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:05.696 [2024-09-28 23:41:53.785249] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:05.696 [2024-09-28 23:41:53.785256] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:05.696 [2024-09-28 23:41:53.785262] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:05.696 [2024-09-28 23:41:53.785268] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:05.696 [2024-09-28 23:41:53.785275] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:05.696 [2024-09-28 23:41:53.785282] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:05.696 [2024-09-28 23:41:53.785288] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:05.696 [2024-09-28 23:41:53.785294] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:05.696 [2024-09-28 23:41:53.785300] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:05.696 [2024-09-28 23:41:53.785307] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:05.696 [2024-09-28 23:41:53.785314] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:05.696 [2024-09-28 23:41:53.785321] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:05.696 [2024-09-28 23:41:53.785327] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:05.696 [2024-09-28 23:41:53.785333] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:05.696 [2024-09-28 23:41:53.785339] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:05.696 [2024-09-28 23:41:53.785346] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:05.696 [2024-09-28 23:41:53.785352] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:05.696 [2024-09-28 23:41:53.785359] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:05.696 [2024-09-28 23:41:53.785365] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:05.696 [2024-09-28 23:41:53.785371] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:05.696 [2024-09-28 23:41:53.785378] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:05.696 [2024-09-28 23:41:53.785384] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:05.696 [2024-09-28 23:41:53.785391] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:05.696 [2024-09-28 23:41:53.785399] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:05.696 [2024-09-28 23:41:53.785407] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:05.696 [2024-09-28 23:41:53.785414] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:05.696 [2024-09-28 23:41:53.785422] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:05.696 [2024-09-28 23:41:53.785428] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:05.696 [2024-09-28 23:41:53.785435] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:05.696 
[2024-09-28 23:41:53.785443] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:05.696 [2024-09-28 23:41:53.785449] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:05.696 [2024-09-28 23:41:53.785456] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:05.696 [2024-09-28 23:41:53.785463] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:05.696 [2024-09-28 23:41:53.785473] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:05.696 [2024-09-28 23:41:53.785481] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:05.696 [2024-09-28 23:41:53.785488] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:05.696 [2024-09-28 23:41:53.785495] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:05.696 [2024-09-28 23:41:53.785501] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:05.696 [2024-09-28 23:41:53.785526] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:05.696 [2024-09-28 23:41:53.785533] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:05.696 [2024-09-28 23:41:53.785541] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:05.696 [2024-09-28 23:41:53.785548] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:05.696 [2024-09-28 23:41:53.785555] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:05.696 [2024-09-28 23:41:53.785562] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:05.696 [2024-09-28 23:41:53.785569] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:05.696 [2024-09-28 23:41:53.785575] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:05.696 [2024-09-28 23:41:53.785583] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:05.696 [2024-09-28 23:41:53.785590] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:05.696 [2024-09-28 23:41:53.785597] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:05.696 [2024-09-28 23:41:53.785605] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:05.696 [2024-09-28 23:41:53.785613] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:21:05.696 [2024-09-28 23:41:53.785621] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:05.696 [2024-09-28 23:41:53.785628] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:05.696 [2024-09-28 23:41:53.785635] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:05.696 [2024-09-28 23:41:53.785642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.696 [2024-09-28 23:41:53.785649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:05.696 [2024-09-28 23:41:53.785656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.613 ms 00:21:05.696 [2024-09-28 23:41:53.785663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.696 [2024-09-28 23:41:53.821055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.696 [2024-09-28 23:41:53.821107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:05.696 [2024-09-28 23:41:53.821123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.335 ms 00:21:05.696 [2024-09-28 23:41:53.821135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.696 [2024-09-28 23:41:53.821258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.696 [2024-09-28 23:41:53.821270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:05.696 [2024-09-28 23:41:53.821282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:21:05.696 [2024-09-28 23:41:53.821292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.696 [2024-09-28 23:41:53.851477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.696 [2024-09-28 23:41:53.851519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:05.696 [2024-09-28 23:41:53.851532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.113 ms 00:21:05.696 [2024-09-28 23:41:53.851540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.696 [2024-09-28 23:41:53.851568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.696 [2024-09-28 23:41:53.851577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:05.696 [2024-09-28 23:41:53.851585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:21:05.696 [2024-09-28 23:41:53.851592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.696 [2024-09-28 23:41:53.851911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.696 [2024-09-28 23:41:53.851925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:05.696 [2024-09-28 23:41:53.851934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.275 ms 00:21:05.696 [2024-09-28 23:41:53.851945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.696 [2024-09-28 23:41:53.852063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.696 [2024-09-28 23:41:53.852092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:05.696 [2024-09-28 23:41:53.852100] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:21:05.696 [2024-09-28 23:41:53.852107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.955 [2024-09-28 23:41:53.864270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.955 [2024-09-28 23:41:53.864301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:05.955 [2024-09-28 23:41:53.864310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.145 ms 00:21:05.955 [2024-09-28 23:41:53.864317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.955 [2024-09-28 23:41:53.876416] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:21:05.955 [2024-09-28 23:41:53.876450] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:05.955 [2024-09-28 23:41:53.876462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.955 [2024-09-28 23:41:53.876470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:05.955 [2024-09-28 23:41:53.876479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.041 ms 00:21:05.955 [2024-09-28 23:41:53.876486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.955 [2024-09-28 23:41:53.900971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.955 [2024-09-28 23:41:53.901006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:05.955 [2024-09-28 23:41:53.901017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.438 ms 00:21:05.955 [2024-09-28 23:41:53.901024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.955 [2024-09-28 23:41:53.912419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.955 [2024-09-28 23:41:53.912449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:05.955 [2024-09-28 23:41:53.912458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.374 ms 00:21:05.955 [2024-09-28 23:41:53.912465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.955 [2024-09-28 23:41:53.923545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.955 [2024-09-28 23:41:53.923573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:05.955 [2024-09-28 23:41:53.923583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.050 ms 00:21:05.955 [2024-09-28 23:41:53.923591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.955 [2024-09-28 23:41:53.924182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.955 [2024-09-28 23:41:53.924200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:05.955 [2024-09-28 23:41:53.924209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.514 ms 00:21:05.955 [2024-09-28 23:41:53.924216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.955 [2024-09-28 23:41:53.978762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.955 [2024-09-28 23:41:53.978806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:05.955 [2024-09-28 23:41:53.978817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 54.524 ms 00:21:05.955 [2024-09-28 23:41:53.978824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.955 [2024-09-28 23:41:53.989138] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:05.955 [2024-09-28 23:41:53.991576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.956 [2024-09-28 23:41:53.991606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:05.956 [2024-09-28 23:41:53.991617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.708 ms 00:21:05.956 [2024-09-28 23:41:53.991629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.956 [2024-09-28 23:41:53.991715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.956 [2024-09-28 23:41:53.991725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:05.956 [2024-09-28 23:41:53.991734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:21:05.956 [2024-09-28 23:41:53.991741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.956 [2024-09-28 23:41:53.993125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.956 [2024-09-28 23:41:53.993157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:05.956 [2024-09-28 23:41:53.993167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.347 ms 00:21:05.956 [2024-09-28 23:41:53.993175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.956 [2024-09-28 23:41:53.993201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.956 [2024-09-28 23:41:53.993208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:05.956 [2024-09-28 23:41:53.993216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:05.956 [2024-09-28 23:41:53.993224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.956 [2024-09-28 23:41:53.993254] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:05.956 [2024-09-28 23:41:53.993264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.956 [2024-09-28 23:41:53.993271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:05.956 [2024-09-28 23:41:53.993281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:21:05.956 [2024-09-28 23:41:53.993288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.956 [2024-09-28 23:41:54.016438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.956 [2024-09-28 23:41:54.016582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:05.956 [2024-09-28 23:41:54.016599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.133 ms 00:21:05.956 [2024-09-28 23:41:54.016607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.956 [2024-09-28 23:41:54.016670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.956 [2024-09-28 23:41:54.016679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:05.956 [2024-09-28 23:41:54.016687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:21:05.956 [2024-09-28 23:41:54.016695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
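One step just above deserves a call-out for this particular test: "Set FTL dirty state". On startup the device marks its superblock dirty, and the clean marker is only written back during an orderly shutdown (the "Set FTL clean state" step that appears further down, once spdk_dd finishes). That marker appears to be what lets the next startup distinguish a clean shutdown from the simulated power cut this ftl_dirty_shutdown test exercises. Both transitions can be pulled from a saved console log (file name hypothetical):

  # Show the dirty/clean superblock transitions recorded in this run.
  grep -E 'Set FTL (dirty|clean) state' build.log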
00:21:05.956 [2024-09-28 23:41:54.017570] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 255.967 ms, result 0 00:21:31.350  Copying: 1004/1048576 [kB] (1004 kBps) Copying: 5540/1048576 [kB] (4536 kBps) Copying: 52/1024 [MB] (47 MBps) Copying: 104/1024 [MB] (51 MBps) Copying: 157/1024 [MB] (53 MBps) Copying: 208/1024 [MB] (50 MBps) Copying: 261/1024 [MB] (52 MBps) Copying: 314/1024 [MB] (53 MBps) Copying: 367/1024 [MB] (52 MBps) Copying: 401/1024 [MB] (34 MBps) Copying: 429/1024 [MB] (28 MBps) Copying: 462/1024 [MB] (32 MBps) Copying: 496/1024 [MB] (34 MBps) Copying: 534/1024 [MB] (38 MBps) Copying: 572/1024 [MB] (37 MBps) Copying: 604/1024 [MB] (32 MBps) Copying: 652/1024 [MB] (48 MBps) Copying: 696/1024 [MB] (43 MBps) Copying: 741/1024 [MB] (44 MBps) Copying: 783/1024 [MB] (42 MBps) Copying: 827/1024 [MB] (43 MBps) Copying: 880/1024 [MB] (52 MBps) Copying: 933/1024 [MB] (53 MBps) Copying: 985/1024 [MB] (51 MBps) Copying: 1024/1024 [MB] (average 41 MBps)[2024-09-28 23:42:19.514806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.350 [2024-09-28 23:42:19.514860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:31.350 [2024-09-28 23:42:19.514874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:31.350 [2024-09-28 23:42:19.514883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.350 [2024-09-28 23:42:19.514903] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:31.610 [2024-09-28 23:42:19.517472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.610 [2024-09-28 23:42:19.517504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:31.610 [2024-09-28 23:42:19.517522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.555 ms 00:21:31.610 [2024-09-28 23:42:19.517530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.610 [2024-09-28 23:42:19.517743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.610 [2024-09-28 23:42:19.517752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:31.610 [2024-09-28 23:42:19.517760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.193 ms 00:21:31.610 [2024-09-28 23:42:19.517767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.610 [2024-09-28 23:42:19.527610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.610 [2024-09-28 23:42:19.527715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:31.610 [2024-09-28 23:42:19.527774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.827 ms 00:21:31.610 [2024-09-28 23:42:19.527802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.610 [2024-09-28 23:42:19.534000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.610 [2024-09-28 23:42:19.534094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:31.610 [2024-09-28 23:42:19.534148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.147 ms 00:21:31.610 [2024-09-28 23:42:19.534170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.610 [2024-09-28 23:42:19.557566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.610 [2024-09-28 
23:42:19.557669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:31.610 [2024-09-28 23:42:19.557725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.348 ms 00:21:31.610 [2024-09-28 23:42:19.557747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.610 [2024-09-28 23:42:19.575171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.610 [2024-09-28 23:42:19.575285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:31.610 [2024-09-28 23:42:19.575342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.386 ms 00:21:31.610 [2024-09-28 23:42:19.575365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.610 [2024-09-28 23:42:19.577310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.610 [2024-09-28 23:42:19.577403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:31.610 [2024-09-28 23:42:19.577517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.906 ms 00:21:31.610 [2024-09-28 23:42:19.577541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.610 [2024-09-28 23:42:19.600086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.610 [2024-09-28 23:42:19.600178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:31.610 [2024-09-28 23:42:19.600224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.517 ms 00:21:31.610 [2024-09-28 23:42:19.600245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.610 [2024-09-28 23:42:19.623148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.610 [2024-09-28 23:42:19.623241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:31.610 [2024-09-28 23:42:19.623254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.866 ms 00:21:31.610 [2024-09-28 23:42:19.623261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.610 [2024-09-28 23:42:19.645798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.610 [2024-09-28 23:42:19.645889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:31.610 [2024-09-28 23:42:19.645933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.510 ms 00:21:31.610 [2024-09-28 23:42:19.645954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.610 [2024-09-28 23:42:19.667716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.610 [2024-09-28 23:42:19.667806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:31.610 [2024-09-28 23:42:19.667850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.705 ms 00:21:31.610 [2024-09-28 23:42:19.667871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.610 [2024-09-28 23:42:19.667925] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:31.610 [2024-09-28 23:42:19.667957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:21:31.610 [2024-09-28 23:42:19.667992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:21:31.610 [2024-09-28 23:42:19.668021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:31.610 [2024-09-28 23:42:19.668049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:31.610 [2024-09-28 23:42:19.668111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:31.610 [2024-09-28 23:42:19.668141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:31.610 [2024-09-28 23:42:19.668169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:31.610 [2024-09-28 23:42:19.668197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:31.610 [2024-09-28 23:42:19.668225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:31.610 [2024-09-28 23:42:19.668253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:31.610 [2024-09-28 23:42:19.668367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:31.611 [2024-09-28 23:42:19.668395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:31.611 [2024-09-28 23:42:19.668423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:31.611 [2024-09-28 23:42:19.668451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:31.611 [2024-09-28 23:42:19.668478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:31.611 [2024-09-28 23:42:19.668599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:31.611 [2024-09-28 23:42:19.668657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:31.611 [2024-09-28 23:42:19.668686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:31.611 [2024-09-28 23:42:19.668714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:31.611 [2024-09-28 23:42:19.668773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:31.611 [2024-09-28 23:42:19.668804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:31.611 [2024-09-28 23:42:19.668831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:31.611 [2024-09-28 23:42:19.668859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:31.611 [2024-09-28 23:42:19.668915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:31.611 [2024-09-28 23:42:19.668944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:31.611 [2024-09-28 23:42:19.668971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:31.611 [2024-09-28 23:42:19.668999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:31.611 [2024-09-28 23:42:19.669068] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:31.611 [2024-09-28 23:42:19.669183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:31.611 [2024-09-28 23:42:19.669213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:31.611 [2024-09-28 23:42:19.669241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:31.611 [2024-09-28 23:42:19.669301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:31.611 [2024-09-28 23:42:19.669330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:31.611 [2024-09-28 23:42:19.669357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:31.611 [2024-09-28 23:42:19.669406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:31.611 [2024-09-28 23:42:19.669573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:31.611 [2024-09-28 23:42:19.669627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:31.611 [2024-09-28 23:42:19.669659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:31.611 [2024-09-28 23:42:19.669732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:31.611 [2024-09-28 23:42:19.669762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:31.611 [2024-09-28 23:42:19.669789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:31.611 [2024-09-28 23:42:19.669846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:31.611 [2024-09-28 23:42:19.669875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:31.611 [2024-09-28 23:42:19.669903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:31.611 [2024-09-28 23:42:19.669950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:31.611 [2024-09-28 23:42:19.669980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:31.611 [2024-09-28 23:42:19.670008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:31.611 [2024-09-28 23:42:19.670035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:31.611 [2024-09-28 23:42:19.670090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:31.611 [2024-09-28 23:42:19.670120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:31.611 [2024-09-28 23:42:19.670263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:31.611 [2024-09-28 23:42:19.670294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:31.611 [2024-09-28 
23:42:19.670323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:31.611 [2024-09-28 23:42:19.670350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:31.611 [2024-09-28 23:42:19.670702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:31.611 [2024-09-28 23:42:19.670768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:31.611 [2024-09-28 23:42:19.670836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:31.611 [2024-09-28 23:42:19.670868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:31.611 [2024-09-28 23:42:19.670920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:31.611 [2024-09-28 23:42:19.670948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:31.611 [2024-09-28 23:42:19.670977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:31.611 [2024-09-28 23:42:19.671034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:31.611 [2024-09-28 23:42:19.671063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:31.611 [2024-09-28 23:42:19.671090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:31.611 [2024-09-28 23:42:19.671137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:31.611 [2024-09-28 23:42:19.671167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:31.611 [2024-09-28 23:42:19.671195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:31.611 [2024-09-28 23:42:19.671223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:31.611 [2024-09-28 23:42:19.671273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:31.611 [2024-09-28 23:42:19.671302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:31.611 [2024-09-28 23:42:19.671395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:31.611 [2024-09-28 23:42:19.671424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:31.611 [2024-09-28 23:42:19.671452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:31.611 [2024-09-28 23:42:19.671480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:31.611 [2024-09-28 23:42:19.671517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:31.611 [2024-09-28 23:42:19.671574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:31.611 [2024-09-28 23:42:19.671606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 
00:21:31.611 [2024-09-28 23:42:19.671634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:31.611 [2024-09-28 23:42:19.671661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:31.611 [2024-09-28 23:42:19.671716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:31.611 [2024-09-28 23:42:19.671751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:31.611 [2024-09-28 23:42:19.671778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:31.611 [2024-09-28 23:42:19.671805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:31.611 [2024-09-28 23:42:19.671833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:31.611 [2024-09-28 23:42:19.671887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:31.611 [2024-09-28 23:42:19.671915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:31.611 [2024-09-28 23:42:19.671943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:31.611 [2024-09-28 23:42:19.671997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:31.611 [2024-09-28 23:42:19.672028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:31.611 [2024-09-28 23:42:19.672056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:31.612 [2024-09-28 23:42:19.672065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:31.612 [2024-09-28 23:42:19.672073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:31.612 [2024-09-28 23:42:19.672080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:31.612 [2024-09-28 23:42:19.672088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:31.612 [2024-09-28 23:42:19.672095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:31.612 [2024-09-28 23:42:19.672103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:31.612 [2024-09-28 23:42:19.672110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:31.612 [2024-09-28 23:42:19.672117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:31.612 [2024-09-28 23:42:19.672125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:31.612 [2024-09-28 23:42:19.672133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:31.612 [2024-09-28 23:42:19.672148] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:31.612 [2024-09-28 23:42:19.672156] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 19962993-5207-4b0b-af8a-a1f71d7fadb9 00:21:31.612 
[2024-09-28 23:42:19.672164] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:21:31.612 [2024-09-28 23:42:19.672171] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 138688 00:21:31.612 [2024-09-28 23:42:19.672177] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 136704 00:21:31.612 [2024-09-28 23:42:19.672186] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0145 00:21:31.612 [2024-09-28 23:42:19.672192] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:31.612 [2024-09-28 23:42:19.672200] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:31.612 [2024-09-28 23:42:19.672207] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:31.612 [2024-09-28 23:42:19.672213] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:31.612 [2024-09-28 23:42:19.672219] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:31.612 [2024-09-28 23:42:19.672227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.612 [2024-09-28 23:42:19.672237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:31.612 [2024-09-28 23:42:19.672253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.304 ms 00:21:31.612 [2024-09-28 23:42:19.672262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.612 [2024-09-28 23:42:19.684462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.612 [2024-09-28 23:42:19.684483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:31.612 [2024-09-28 23:42:19.684493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.175 ms 00:21:31.612 [2024-09-28 23:42:19.684500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.612 [2024-09-28 23:42:19.684874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.612 [2024-09-28 23:42:19.684889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:31.612 [2024-09-28 23:42:19.684897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.319 ms 00:21:31.612 [2024-09-28 23:42:19.684904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.612 [2024-09-28 23:42:19.712795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:31.612 [2024-09-28 23:42:19.712892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:31.612 [2024-09-28 23:42:19.712936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:31.612 [2024-09-28 23:42:19.712957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.612 [2024-09-28 23:42:19.713020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:31.612 [2024-09-28 23:42:19.713044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:31.612 [2024-09-28 23:42:19.713063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:31.612 [2024-09-28 23:42:19.713081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.612 [2024-09-28 23:42:19.713139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:31.612 [2024-09-28 23:42:19.713162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:31.612 [2024-09-28 23:42:19.713226] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:31.612 [2024-09-28 23:42:19.713248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.612 [2024-09-28 23:42:19.713274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:31.612 [2024-09-28 23:42:19.713294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:31.612 [2024-09-28 23:42:19.713316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:31.612 [2024-09-28 23:42:19.713333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.871 [2024-09-28 23:42:19.788316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:31.871 [2024-09-28 23:42:19.788466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:31.871 [2024-09-28 23:42:19.788529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:31.871 [2024-09-28 23:42:19.788552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.871 [2024-09-28 23:42:19.850112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:31.871 [2024-09-28 23:42:19.850264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:31.871 [2024-09-28 23:42:19.850310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:31.871 [2024-09-28 23:42:19.850332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.871 [2024-09-28 23:42:19.850411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:31.871 [2024-09-28 23:42:19.850435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:31.871 [2024-09-28 23:42:19.850455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:31.871 [2024-09-28 23:42:19.850473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.871 [2024-09-28 23:42:19.850534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:31.871 [2024-09-28 23:42:19.850596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:31.871 [2024-09-28 23:42:19.850619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:31.871 [2024-09-28 23:42:19.850642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.871 [2024-09-28 23:42:19.850739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:31.871 [2024-09-28 23:42:19.850884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:31.871 [2024-09-28 23:42:19.850904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:31.871 [2024-09-28 23:42:19.850924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.871 [2024-09-28 23:42:19.851003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:31.871 [2024-09-28 23:42:19.851028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:31.871 [2024-09-28 23:42:19.851049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:31.871 [2024-09-28 23:42:19.851068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.871 [2024-09-28 23:42:19.851116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:31.871 [2024-09-28 23:42:19.851175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Open cache bdev 00:21:31.871 [2024-09-28 23:42:19.851198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:31.871 [2024-09-28 23:42:19.851217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.871 [2024-09-28 23:42:19.851269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:31.871 [2024-09-28 23:42:19.851293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:31.871 [2024-09-28 23:42:19.851361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:31.871 [2024-09-28 23:42:19.851384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.871 [2024-09-28 23:42:19.851500] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 336.678 ms, result 0 00:21:32.808 00:21:32.808 00:21:32.808 23:42:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:21:34.708 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:21:34.708 23:42:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:34.708 [2024-09-28 23:42:22.552256] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:21:34.708 [2024-09-28 23:42:22.552459] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77415 ] 00:21:34.708 [2024-09-28 23:42:22.698860] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:34.708 [2024-09-28 23:42:22.872195] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:21:34.965 [2024-09-28 23:42:23.120589] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:34.965 [2024-09-28 23:42:23.120643] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:35.224 [2024-09-28 23:42:23.274168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.224 [2024-09-28 23:42:23.274212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:35.224 [2024-09-28 23:42:23.274225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:35.224 [2024-09-28 23:42:23.274237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.224 [2024-09-28 23:42:23.274281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.224 [2024-09-28 23:42:23.274292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:35.224 [2024-09-28 23:42:23.274300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:21:35.224 [2024-09-28 23:42:23.274308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.224 [2024-09-28 23:42:23.274323] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:35.224 [2024-09-28 23:42:23.275176] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:35.224 [2024-09-28 23:42:23.275193] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:21:35.224 [2024-09-28 23:42:23.275201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:35.224 [2024-09-28 23:42:23.275209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.874 ms 00:21:35.224 [2024-09-28 23:42:23.275215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.224 [2024-09-28 23:42:23.276234] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:35.224 [2024-09-28 23:42:23.288584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.224 [2024-09-28 23:42:23.288614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:35.224 [2024-09-28 23:42:23.288627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.350 ms 00:21:35.224 [2024-09-28 23:42:23.288636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.224 [2024-09-28 23:42:23.288699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.224 [2024-09-28 23:42:23.288711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:35.225 [2024-09-28 23:42:23.288719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:21:35.225 [2024-09-28 23:42:23.288726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.225 [2024-09-28 23:42:23.293290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.225 [2024-09-28 23:42:23.293322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:35.225 [2024-09-28 23:42:23.293331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.507 ms 00:21:35.225 [2024-09-28 23:42:23.293339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.225 [2024-09-28 23:42:23.293424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.225 [2024-09-28 23:42:23.293438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:35.225 [2024-09-28 23:42:23.293446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:21:35.225 [2024-09-28 23:42:23.293454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.225 [2024-09-28 23:42:23.293527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.225 [2024-09-28 23:42:23.293538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:35.225 [2024-09-28 23:42:23.293546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:21:35.225 [2024-09-28 23:42:23.293553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.225 [2024-09-28 23:42:23.293585] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:35.225 [2024-09-28 23:42:23.296904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.225 [2024-09-28 23:42:23.296931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:35.225 [2024-09-28 23:42:23.296940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.328 ms 00:21:35.225 [2024-09-28 23:42:23.296947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.225 [2024-09-28 23:42:23.296973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.225 [2024-09-28 23:42:23.296982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Decorate bands 00:21:35.225 [2024-09-28 23:42:23.296990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:21:35.225 [2024-09-28 23:42:23.296997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.225 [2024-09-28 23:42:23.297018] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:35.225 [2024-09-28 23:42:23.297036] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:35.225 [2024-09-28 23:42:23.297071] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:35.225 [2024-09-28 23:42:23.297085] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:35.225 [2024-09-28 23:42:23.297186] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:35.225 [2024-09-28 23:42:23.297196] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:35.225 [2024-09-28 23:42:23.297207] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:35.225 [2024-09-28 23:42:23.297219] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:35.225 [2024-09-28 23:42:23.297229] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:35.225 [2024-09-28 23:42:23.297237] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:35.225 [2024-09-28 23:42:23.297244] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:35.225 [2024-09-28 23:42:23.297251] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:35.225 [2024-09-28 23:42:23.297259] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:35.225 [2024-09-28 23:42:23.297266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.225 [2024-09-28 23:42:23.297274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:35.225 [2024-09-28 23:42:23.297283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.249 ms 00:21:35.225 [2024-09-28 23:42:23.297290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.225 [2024-09-28 23:42:23.297371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.225 [2024-09-28 23:42:23.297382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:35.225 [2024-09-28 23:42:23.297390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:21:35.225 [2024-09-28 23:42:23.297397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.225 [2024-09-28 23:42:23.297522] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:35.225 [2024-09-28 23:42:23.297534] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:35.225 [2024-09-28 23:42:23.297543] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:35.225 [2024-09-28 23:42:23.297551] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:35.225 [2024-09-28 23:42:23.297559] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:35.225 [2024-09-28 
23:42:23.297565] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:35.225 [2024-09-28 23:42:23.297572] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:35.225 [2024-09-28 23:42:23.297579] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:35.225 [2024-09-28 23:42:23.297587] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:35.225 [2024-09-28 23:42:23.297594] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:35.225 [2024-09-28 23:42:23.297601] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:35.225 [2024-09-28 23:42:23.297608] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:35.225 [2024-09-28 23:42:23.297614] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:35.225 [2024-09-28 23:42:23.297626] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:35.225 [2024-09-28 23:42:23.297635] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:35.225 [2024-09-28 23:42:23.297641] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:35.225 [2024-09-28 23:42:23.297648] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:35.225 [2024-09-28 23:42:23.297654] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:35.225 [2024-09-28 23:42:23.297661] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:35.225 [2024-09-28 23:42:23.297668] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:35.225 [2024-09-28 23:42:23.297675] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:35.225 [2024-09-28 23:42:23.297681] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:35.225 [2024-09-28 23:42:23.297687] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:35.225 [2024-09-28 23:42:23.297694] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:35.225 [2024-09-28 23:42:23.297700] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:35.225 [2024-09-28 23:42:23.297706] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:35.225 [2024-09-28 23:42:23.297713] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:35.225 [2024-09-28 23:42:23.297719] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:35.225 [2024-09-28 23:42:23.297726] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:35.225 [2024-09-28 23:42:23.297733] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:35.225 [2024-09-28 23:42:23.297739] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:35.225 [2024-09-28 23:42:23.297746] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:35.225 [2024-09-28 23:42:23.297752] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:35.225 [2024-09-28 23:42:23.297758] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:35.225 [2024-09-28 23:42:23.297765] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:35.225 [2024-09-28 23:42:23.297771] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:35.225 [2024-09-28 23:42:23.297777] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.25 MiB 00:21:35.225 [2024-09-28 23:42:23.297784] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:35.225 [2024-09-28 23:42:23.297792] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:35.225 [2024-09-28 23:42:23.297798] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:35.225 [2024-09-28 23:42:23.297805] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:35.225 [2024-09-28 23:42:23.297811] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:35.225 [2024-09-28 23:42:23.297817] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:35.225 [2024-09-28 23:42:23.297824] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:35.225 [2024-09-28 23:42:23.297831] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:35.225 [2024-09-28 23:42:23.297840] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:35.225 [2024-09-28 23:42:23.297848] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:35.225 [2024-09-28 23:42:23.297855] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:35.225 [2024-09-28 23:42:23.297862] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:35.225 [2024-09-28 23:42:23.297869] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:35.225 [2024-09-28 23:42:23.297876] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:35.225 [2024-09-28 23:42:23.297882] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:35.225 [2024-09-28 23:42:23.297888] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:35.225 [2024-09-28 23:42:23.297896] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:35.225 [2024-09-28 23:42:23.297905] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:35.225 [2024-09-28 23:42:23.297913] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:35.225 [2024-09-28 23:42:23.297920] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:35.225 [2024-09-28 23:42:23.297927] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:35.225 [2024-09-28 23:42:23.297934] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:35.226 [2024-09-28 23:42:23.297941] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:35.226 [2024-09-28 23:42:23.297948] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:35.226 [2024-09-28 23:42:23.297955] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:35.226 [2024-09-28 23:42:23.297961] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:35.226 [2024-09-28 
23:42:23.297968] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:35.226 [2024-09-28 23:42:23.297975] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:35.226 [2024-09-28 23:42:23.297982] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:35.226 [2024-09-28 23:42:23.297989] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:35.226 [2024-09-28 23:42:23.297996] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:35.226 [2024-09-28 23:42:23.298004] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:35.226 [2024-09-28 23:42:23.298011] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:35.226 [2024-09-28 23:42:23.298019] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:35.226 [2024-09-28 23:42:23.298026] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:35.226 [2024-09-28 23:42:23.298033] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:35.226 [2024-09-28 23:42:23.298040] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:35.226 [2024-09-28 23:42:23.298047] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:35.226 [2024-09-28 23:42:23.298054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.226 [2024-09-28 23:42:23.298062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:35.226 [2024-09-28 23:42:23.298071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.616 ms 00:21:35.226 [2024-09-28 23:42:23.298078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.226 [2024-09-28 23:42:23.333608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.226 [2024-09-28 23:42:23.333650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:35.226 [2024-09-28 23:42:23.333664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.487 ms 00:21:35.226 [2024-09-28 23:42:23.333674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.226 [2024-09-28 23:42:23.333780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.226 [2024-09-28 23:42:23.333792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:35.226 [2024-09-28 23:42:23.333802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:21:35.226 [2024-09-28 23:42:23.333810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.226 [2024-09-28 23:42:23.363983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.226 [2024-09-28 
23:42:23.364129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:35.226 [2024-09-28 23:42:23.364148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.109 ms 00:21:35.226 [2024-09-28 23:42:23.364156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.226 [2024-09-28 23:42:23.364187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.226 [2024-09-28 23:42:23.364197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:35.226 [2024-09-28 23:42:23.364205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:21:35.226 [2024-09-28 23:42:23.364213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.226 [2024-09-28 23:42:23.364551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.226 [2024-09-28 23:42:23.364567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:35.226 [2024-09-28 23:42:23.364576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.293 ms 00:21:35.226 [2024-09-28 23:42:23.364588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.226 [2024-09-28 23:42:23.364705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.226 [2024-09-28 23:42:23.364715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:35.226 [2024-09-28 23:42:23.364724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:21:35.226 [2024-09-28 23:42:23.364733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.226 [2024-09-28 23:42:23.376814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.226 [2024-09-28 23:42:23.376840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:35.226 [2024-09-28 23:42:23.376850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.063 ms 00:21:35.226 [2024-09-28 23:42:23.376857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.226 [2024-09-28 23:42:23.389091] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:21:35.226 [2024-09-28 23:42:23.389122] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:35.226 [2024-09-28 23:42:23.389133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.226 [2024-09-28 23:42:23.389141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:35.226 [2024-09-28 23:42:23.389151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.173 ms 00:21:35.226 [2024-09-28 23:42:23.389157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.485 [2024-09-28 23:42:23.413173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.485 [2024-09-28 23:42:23.413203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:35.485 [2024-09-28 23:42:23.413214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.981 ms 00:21:35.485 [2024-09-28 23:42:23.413222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.485 [2024-09-28 23:42:23.424212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.485 [2024-09-28 23:42:23.424239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Restore band info metadata 00:21:35.485 [2024-09-28 23:42:23.424249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.954 ms 00:21:35.485 [2024-09-28 23:42:23.424256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.485 [2024-09-28 23:42:23.435450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.485 [2024-09-28 23:42:23.435579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:35.485 [2024-09-28 23:42:23.435594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.157 ms 00:21:35.485 [2024-09-28 23:42:23.435602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.485 [2024-09-28 23:42:23.436190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.485 [2024-09-28 23:42:23.436209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:35.485 [2024-09-28 23:42:23.436219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.512 ms 00:21:35.485 [2024-09-28 23:42:23.436226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.485 [2024-09-28 23:42:23.490984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.485 [2024-09-28 23:42:23.491153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:35.485 [2024-09-28 23:42:23.491170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.741 ms 00:21:35.485 [2024-09-28 23:42:23.491178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.485 [2024-09-28 23:42:23.501228] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:35.485 [2024-09-28 23:42:23.503410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.485 [2024-09-28 23:42:23.503433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:35.485 [2024-09-28 23:42:23.503445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.199 ms 00:21:35.485 [2024-09-28 23:42:23.503457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.485 [2024-09-28 23:42:23.503557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.485 [2024-09-28 23:42:23.503570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:35.485 [2024-09-28 23:42:23.503580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:21:35.485 [2024-09-28 23:42:23.503589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.485 [2024-09-28 23:42:23.504126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.485 [2024-09-28 23:42:23.504150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:35.485 [2024-09-28 23:42:23.504159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.498 ms 00:21:35.485 [2024-09-28 23:42:23.504167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.485 [2024-09-28 23:42:23.504191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.485 [2024-09-28 23:42:23.504199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:35.485 [2024-09-28 23:42:23.504209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:35.485 [2024-09-28 23:42:23.504217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
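(Annotation, not part of the captured log: the records above all share the mngt/ftl_mngt.c trace_step format, where each management step emits a "name: <step>" record immediately followed by "duration: <n> ms" and "status:" records. That regularity makes it easy to pull out the slowest steps when triaging a run. A minimal sketch, assuming one record per line as in the raw log; the log path is a placeholder, and this helper is not part of the SPDK tree:

#!/usr/bin/env bash
# Summarize the ten slowest FTL management steps from an autotest log.
# Pairs each "name:" trace_step record with the "duration:" record
# that follows it. Records from finish_msg use "duration =" and are
# deliberately not matched.
log=${1:-autotest.log}

awk '
  /trace_step/ && / name: /     { sub(/.* name: /, ""); name = $0 }
  /trace_step/ && / duration: / { sub(/.* duration: /, ""); sub(/ ms.*/, "")
                                  printf "%10.3f ms  %s\n", $0, name }
' "$log" | sort -rn | head

Run against this section, the top entries would be the Restore P2L checkpoints (54.741 ms) and Initialize metadata (35.487 ms) steps traced above. The startup trace continues below.)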
00:21:35.485 [2024-09-28 23:42:23.504247] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:35.485 [2024-09-28 23:42:23.504256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.485 [2024-09-28 23:42:23.504263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:35.485 [2024-09-28 23:42:23.504274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:21:35.485 [2024-09-28 23:42:23.504282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.485 [2024-09-28 23:42:23.527249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.485 [2024-09-28 23:42:23.527284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:35.485 [2024-09-28 23:42:23.527296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.950 ms 00:21:35.485 [2024-09-28 23:42:23.527304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.485 [2024-09-28 23:42:23.527373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.485 [2024-09-28 23:42:23.527383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:35.485 [2024-09-28 23:42:23.527392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:21:35.485 [2024-09-28 23:42:23.527400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.485 [2024-09-28 23:42:23.528422] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 253.860 ms, result 0 00:21:59.169  Copying: 44/1024 [MB] (44 MBps) Copying: 89/1024 [MB] (45 MBps) Copying: 137/1024 [MB] (47 MBps) Copying: 183/1024 [MB] (45 MBps) Copying: 229/1024 [MB] (46 MBps) Copying: 276/1024 [MB] (46 MBps) Copying: 321/1024 [MB] (45 MBps) Copying: 367/1024 [MB] (45 MBps) Copying: 413/1024 [MB] (46 MBps) Copying: 462/1024 [MB] (48 MBps) Copying: 508/1024 [MB] (46 MBps) Copying: 552/1024 [MB] (43 MBps) Copying: 598/1024 [MB] (45 MBps) Copying: 647/1024 [MB] (48 MBps) Copying: 674/1024 [MB] (26 MBps) Copying: 695/1024 [MB] (20 MBps) Copying: 733/1024 [MB] (38 MBps) Copying: 779/1024 [MB] (45 MBps) Copying: 825/1024 [MB] (46 MBps) Copying: 875/1024 [MB] (49 MBps) Copying: 922/1024 [MB] (46 MBps) Copying: 966/1024 [MB] (44 MBps) Copying: 1013/1024 [MB] (46 MBps) Copying: 1024/1024 [MB] (average 44 MBps)[2024-09-28 23:42:47.097122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.169 [2024-09-28 23:42:47.097204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:59.169 [2024-09-28 23:42:47.097227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:59.170 [2024-09-28 23:42:47.097247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.170 [2024-09-28 23:42:47.097283] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:59.170 [2024-09-28 23:42:47.101786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.170 [2024-09-28 23:42:47.101834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:59.170 [2024-09-28 23:42:47.101851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.479 ms 00:21:59.170 [2024-09-28 23:42:47.101866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.170 [2024-09-28 
23:42:47.102243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.170 [2024-09-28 23:42:47.102267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:59.170 [2024-09-28 23:42:47.102281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.343 ms 00:21:59.170 [2024-09-28 23:42:47.102294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.170 [2024-09-28 23:42:47.107321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.170 [2024-09-28 23:42:47.107345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:59.170 [2024-09-28 23:42:47.107356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.000 ms 00:21:59.170 [2024-09-28 23:42:47.107366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.170 [2024-09-28 23:42:47.114285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.170 [2024-09-28 23:42:47.114312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:59.170 [2024-09-28 23:42:47.114323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.901 ms 00:21:59.170 [2024-09-28 23:42:47.114333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.170 [2024-09-28 23:42:47.137892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.170 [2024-09-28 23:42:47.137925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:59.170 [2024-09-28 23:42:47.137936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.496 ms 00:21:59.170 [2024-09-28 23:42:47.137943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.170 [2024-09-28 23:42:47.151851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.170 [2024-09-28 23:42:47.151990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:59.170 [2024-09-28 23:42:47.152007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.889 ms 00:21:59.170 [2024-09-28 23:42:47.152015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.170 [2024-09-28 23:42:47.154230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.170 [2024-09-28 23:42:47.154262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:59.170 [2024-09-28 23:42:47.154271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.192 ms 00:21:59.170 [2024-09-28 23:42:47.154279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.170 [2024-09-28 23:42:47.177396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.170 [2024-09-28 23:42:47.177435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:59.170 [2024-09-28 23:42:47.177445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.103 ms 00:21:59.170 [2024-09-28 23:42:47.177451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.170 [2024-09-28 23:42:47.200033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.170 [2024-09-28 23:42:47.200064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:59.170 [2024-09-28 23:42:47.200074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.563 ms 00:21:59.170 [2024-09-28 23:42:47.200081] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.170 [2024-09-28 23:42:47.222213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.170 [2024-09-28 23:42:47.222242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:59.170 [2024-09-28 23:42:47.222252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.115 ms 00:21:59.170 [2024-09-28 23:42:47.222259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.170 [2024-09-28 23:42:47.244836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.170 [2024-09-28 23:42:47.244867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:59.170 [2024-09-28 23:42:47.244877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.540 ms 00:21:59.170 [2024-09-28 23:42:47.244884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.170 [2024-09-28 23:42:47.244900] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:59.170 [2024-09-28 23:42:47.244911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:21:59.170 [2024-09-28 23:42:47.244921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:21:59.170 [2024-09-28 23:42:47.244930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:59.170 [2024-09-28 23:42:47.244937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:59.170 [2024-09-28 23:42:47.244945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:59.170 [2024-09-28 23:42:47.244952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:59.170 [2024-09-28 23:42:47.244959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:59.170 [2024-09-28 23:42:47.244967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:59.170 [2024-09-28 23:42:47.244974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:59.170 [2024-09-28 23:42:47.244982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:59.170 [2024-09-28 23:42:47.244989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:59.170 [2024-09-28 23:42:47.244996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:59.170 [2024-09-28 23:42:47.245004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:59.170 [2024-09-28 23:42:47.245011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:59.170 [2024-09-28 23:42:47.245018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:59.170 [2024-09-28 23:42:47.245026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:59.170 [2024-09-28 23:42:47.245033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:59.170 [2024-09-28 23:42:47.245041] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:59.170 [2024-09-28 23:42:47.245048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:59.170 [2024-09-28 23:42:47.245056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:59.170 [2024-09-28 23:42:47.245064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:59.170 [2024-09-28 23:42:47.245071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:59.170 [2024-09-28 23:42:47.245078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:59.170 [2024-09-28 23:42:47.245086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:59.170 [2024-09-28 23:42:47.245093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:59.170 [2024-09-28 23:42:47.245101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:59.170 [2024-09-28 23:42:47.245109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:59.170 [2024-09-28 23:42:47.245117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:59.170 [2024-09-28 23:42:47.245124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:59.170 [2024-09-28 23:42:47.245132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:59.170 [2024-09-28 23:42:47.245139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:59.170 [2024-09-28 23:42:47.245146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:59.170 [2024-09-28 23:42:47.245153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:59.170 [2024-09-28 23:42:47.245161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:59.170 [2024-09-28 23:42:47.245168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:59.170 [2024-09-28 23:42:47.245176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:59.170 [2024-09-28 23:42:47.245183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:59.170 [2024-09-28 23:42:47.245191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:59.170 [2024-09-28 23:42:47.245198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:59.170 [2024-09-28 23:42:47.245205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:59.170 [2024-09-28 23:42:47.245212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:59.170 [2024-09-28 23:42:47.245220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:59.170 [2024-09-28 
23:42:47.245227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:59.170 [2024-09-28 23:42:47.245234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:59.170 [2024-09-28 23:42:47.245241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:59.170 [2024-09-28 23:42:47.245249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:59.170 [2024-09-28 23:42:47.245257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:59.170 [2024-09-28 23:42:47.245264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:59.170 [2024-09-28 23:42:47.245271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:59.170 [2024-09-28 23:42:47.245278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:59.170 [2024-09-28 23:42:47.245287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:59.170 [2024-09-28 23:42:47.245294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:59.171 [2024-09-28 23:42:47.245302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:59.171 [2024-09-28 23:42:47.245310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:59.171 [2024-09-28 23:42:47.245317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:59.171 [2024-09-28 23:42:47.245325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:59.171 [2024-09-28 23:42:47.245332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:59.171 [2024-09-28 23:42:47.245339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:59.171 [2024-09-28 23:42:47.245346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:59.171 [2024-09-28 23:42:47.245353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:59.171 [2024-09-28 23:42:47.245360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:59.171 [2024-09-28 23:42:47.245368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:59.171 [2024-09-28 23:42:47.245375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:59.171 [2024-09-28 23:42:47.245382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:59.171 [2024-09-28 23:42:47.245390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:59.171 [2024-09-28 23:42:47.245397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:59.171 [2024-09-28 23:42:47.245404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 
00:21:59.171 [2024-09-28 23:42:47.245412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:59.171 [2024-09-28 23:42:47.245420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:59.171 [2024-09-28 23:42:47.245427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:59.171 [2024-09-28 23:42:47.245435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:59.171 [2024-09-28 23:42:47.245442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:59.171 [2024-09-28 23:42:47.245449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:59.171 [2024-09-28 23:42:47.245457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:59.171 [2024-09-28 23:42:47.245464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:59.171 [2024-09-28 23:42:47.245471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:59.171 [2024-09-28 23:42:47.245479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:59.171 [2024-09-28 23:42:47.245486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:59.171 [2024-09-28 23:42:47.245494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:59.171 [2024-09-28 23:42:47.245501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:59.171 [2024-09-28 23:42:47.245523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:59.171 [2024-09-28 23:42:47.245531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:59.171 [2024-09-28 23:42:47.245539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:59.171 [2024-09-28 23:42:47.245547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:59.171 [2024-09-28 23:42:47.245555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:59.171 [2024-09-28 23:42:47.245562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:59.171 [2024-09-28 23:42:47.245570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:59.171 [2024-09-28 23:42:47.245577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:59.171 [2024-09-28 23:42:47.245585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:59.171 [2024-09-28 23:42:47.245592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:59.171 [2024-09-28 23:42:47.245599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:59.171 [2024-09-28 23:42:47.245606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 
wr_cnt: 0 state: free 00:21:59.171 [2024-09-28 23:42:47.245614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:59.171 [2024-09-28 23:42:47.245621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:59.171 [2024-09-28 23:42:47.245643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:59.171 [2024-09-28 23:42:47.245650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:59.171 [2024-09-28 23:42:47.245657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:59.171 [2024-09-28 23:42:47.245665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:59.171 [2024-09-28 23:42:47.245672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:59.171 [2024-09-28 23:42:47.245680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:59.171 [2024-09-28 23:42:47.245696] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:59.171 [2024-09-28 23:42:47.245703] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 19962993-5207-4b0b-af8a-a1f71d7fadb9 00:21:59.171 [2024-09-28 23:42:47.245712] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:21:59.171 [2024-09-28 23:42:47.245720] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:59.171 [2024-09-28 23:42:47.245727] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:59.171 [2024-09-28 23:42:47.245734] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:59.171 [2024-09-28 23:42:47.245741] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:59.171 [2024-09-28 23:42:47.245753] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:59.171 [2024-09-28 23:42:47.245760] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:59.171 [2024-09-28 23:42:47.245766] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:59.171 [2024-09-28 23:42:47.245773] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:59.171 [2024-09-28 23:42:47.245780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.171 [2024-09-28 23:42:47.245793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:59.171 [2024-09-28 23:42:47.245810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.881 ms 00:21:59.171 [2024-09-28 23:42:47.245817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.171 [2024-09-28 23:42:47.258168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.171 [2024-09-28 23:42:47.258197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:59.171 [2024-09-28 23:42:47.258207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.336 ms 00:21:59.171 [2024-09-28 23:42:47.258219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.171 [2024-09-28 23:42:47.258588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.171 [2024-09-28 23:42:47.258603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L 
checkpointing 00:21:59.171 [2024-09-28 23:42:47.258612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.342 ms 00:21:59.171 [2024-09-28 23:42:47.258620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.171 [2024-09-28 23:42:47.286432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:59.171 [2024-09-28 23:42:47.286468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:59.171 [2024-09-28 23:42:47.286478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:59.171 [2024-09-28 23:42:47.286485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.171 [2024-09-28 23:42:47.286554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:59.171 [2024-09-28 23:42:47.286563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:59.171 [2024-09-28 23:42:47.286571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:59.171 [2024-09-28 23:42:47.286578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.171 [2024-09-28 23:42:47.286623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:59.171 [2024-09-28 23:42:47.286632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:59.171 [2024-09-28 23:42:47.286640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:59.171 [2024-09-28 23:42:47.286650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.171 [2024-09-28 23:42:47.286664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:59.171 [2024-09-28 23:42:47.286672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:59.171 [2024-09-28 23:42:47.286683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:59.171 [2024-09-28 23:42:47.286694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.430 [2024-09-28 23:42:47.361789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:59.430 [2024-09-28 23:42:47.361834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:59.430 [2024-09-28 23:42:47.361851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:59.430 [2024-09-28 23:42:47.361859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.430 [2024-09-28 23:42:47.424415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:59.430 [2024-09-28 23:42:47.424460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:59.430 [2024-09-28 23:42:47.424472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:59.430 [2024-09-28 23:42:47.424479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.430 [2024-09-28 23:42:47.424575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:59.430 [2024-09-28 23:42:47.424586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:59.430 [2024-09-28 23:42:47.424594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:59.430 [2024-09-28 23:42:47.424602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.430 [2024-09-28 23:42:47.424638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:59.430 [2024-09-28 
23:42:47.424647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:59.430 [2024-09-28 23:42:47.424655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:59.430 [2024-09-28 23:42:47.424662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.430 [2024-09-28 23:42:47.424743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:59.430 [2024-09-28 23:42:47.424752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:59.430 [2024-09-28 23:42:47.424759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:59.430 [2024-09-28 23:42:47.424767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.430 [2024-09-28 23:42:47.424796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:59.430 [2024-09-28 23:42:47.424804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:59.430 [2024-09-28 23:42:47.424812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:59.430 [2024-09-28 23:42:47.424819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.430 [2024-09-28 23:42:47.424850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:59.430 [2024-09-28 23:42:47.424858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:59.430 [2024-09-28 23:42:47.424866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:59.430 [2024-09-28 23:42:47.424873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.430 [2024-09-28 23:42:47.424911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:59.430 [2024-09-28 23:42:47.424921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:59.430 [2024-09-28 23:42:47.424929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:59.430 [2024-09-28 23:42:47.424936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.430 [2024-09-28 23:42:47.425037] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 327.907 ms, result 0 00:22:00.365 00:22:00.365 00:22:00.365 23:42:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:22:02.267 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:22:02.268 23:42:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:22:02.268 23:42:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:22:02.268 23:42:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:02.268 23:42:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:22:02.268 23:42:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:22:02.526 23:42:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:02.526 23:42:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:22:02.526 Process with pid 76149 is not found 00:22:02.526 23:42:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 76149 
00:22:02.526 23:42:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@950 -- # '[' -z 76149 ']' 00:22:02.526 23:42:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # kill -0 76149 00:22:02.526 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (76149) - No such process 00:22:02.526 23:42:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@977 -- # echo 'Process with pid 76149 is not found' 00:22:02.526 23:42:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:22:02.786 Remove shared memory files 00:22:02.786 23:42:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:22:02.786 23:42:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:22:02.786 23:42:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:22:02.786 23:42:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:22:02.786 23:42:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:22:02.786 23:42:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:22:02.786 23:42:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:22:02.786 ************************************ 00:22:02.786 END TEST ftl_dirty_shutdown 00:22:02.786 ************************************ 00:22:02.786 00:22:02.786 real 2m23.659s 00:22:02.786 user 2m41.043s 00:22:02.786 sys 0m22.401s 00:22:02.786 23:42:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:02.786 23:42:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:02.786 23:42:50 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:22:02.786 23:42:50 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:22:02.786 23:42:50 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:02.786 23:42:50 ftl -- common/autotest_common.sh@10 -- # set +x 00:22:02.786 ************************************ 00:22:02.786 START TEST ftl_upgrade_shutdown 00:22:02.786 ************************************ 00:22:02.786 23:42:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:22:02.786 * Looking for test storage... 
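(Annotation, not part of the captured log: the dirty-shutdown test that just finished above, "testfile2: OK" followed by the END TEST banner, hinges on the read-back-and-verify step visible in this section. The spdk_dd invocation and checksum comparison reduce to the pattern below; paths, flags, and block counts are copied verbatim from the log, but this is a sketch, not the dirty_shutdown.sh source:

SPDK=/home/vagrant/spdk_repo/spdk

# Read the second 262144-block region back out of the ftl0 bdev,
# which was restored from a dirty shutdown at the start of this run.
"$SPDK/build/bin/spdk_dd" --ib=ftl0 \
    --of="$SPDK/test/ftl/testfile2" \
    --count=262144 --skip=262144 \
    --json="$SPDK/test/ftl/config/ftl.json"

# Fail if the read-back data no longer matches the MD5 recorded
# before the shutdown; "testfile2: OK" above means it matched.
md5sum -c "$SPDK/test/ftl/testfile2.md5"

The ftl_upgrade_shutdown test setup continues below.)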
00:22:02.786 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:22:02.786 23:42:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:22:02.786 23:42:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1681 -- # lcov --version 00:22:02.786 23:42:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:22:02.786 23:42:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:22:02.786 23:42:50 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:02.786 23:42:50 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:02.786 23:42:50 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:02.786 23:42:50 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:22:02.786 23:42:50 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:22:02.786 23:42:50 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:22:02.786 23:42:50 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:22:02.786 23:42:50 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:22:02.786 23:42:50 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:22:02.786 23:42:50 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:22:02.786 23:42:50 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:02.786 23:42:50 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:22:02.786 23:42:50 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:22:02.786 23:42:50 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:02.786 23:42:50 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:02.786 23:42:50 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:22:02.786 23:42:50 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:22:02.786 23:42:50 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:02.786 23:42:50 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:22:02.786 23:42:50 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:22:02.786 23:42:50 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:22:02.786 23:42:50 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:22:02.786 23:42:50 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:02.786 23:42:50 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:22:02.786 23:42:50 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:22:02.786 23:42:50 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:02.786 23:42:50 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:02.786 23:42:50 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:22:02.786 23:42:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:02.786 23:42:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:22:02.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:02.786 --rc genhtml_branch_coverage=1 00:22:02.786 --rc genhtml_function_coverage=1 00:22:02.786 --rc genhtml_legend=1 00:22:02.786 --rc geninfo_all_blocks=1 00:22:02.786 --rc geninfo_unexecuted_blocks=1 00:22:02.786 00:22:02.787 ' 00:22:02.787 23:42:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:22:02.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:02.787 --rc genhtml_branch_coverage=1 00:22:02.787 --rc genhtml_function_coverage=1 00:22:02.787 --rc genhtml_legend=1 00:22:02.787 --rc geninfo_all_blocks=1 00:22:02.787 --rc geninfo_unexecuted_blocks=1 00:22:02.787 00:22:02.787 ' 00:22:02.787 23:42:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:22:02.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:02.787 --rc genhtml_branch_coverage=1 00:22:02.787 --rc genhtml_function_coverage=1 00:22:02.787 --rc genhtml_legend=1 00:22:02.787 --rc geninfo_all_blocks=1 00:22:02.787 --rc geninfo_unexecuted_blocks=1 00:22:02.787 00:22:02.787 ' 00:22:02.787 23:42:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:22:02.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:02.787 --rc genhtml_branch_coverage=1 00:22:02.787 --rc genhtml_function_coverage=1 00:22:02.787 --rc genhtml_legend=1 00:22:02.787 --rc geninfo_all_blocks=1 00:22:02.787 --rc geninfo_unexecuted_blocks=1 00:22:02.787 00:22:02.787 ' 00:22:02.787 23:42:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:22:02.787 23:42:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:22:03.046 23:42:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:22:03.046 23:42:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:22:03.046 23:42:50 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:22:03.046 23:42:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:22:03.046 23:42:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:03.046 23:42:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:22:03.046 23:42:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:22:03.046 23:42:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:03.046 23:42:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:03.046 23:42:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:22:03.046 23:42:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:22:03.046 23:42:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:03.046 23:42:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:03.046 23:42:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:22:03.046 23:42:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:22:03.046 23:42:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:03.046 23:42:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:03.046 23:42:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:22:03.046 23:42:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:22:03.046 23:42:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:03.046 23:42:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:03.046 23:42:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:03.046 23:42:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:03.046 23:42:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:22:03.046 23:42:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:22:03.046 23:42:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:03.046 23:42:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:03.046 23:42:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:03.046 23:42:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:22:03.046 23:42:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:22:03.046 23:42:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:22:03.046 23:42:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:22:03.046 23:42:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:22:03.046 23:42:50 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:22:03.046 23:42:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:22:03.046 23:42:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:22:03.046 23:42:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:22:03.046 23:42:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:22:03.046 23:42:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:22:03.046 23:42:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:22:03.046 23:42:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:22:03.046 23:42:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:22:03.046 23:42:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:22:03.046 23:42:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:22:03.046 23:42:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=77780 00:22:03.046 23:42:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:22:03.046 23:42:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:22:03.046 23:42:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 77780 00:22:03.046 23:42:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 77780 ']' 00:22:03.046 23:42:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:03.046 23:42:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:03.046 23:42:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:03.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:03.046 23:42:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:03.046 23:42:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:03.046 [2024-09-28 23:42:51.047623] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
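The waitforlisten call traced above blocks until the freshly launched spdk_tgt answers on its UNIX RPC socket. A minimal sketch of the same idea, assuming the repo paths used throughout this log; this is not the harness's exact implementation, and rpc_get_methods is simply a cheap RPC to poll with:

    # start the target on core 0 and wait for its default socket to answer
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --cpumask='[0]' &
    tgt_pid=$!
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$tgt_pid" || exit 1   # give up if the target died during startup
      sleep 0.1
    done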
00:22:03.046 [2024-09-28 23:42:51.047865] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77780 ] 00:22:03.046 [2024-09-28 23:42:51.196555] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:03.305 [2024-09-28 23:42:51.370901] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:22:03.874 23:42:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:03.874 23:42:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:22:03.874 23:42:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:22:03.874 23:42:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:22:03.874 23:42:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:22:03.874 23:42:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:22:03.874 23:42:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:22:03.874 23:42:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:22:03.874 23:42:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:22:03.874 23:42:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:22:03.874 23:42:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:22:03.874 23:42:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:22:03.874 23:42:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:22:03.874 23:42:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:22:03.874 23:42:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:22:03.874 23:42:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:22:03.874 23:42:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:22:03.874 23:42:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:22:03.874 23:42:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:22:03.874 23:42:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:22:03.874 23:42:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:22:03.874 23:42:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:22:03.874 23:42:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:22:04.132 23:42:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:22:04.133 23:42:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:22:04.133 23:42:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:22:04.133 23:42:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=basen1 00:22:04.133 23:42:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:22:04.133 23:42:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:22:04.133 23:42:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 
-- # local nb 00:22:04.133 23:42:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:22:04.392 23:42:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:22:04.392 { 00:22:04.392 "name": "basen1", 00:22:04.392 "aliases": [ 00:22:04.392 "1b9b8593-a60e-4e82-af08-0dbde63240eb" 00:22:04.392 ], 00:22:04.392 "product_name": "NVMe disk", 00:22:04.392 "block_size": 4096, 00:22:04.392 "num_blocks": 1310720, 00:22:04.392 "uuid": "1b9b8593-a60e-4e82-af08-0dbde63240eb", 00:22:04.392 "numa_id": -1, 00:22:04.392 "assigned_rate_limits": { 00:22:04.392 "rw_ios_per_sec": 0, 00:22:04.392 "rw_mbytes_per_sec": 0, 00:22:04.392 "r_mbytes_per_sec": 0, 00:22:04.392 "w_mbytes_per_sec": 0 00:22:04.392 }, 00:22:04.392 "claimed": true, 00:22:04.392 "claim_type": "read_many_write_one", 00:22:04.392 "zoned": false, 00:22:04.392 "supported_io_types": { 00:22:04.392 "read": true, 00:22:04.392 "write": true, 00:22:04.392 "unmap": true, 00:22:04.392 "flush": true, 00:22:04.392 "reset": true, 00:22:04.392 "nvme_admin": true, 00:22:04.392 "nvme_io": true, 00:22:04.392 "nvme_io_md": false, 00:22:04.392 "write_zeroes": true, 00:22:04.392 "zcopy": false, 00:22:04.392 "get_zone_info": false, 00:22:04.392 "zone_management": false, 00:22:04.392 "zone_append": false, 00:22:04.392 "compare": true, 00:22:04.392 "compare_and_write": false, 00:22:04.392 "abort": true, 00:22:04.392 "seek_hole": false, 00:22:04.392 "seek_data": false, 00:22:04.392 "copy": true, 00:22:04.392 "nvme_iov_md": false 00:22:04.392 }, 00:22:04.392 "driver_specific": { 00:22:04.392 "nvme": [ 00:22:04.392 { 00:22:04.392 "pci_address": "0000:00:11.0", 00:22:04.392 "trid": { 00:22:04.392 "trtype": "PCIe", 00:22:04.392 "traddr": "0000:00:11.0" 00:22:04.392 }, 00:22:04.392 "ctrlr_data": { 00:22:04.392 "cntlid": 0, 00:22:04.392 "vendor_id": "0x1b36", 00:22:04.392 "model_number": "QEMU NVMe Ctrl", 00:22:04.392 "serial_number": "12341", 00:22:04.392 "firmware_revision": "8.0.0", 00:22:04.392 "subnqn": "nqn.2019-08.org.qemu:12341", 00:22:04.392 "oacs": { 00:22:04.392 "security": 0, 00:22:04.392 "format": 1, 00:22:04.392 "firmware": 0, 00:22:04.392 "ns_manage": 1 00:22:04.392 }, 00:22:04.392 "multi_ctrlr": false, 00:22:04.392 "ana_reporting": false 00:22:04.392 }, 00:22:04.392 "vs": { 00:22:04.392 "nvme_version": "1.4" 00:22:04.392 }, 00:22:04.392 "ns_data": { 00:22:04.392 "id": 1, 00:22:04.392 "can_share": false 00:22:04.392 } 00:22:04.392 } 00:22:04.392 ], 00:22:04.392 "mp_policy": "active_passive" 00:22:04.392 } 00:22:04.392 } 00:22:04.392 ]' 00:22:04.392 23:42:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:22:04.392 23:42:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:22:04.392 23:42:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:22:04.392 23:42:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # nb=1310720 00:22:04.392 23:42:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:22:04.392 23:42:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # echo 5120 00:22:04.392 23:42:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:22:04.392 23:42:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:22:04.392 23:42:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:22:04.392 23:42:52 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:22:04.392 23:42:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:22:04.651 23:42:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=b036f953-b197-47fd-ba88-1f61e199ac27 00:22:04.651 23:42:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:22:04.651 23:42:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b036f953-b197-47fd-ba88-1f61e199ac27 00:22:04.910 23:42:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:22:05.169 23:42:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=19eda7f8-0e65-448b-b6de-6a815cc66f01 00:22:05.169 23:42:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 19eda7f8-0e65-448b-b6de-6a815cc66f01 00:22:05.169 23:42:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=8ed6974a-6b98-4bfc-b41d-d484c4590d46 00:22:05.169 23:42:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 8ed6974a-6b98-4bfc-b41d-d484c4590d46 ]] 00:22:05.169 23:42:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 8ed6974a-6b98-4bfc-b41d-d484c4590d46 5120 00:22:05.169 23:42:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:22:05.169 23:42:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:22:05.169 23:42:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=8ed6974a-6b98-4bfc-b41d-d484c4590d46 00:22:05.169 23:42:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:22:05.169 23:42:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 8ed6974a-6b98-4bfc-b41d-d484c4590d46 00:22:05.169 23:42:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=8ed6974a-6b98-4bfc-b41d-d484c4590d46 00:22:05.169 23:42:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:22:05.169 23:42:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:22:05.169 23:42:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:22:05.169 23:42:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8ed6974a-6b98-4bfc-b41d-d484c4590d46 00:22:05.428 23:42:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:22:05.428 { 00:22:05.428 "name": "8ed6974a-6b98-4bfc-b41d-d484c4590d46", 00:22:05.428 "aliases": [ 00:22:05.428 "lvs/basen1p0" 00:22:05.428 ], 00:22:05.428 "product_name": "Logical Volume", 00:22:05.428 "block_size": 4096, 00:22:05.428 "num_blocks": 5242880, 00:22:05.428 "uuid": "8ed6974a-6b98-4bfc-b41d-d484c4590d46", 00:22:05.428 "assigned_rate_limits": { 00:22:05.428 "rw_ios_per_sec": 0, 00:22:05.428 "rw_mbytes_per_sec": 0, 00:22:05.428 "r_mbytes_per_sec": 0, 00:22:05.428 "w_mbytes_per_sec": 0 00:22:05.428 }, 00:22:05.428 "claimed": false, 00:22:05.428 "zoned": false, 00:22:05.428 "supported_io_types": { 00:22:05.428 "read": true, 00:22:05.428 "write": true, 00:22:05.428 "unmap": true, 00:22:05.428 "flush": false, 00:22:05.428 "reset": true, 00:22:05.428 "nvme_admin": false, 00:22:05.428 "nvme_io": false, 00:22:05.428 "nvme_io_md": false, 00:22:05.428 "write_zeroes": 
true, 00:22:05.428 "zcopy": false, 00:22:05.428 "get_zone_info": false, 00:22:05.428 "zone_management": false, 00:22:05.428 "zone_append": false, 00:22:05.428 "compare": false, 00:22:05.428 "compare_and_write": false, 00:22:05.428 "abort": false, 00:22:05.428 "seek_hole": true, 00:22:05.428 "seek_data": true, 00:22:05.428 "copy": false, 00:22:05.428 "nvme_iov_md": false 00:22:05.428 }, 00:22:05.428 "driver_specific": { 00:22:05.428 "lvol": { 00:22:05.428 "lvol_store_uuid": "19eda7f8-0e65-448b-b6de-6a815cc66f01", 00:22:05.428 "base_bdev": "basen1", 00:22:05.428 "thin_provision": true, 00:22:05.428 "num_allocated_clusters": 0, 00:22:05.428 "snapshot": false, 00:22:05.428 "clone": false, 00:22:05.428 "esnap_clone": false 00:22:05.428 } 00:22:05.428 } 00:22:05.428 } 00:22:05.428 ]' 00:22:05.428 23:42:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:22:05.428 23:42:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:22:05.428 23:42:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:22:05.428 23:42:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # nb=5242880 00:22:05.428 23:42:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=20480 00:22:05.428 23:42:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # echo 20480 00:22:05.428 23:42:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:22:05.428 23:42:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:22:05.428 23:42:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:22:05.687 23:42:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:22:05.687 23:42:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:22:05.687 23:42:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:22:05.946 23:42:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:22:05.946 23:42:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:22:05.946 23:42:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 8ed6974a-6b98-4bfc-b41d-d484c4590d46 -c cachen1p0 --l2p_dram_limit 2 00:22:06.206 [2024-09-28 23:42:54.162173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:06.206 [2024-09-28 23:42:54.162215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:22:06.206 [2024-09-28 23:42:54.162227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:22:06.206 [2024-09-28 23:42:54.162234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:06.206 [2024-09-28 23:42:54.162274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:06.206 [2024-09-28 23:42:54.162282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:22:06.206 [2024-09-28 23:42:54.162290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 00:22:06.206 [2024-09-28 23:42:54.162296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:06.206 [2024-09-28 23:42:54.162314] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:22:06.206 [2024-09-28 
23:42:54.162893] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:22:06.206 [2024-09-28 23:42:54.162919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:06.206 [2024-09-28 23:42:54.162925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:22:06.206 [2024-09-28 23:42:54.162933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.608 ms 00:22:06.206 [2024-09-28 23:42:54.162941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:06.206 [2024-09-28 23:42:54.162969] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 50b9279f-27fa-4b89-b5a0-681976440219 00:22:06.206 [2024-09-28 23:42:54.163909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:06.206 [2024-09-28 23:42:54.163937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:22:06.206 [2024-09-28 23:42:54.163944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:22:06.206 [2024-09-28 23:42:54.163953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:06.206 [2024-09-28 23:42:54.168615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:06.206 [2024-09-28 23:42:54.168641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:22:06.206 [2024-09-28 23:42:54.168649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.606 ms 00:22:06.206 [2024-09-28 23:42:54.168655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:06.206 [2024-09-28 23:42:54.168684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:06.206 [2024-09-28 23:42:54.168692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:22:06.206 [2024-09-28 23:42:54.168698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:22:06.206 [2024-09-28 23:42:54.168710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:06.206 [2024-09-28 23:42:54.168743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:06.206 [2024-09-28 23:42:54.168752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:22:06.206 [2024-09-28 23:42:54.168757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:22:06.206 [2024-09-28 23:42:54.168764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:06.206 [2024-09-28 23:42:54.168781] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:22:06.206 [2024-09-28 23:42:54.171644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:06.206 [2024-09-28 23:42:54.171667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:22:06.206 [2024-09-28 23:42:54.171676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.866 ms 00:22:06.206 [2024-09-28 23:42:54.171682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:06.206 [2024-09-28 23:42:54.171708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:06.206 [2024-09-28 23:42:54.171715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:22:06.206 [2024-09-28 23:42:54.171722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:22:06.207 [2024-09-28 23:42:54.171729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:22:06.207 [2024-09-28 23:42:54.171743] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:22:06.207 [2024-09-28 23:42:54.171846] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:22:06.207 [2024-09-28 23:42:54.171858] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:22:06.207 [2024-09-28 23:42:54.171866] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:22:06.207 [2024-09-28 23:42:54.171877] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:22:06.207 [2024-09-28 23:42:54.171884] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:22:06.207 [2024-09-28 23:42:54.171891] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:22:06.207 [2024-09-28 23:42:54.171897] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:22:06.207 [2024-09-28 23:42:54.171904] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:22:06.207 [2024-09-28 23:42:54.171909] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:22:06.207 [2024-09-28 23:42:54.171916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:06.207 [2024-09-28 23:42:54.171921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:22:06.207 [2024-09-28 23:42:54.171928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.174 ms 00:22:06.207 [2024-09-28 23:42:54.171934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:06.207 [2024-09-28 23:42:54.171998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:06.207 [2024-09-28 23:42:54.172012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:22:06.207 [2024-09-28 23:42:54.172019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.051 ms 00:22:06.207 [2024-09-28 23:42:54.172024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:06.207 [2024-09-28 23:42:54.172099] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:22:06.207 [2024-09-28 23:42:54.172106] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:22:06.207 [2024-09-28 23:42:54.172114] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:22:06.207 [2024-09-28 23:42:54.172119] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:06.207 [2024-09-28 23:42:54.172126] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:22:06.207 [2024-09-28 23:42:54.172131] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:22:06.207 [2024-09-28 23:42:54.172138] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:22:06.207 [2024-09-28 23:42:54.172143] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:22:06.207 [2024-09-28 23:42:54.172150] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:22:06.207 [2024-09-28 23:42:54.172155] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:06.207 [2024-09-28 23:42:54.172161] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:22:06.207 [2024-09-28 23:42:54.172167] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:22:06.207 [2024-09-28 23:42:54.172173] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:06.207 [2024-09-28 23:42:54.172178] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:22:06.207 [2024-09-28 23:42:54.172185] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:22:06.207 [2024-09-28 23:42:54.172190] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:06.207 [2024-09-28 23:42:54.172198] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:22:06.207 [2024-09-28 23:42:54.172203] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:22:06.207 [2024-09-28 23:42:54.172209] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:06.207 [2024-09-28 23:42:54.172214] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:22:06.207 [2024-09-28 23:42:54.172221] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:22:06.207 [2024-09-28 23:42:54.172227] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:22:06.207 [2024-09-28 23:42:54.172233] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:22:06.207 [2024-09-28 23:42:54.172238] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:22:06.207 [2024-09-28 23:42:54.172244] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:22:06.207 [2024-09-28 23:42:54.172248] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:22:06.207 [2024-09-28 23:42:54.172254] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:22:06.207 [2024-09-28 23:42:54.172259] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:22:06.207 [2024-09-28 23:42:54.172265] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:22:06.207 [2024-09-28 23:42:54.172270] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:22:06.207 [2024-09-28 23:42:54.172276] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:22:06.207 [2024-09-28 23:42:54.172281] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:22:06.207 [2024-09-28 23:42:54.172288] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:22:06.207 [2024-09-28 23:42:54.172293] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:06.207 [2024-09-28 23:42:54.172299] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:22:06.207 [2024-09-28 23:42:54.172304] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:22:06.207 [2024-09-28 23:42:54.172309] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:06.207 [2024-09-28 23:42:54.172314] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:22:06.207 [2024-09-28 23:42:54.172320] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:22:06.207 [2024-09-28 23:42:54.172325] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:06.207 [2024-09-28 23:42:54.172331] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:22:06.207 [2024-09-28 23:42:54.172335] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:22:06.207 [2024-09-28 23:42:54.172341] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:06.207 [2024-09-28 23:42:54.172346] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:22:06.207 [2024-09-28 23:42:54.172353] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:22:06.207 [2024-09-28 23:42:54.172360] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:22:06.207 [2024-09-28 23:42:54.172369] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:06.207 [2024-09-28 23:42:54.172375] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:22:06.207 [2024-09-28 23:42:54.172383] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:22:06.207 [2024-09-28 23:42:54.172388] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:22:06.207 [2024-09-28 23:42:54.172394] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:22:06.207 [2024-09-28 23:42:54.172399] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:22:06.207 [2024-09-28 23:42:54.172405] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:22:06.207 [2024-09-28 23:42:54.172413] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:22:06.207 [2024-09-28 23:42:54.172421] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:06.207 [2024-09-28 23:42:54.172428] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:22:06.207 [2024-09-28 23:42:54.172434] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:22:06.207 [2024-09-28 23:42:54.172439] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:22:06.207 [2024-09-28 23:42:54.172446] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:22:06.207 [2024-09-28 23:42:54.172451] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:22:06.207 [2024-09-28 23:42:54.172458] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:22:06.207 [2024-09-28 23:42:54.172463] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:22:06.207 [2024-09-28 23:42:54.172469] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:22:06.207 [2024-09-28 23:42:54.172474] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:22:06.207 [2024-09-28 23:42:54.172482] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:22:06.207 [2024-09-28 23:42:54.172487] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:22:06.207 [2024-09-28 23:42:54.172493] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:22:06.207 [2024-09-28 23:42:54.172499] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:22:06.207 [2024-09-28 23:42:54.172505] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:22:06.207 [2024-09-28 23:42:54.172527] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:22:06.207 [2024-09-28 23:42:54.172536] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:06.207 [2024-09-28 23:42:54.172543] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:06.207 [2024-09-28 23:42:54.172549] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:22:06.207 [2024-09-28 23:42:54.172555] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:22:06.207 [2024-09-28 23:42:54.172562] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:22:06.207 [2024-09-28 23:42:54.172568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:06.207 [2024-09-28 23:42:54.172575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:22:06.207 [2024-09-28 23:42:54.172582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.523 ms 00:22:06.207 [2024-09-28 23:42:54.172589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:06.207 [2024-09-28 23:42:54.172620] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
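The startup trace above is bringing up a stack that was assembled earlier in this test; recapping the RPCs visible in the trace, with the UUIDs as reported there:

    # base device, lvstore, and a thin-provisioned 20480 MiB lvol on top of it
    rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0    # -> basen1
    rpc.py bdev_lvol_create_lvstore basen1 lvs                            # -> lvstore 19eda7f8-...
    rpc.py bdev_lvol_create basen1p0 20480 -t -u 19eda7f8-0e65-448b-b6de-6a815cc66f01
    # NV cache device, split down to 5120 MiB, then the FTL bdev itself
    rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0   # -> cachen1
    rpc.py bdev_split_create cachen1 -s 5120 1                            # -> cachen1p0
    rpc.py -t 60 bdev_ftl_create -b ftl -d 8ed6974a-6b98-4bfc-b41d-d484c4590d46 -c cachen1p0 --l2p_dram_limit 2

The layout numbers dumped above are self-consistent: the 18432 MiB data_btm region is 4718592 blocks of 4 KiB, and the reported 3774873 L2P entries are almost exactly 80% of that (4718592 x 0.8 = 3774873.6), consistent with roughly 20% of the data region being held back, presumably as overprovisioning; at the reported 4 bytes per entry the L2P table is about 14.4 MiB, which matches the 14.50 MiB l2p region in the NV cache layout.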
00:22:06.207 [2024-09-28 23:42:54.172630] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:22:09.495 [2024-09-28 23:42:57.200481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:09.495 [2024-09-28 23:42:57.200549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:22:09.495 [2024-09-28 23:42:57.200564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3027.851 ms 00:22:09.495 [2024-09-28 23:42:57.200574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:09.495 [2024-09-28 23:42:57.225941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:09.495 [2024-09-28 23:42:57.225987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:22:09.495 [2024-09-28 23:42:57.225998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.175 ms 00:22:09.495 [2024-09-28 23:42:57.226008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:09.495 [2024-09-28 23:42:57.226076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:09.495 [2024-09-28 23:42:57.226088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:22:09.495 [2024-09-28 23:42:57.226097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:22:09.495 [2024-09-28 23:42:57.226111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:09.495 [2024-09-28 23:42:57.266267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:09.495 [2024-09-28 23:42:57.266483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:22:09.495 [2024-09-28 23:42:57.266532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 40.108 ms 00:22:09.495 [2024-09-28 23:42:57.266551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:09.495 [2024-09-28 23:42:57.266601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:09.495 [2024-09-28 23:42:57.266617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:22:09.495 [2024-09-28 23:42:57.266630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:22:09.495 [2024-09-28 23:42:57.266643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:09.495 [2024-09-28 23:42:57.267074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:09.495 [2024-09-28 23:42:57.267101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:22:09.495 [2024-09-28 23:42:57.267122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.359 ms 00:22:09.495 [2024-09-28 23:42:57.267138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:09.495 [2024-09-28 23:42:57.267192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:09.495 [2024-09-28 23:42:57.267207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:22:09.495 [2024-09-28 23:42:57.267218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.026 ms 00:22:09.495 [2024-09-28 23:42:57.267234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:09.495 [2024-09-28 23:42:57.282936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:09.495 [2024-09-28 23:42:57.282970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:22:09.495 [2024-09-28 23:42:57.282980] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.677 ms 00:22:09.495 [2024-09-28 23:42:57.282988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:09.495 [2024-09-28 23:42:57.294229] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:22:09.495 [2024-09-28 23:42:57.295094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:09.495 [2024-09-28 23:42:57.295123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:22:09.495 [2024-09-28 23:42:57.295137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.033 ms 00:22:09.495 [2024-09-28 23:42:57.295144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:09.495 [2024-09-28 23:42:57.318286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:09.495 [2024-09-28 23:42:57.318320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:22:09.495 [2024-09-28 23:42:57.318335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.117 ms 00:22:09.495 [2024-09-28 23:42:57.318342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:09.495 [2024-09-28 23:42:57.318424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:09.495 [2024-09-28 23:42:57.318434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:22:09.495 [2024-09-28 23:42:57.318446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.044 ms 00:22:09.495 [2024-09-28 23:42:57.318453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:09.495 [2024-09-28 23:42:57.341448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:09.495 [2024-09-28 23:42:57.341590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:22:09.495 [2024-09-28 23:42:57.341612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 22.949 ms 00:22:09.495 [2024-09-28 23:42:57.341621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:09.495 [2024-09-28 23:42:57.365457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:09.495 [2024-09-28 23:42:57.365606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:22:09.495 [2024-09-28 23:42:57.365628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.574 ms 00:22:09.495 [2024-09-28 23:42:57.365636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:09.495 [2024-09-28 23:42:57.366186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:09.495 [2024-09-28 23:42:57.366204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:22:09.495 [2024-09-28 23:42:57.366214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.515 ms 00:22:09.495 [2024-09-28 23:42:57.366221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:09.495 [2024-09-28 23:42:57.439030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:09.495 [2024-09-28 23:42:57.439066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:22:09.495 [2024-09-28 23:42:57.439081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 72.772 ms 00:22:09.495 [2024-09-28 23:42:57.439090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:09.495 [2024-09-28 23:42:57.463876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:22:09.495 [2024-09-28 23:42:57.463906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:22:09.495 [2024-09-28 23:42:57.463919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.702 ms 00:22:09.495 [2024-09-28 23:42:57.463926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:09.495 [2024-09-28 23:42:57.487465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:09.495 [2024-09-28 23:42:57.487496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:22:09.495 [2024-09-28 23:42:57.487521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.500 ms 00:22:09.495 [2024-09-28 23:42:57.487529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:09.495 [2024-09-28 23:42:57.511466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:09.495 [2024-09-28 23:42:57.511498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:22:09.495 [2024-09-28 23:42:57.511523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.900 ms 00:22:09.495 [2024-09-28 23:42:57.511531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:09.495 [2024-09-28 23:42:57.511570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:09.495 [2024-09-28 23:42:57.511580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:22:09.495 [2024-09-28 23:42:57.511594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:22:09.495 [2024-09-28 23:42:57.511602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:09.495 [2024-09-28 23:42:57.511677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:09.495 [2024-09-28 23:42:57.511690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:22:09.496 [2024-09-28 23:42:57.511699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms 00:22:09.496 [2024-09-28 23:42:57.511706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:09.496 [2024-09-28 23:42:57.512548] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3349.926 ms, result 0 00:22:09.496 { 00:22:09.496 "name": "ftl", 00:22:09.496 "uuid": "50b9279f-27fa-4b89-b5a0-681976440219" 00:22:09.496 } 00:22:09.496 23:42:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:22:09.754 [2024-09-28 23:42:57.723965] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:09.754 23:42:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:22:10.013 23:42:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:22:10.013 [2024-09-28 23:42:58.140374] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:22:10.013 23:42:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:22:10.272 [2024-09-28 23:42:58.353367] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:22:10.272 23:42:58 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:22:10.531 Fill FTL, iteration 1 00:22:10.531 23:42:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:22:10.531 23:42:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:22:10.531 23:42:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:22:10.531 23:42:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:22:10.531 23:42:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:22:10.531 23:42:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:22:10.531 23:42:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:22:10.531 23:42:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:22:10.531 23:42:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:22:10.531 23:42:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:22:10.531 23:42:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:22:10.531 23:42:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:22:10.531 23:42:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:22:10.531 23:42:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:22:10.531 23:42:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:22:10.790 23:42:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:22:10.790 23:42:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=77892 00:22:10.790 23:42:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:22:10.790 23:42:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:22:10.790 23:42:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 77892 /var/tmp/spdk.tgt.sock 00:22:10.790 23:42:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 77892 ']' 00:22:10.790 23:42:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:22:10.790 23:42:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:10.790 23:42:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:22:10.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:22:10.790 23:42:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:10.790 23:42:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:10.790 [2024-09-28 23:42:58.769546] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
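Two SPDK applications are now in play on one host: the target that owns the FTL bdev, and the second app that tcp_dd just spun up on the initiator side (pid 77892). They stay out of each other's way through distinct core masks and RPC sockets, roughly as follows, with paths as used throughout this log:

    # target: core 0, default socket /var/tmp/spdk.sock
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --cpumask='[0]' &
    # initiator side: core 1, its own socket so RPCs don't collide
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --cpumask='[1]' --rpc-socket=/var/tmp/spdk.tgt.sock &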
00:22:10.790 [2024-09-28 23:42:58.769862] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77892 ] 00:22:10.790 [2024-09-28 23:42:58.920167] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:11.049 [2024-09-28 23:42:59.102993] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:22:11.615 23:42:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:11.615 23:42:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:22:11.615 23:42:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:22:11.874 ftln1 00:22:11.874 23:42:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:22:11.874 23:42:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:22:12.133 23:43:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:22:12.133 23:43:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 77892 00:22:12.133 23:43:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 77892 ']' 00:22:12.133 23:43:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 77892 00:22:12.133 23:43:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:22:12.133 23:43:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:12.133 23:43:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77892 00:22:12.133 killing process with pid 77892 00:22:12.133 23:43:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:12.133 23:43:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:12.133 23:43:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77892' 00:22:12.133 23:43:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@969 -- # kill 77892 00:22:12.133 23:43:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 77892 00:22:14.037 23:43:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:22:14.037 23:43:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:22:14.037 [2024-09-28 23:43:01.760372] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
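What tcp_dd traced above amounts to: on the initiator app, attach the exported namespace over NVMe/TCP, snapshot the resulting bdev configuration as JSON, kill the app, then hand that JSON straight to spdk_dd so it can recreate ftln1 for itself. A sketch of the sequence using the commands from the trace; the redirection into ini.json does not itself appear in the xtrace output:

    rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock'
    $rpc bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0   # -> ftln1
    { echo '{"subsystems": ['; $rpc save_subsystem_config -n bdev; echo ']}'; } > ini.json
    kill "$spdk_ini_pid"   # the initiator app is only needed to generate the config
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --cpumask='[1]' --rpc-socket=/var/tmp/spdk.tgt.sock \
        --json=ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0   # fill: 1024 x 1 MiB writes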
00:22:14.037 [2024-09-28 23:43:01.760486] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77941 ] 00:22:14.037 [2024-09-28 23:43:01.910557] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:14.037 [2024-09-28 23:43:02.082707] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:22:19.228  Copying: 243/1024 [MB] (243 MBps) Copying: 494/1024 [MB] (251 MBps) Copying: 743/1024 [MB] (249 MBps) Copying: 993/1024 [MB] (250 MBps) Copying: 1024/1024 [MB] (average 247 MBps) 00:22:19.228 00:22:19.228 Calculate MD5 checksum, iteration 1 00:22:19.228 23:43:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:22:19.228 23:43:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:22:19.228 23:43:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:22:19.228 23:43:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:22:19.228 23:43:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:22:19.228 23:43:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:22:19.228 23:43:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:22:19.228 23:43:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:22:19.228 [2024-09-28 23:43:07.259333] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
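The read-back mirrors the fill: --ib names the input bdev where the fill used --ob for the output bdev, and --skip replaces --seek. Checksumming the first 1 GiB of the FTL bdev therefore reduces to the following, reusing the same ini.json sketched above:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --cpumask='[1]' --rpc-socket=/var/tmp/spdk.tgt.sock \
        --json=ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0
    md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file | cut -d' ' -f1   # -> the f90f... sum recorded below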
00:22:19.228 [2024-09-28 23:43:07.259423] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78007 ] 00:22:19.487 [2024-09-28 23:43:07.395647] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:19.487 [2024-09-28 23:43:07.535174] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:22:22.056  Copying: 684/1024 [MB] (684 MBps) Copying: 1024/1024 [MB] (average 677 MBps) 00:22:22.056 00:22:22.056 23:43:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:22:22.056 23:43:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:22:23.954 23:43:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:22:23.954 23:43:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=f90f235d9159d7e34022074dad49f90d 00:22:23.954 Fill FTL, iteration 2 00:22:23.954 23:43:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:22:23.954 23:43:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:22:23.954 23:43:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:22:23.954 23:43:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:22:23.954 23:43:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:22:23.954 23:43:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:22:23.954 23:43:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:22:23.954 23:43:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:22:23.954 23:43:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:22:23.954 [2024-09-28 23:43:11.825238] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
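The seek=1024/skip=1024 assignments and the (( i++ )) / (( i < iterations )) steps traced at upgrade_shutdown.sh@38-48 suggest the shape of the loop driving the two iterations seen in this log. A hedged reconstruction, not the script verbatim: tcp_dd is the helper the trace shows wrapping spdk_dd, sums[] is the checksum array visible at @48, and the file variable stands in for the literal path used in this run:

    iterations=2
    seek=0
    skip=0
    file=/home/vagrant/spdk_repo/spdk/test/ftl/file

    for (( i = 0; i < iterations; i++ )); do
        echo "Fill FTL, iteration $(( i + 1 ))"
        tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek="$seek"
        (( seek += 1024 ))

        echo "Calculate MD5 checksum, iteration $(( i + 1 ))"
        tcp_dd --ib=ftln1 --of="$file" --bs=1048576 --count=1024 --qd=2 --skip="$skip"
        (( skip += 1024 ))
        # Record the per-window checksum for comparison after the shutdown/upgrade cycle.
        sums[i]=$(md5sum "$file" | cut -f1 -d' ')
    done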
00:22:23.954 [2024-09-28 23:43:11.825900] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78057 ] 00:22:23.954 [2024-09-28 23:43:11.972769] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:24.212 [2024-09-28 23:43:12.144161] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:22:29.578  Copying: 238/1024 [MB] (238 MBps) Copying: 470/1024 [MB] (232 MBps) Copying: 705/1024 [MB] (235 MBps) Copying: 953/1024 [MB] (248 MBps) Copying: 1024/1024 [MB] (average 236 MBps) 00:22:29.578 00:22:29.578 Calculate MD5 checksum, iteration 2 00:22:29.578 23:43:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:22:29.578 23:43:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:22:29.578 23:43:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:22:29.578 23:43:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:22:29.578 23:43:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:22:29.578 23:43:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:22:29.578 23:43:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:22:29.578 23:43:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:22:29.578 [2024-09-28 23:43:17.565381] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:22:29.578 [2024-09-28 23:43:17.565953] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78121 ] 00:22:29.578 [2024-09-28 23:43:17.714400] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:29.836 [2024-09-28 23:43:17.914171] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:22:33.088  Copying: 632/1024 [MB] (632 MBps) Copying: 1024/1024 [MB] (average 637 MBps) 00:22:33.088 00:22:33.088 23:43:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:22:33.088 23:43:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:22:35.631 23:43:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:22:35.631 23:43:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=12d69bb757e3e96df2c10e9526d370a8 00:22:35.631 23:43:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:22:35.631 23:43:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:22:35.631 23:43:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:22:35.631 [2024-09-28 23:43:23.370959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:35.631 [2024-09-28 23:43:23.371003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:22:35.631 [2024-09-28 23:43:23.371014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:22:35.631 [2024-09-28 23:43:23.371024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:35.631 [2024-09-28 23:43:23.371043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:35.631 [2024-09-28 23:43:23.371049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:22:35.631 [2024-09-28 23:43:23.371056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:22:35.631 [2024-09-28 23:43:23.371061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:35.631 [2024-09-28 23:43:23.371077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:35.631 [2024-09-28 23:43:23.371084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:22:35.631 [2024-09-28 23:43:23.371091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:22:35.631 [2024-09-28 23:43:23.371096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:35.631 [2024-09-28 23:43:23.371145] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.180 ms, result 0 00:22:35.631 true 00:22:35.631 23:43:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:22:35.631 { 00:22:35.631 "name": "ftl", 00:22:35.631 "properties": [ 00:22:35.631 { 00:22:35.631 "name": "superblock_version", 00:22:35.631 "value": 5, 00:22:35.631 "read-only": true 00:22:35.631 }, 00:22:35.631 { 00:22:35.631 "name": "base_device", 00:22:35.631 "bands": [ 00:22:35.631 { 00:22:35.631 "id": 0, 00:22:35.631 "state": "FREE", 00:22:35.631 "validity": 0.0 00:22:35.631 }, 00:22:35.631 { 00:22:35.631 "id": 1, 
00:22:35.631 "state": "FREE", 00:22:35.631 "validity": 0.0 00:22:35.631 }, 00:22:35.631 { 00:22:35.631 "id": 2, 00:22:35.631 "state": "FREE", 00:22:35.631 "validity": 0.0 00:22:35.631 }, 00:22:35.631 { 00:22:35.631 "id": 3, 00:22:35.631 "state": "FREE", 00:22:35.631 "validity": 0.0 00:22:35.631 }, 00:22:35.631 { 00:22:35.631 "id": 4, 00:22:35.631 "state": "FREE", 00:22:35.631 "validity": 0.0 00:22:35.631 }, 00:22:35.631 { 00:22:35.631 "id": 5, 00:22:35.631 "state": "FREE", 00:22:35.631 "validity": 0.0 00:22:35.631 }, 00:22:35.631 { 00:22:35.631 "id": 6, 00:22:35.631 "state": "FREE", 00:22:35.631 "validity": 0.0 00:22:35.631 }, 00:22:35.631 { 00:22:35.631 "id": 7, 00:22:35.631 "state": "FREE", 00:22:35.631 "validity": 0.0 00:22:35.631 }, 00:22:35.631 { 00:22:35.631 "id": 8, 00:22:35.631 "state": "FREE", 00:22:35.631 "validity": 0.0 00:22:35.631 }, 00:22:35.631 { 00:22:35.631 "id": 9, 00:22:35.631 "state": "FREE", 00:22:35.631 "validity": 0.0 00:22:35.631 }, 00:22:35.631 { 00:22:35.631 "id": 10, 00:22:35.631 "state": "FREE", 00:22:35.631 "validity": 0.0 00:22:35.631 }, 00:22:35.631 { 00:22:35.631 "id": 11, 00:22:35.631 "state": "FREE", 00:22:35.631 "validity": 0.0 00:22:35.631 }, 00:22:35.631 { 00:22:35.631 "id": 12, 00:22:35.631 "state": "FREE", 00:22:35.631 "validity": 0.0 00:22:35.631 }, 00:22:35.631 { 00:22:35.631 "id": 13, 00:22:35.631 "state": "FREE", 00:22:35.631 "validity": 0.0 00:22:35.631 }, 00:22:35.631 { 00:22:35.631 "id": 14, 00:22:35.631 "state": "FREE", 00:22:35.631 "validity": 0.0 00:22:35.631 }, 00:22:35.631 { 00:22:35.631 "id": 15, 00:22:35.631 "state": "FREE", 00:22:35.631 "validity": 0.0 00:22:35.631 }, 00:22:35.631 { 00:22:35.631 "id": 16, 00:22:35.631 "state": "FREE", 00:22:35.631 "validity": 0.0 00:22:35.631 }, 00:22:35.631 { 00:22:35.631 "id": 17, 00:22:35.631 "state": "FREE", 00:22:35.631 "validity": 0.0 00:22:35.631 } 00:22:35.631 ], 00:22:35.631 "read-only": true 00:22:35.631 }, 00:22:35.631 { 00:22:35.631 "name": "cache_device", 00:22:35.631 "type": "bdev", 00:22:35.631 "chunks": [ 00:22:35.631 { 00:22:35.631 "id": 0, 00:22:35.631 "state": "INACTIVE", 00:22:35.631 "utilization": 0.0 00:22:35.631 }, 00:22:35.631 { 00:22:35.631 "id": 1, 00:22:35.631 "state": "CLOSED", 00:22:35.631 "utilization": 1.0 00:22:35.631 }, 00:22:35.631 { 00:22:35.631 "id": 2, 00:22:35.631 "state": "CLOSED", 00:22:35.631 "utilization": 1.0 00:22:35.631 }, 00:22:35.631 { 00:22:35.631 "id": 3, 00:22:35.631 "state": "OPEN", 00:22:35.631 "utilization": 0.001953125 00:22:35.631 }, 00:22:35.631 { 00:22:35.631 "id": 4, 00:22:35.631 "state": "OPEN", 00:22:35.631 "utilization": 0.0 00:22:35.631 } 00:22:35.631 ], 00:22:35.631 "read-only": true 00:22:35.631 }, 00:22:35.631 { 00:22:35.631 "name": "verbose_mode", 00:22:35.631 "value": true, 00:22:35.631 "unit": "", 00:22:35.631 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:22:35.631 }, 00:22:35.631 { 00:22:35.631 "name": "prep_upgrade_on_shutdown", 00:22:35.631 "value": false, 00:22:35.631 "unit": "", 00:22:35.631 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:22:35.631 } 00:22:35.631 ] 00:22:35.631 } 00:22:35.631 23:43:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:22:35.631 [2024-09-28 23:43:23.781999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:35.631 [2024-09-28 23:43:23.782036] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:22:35.631 [2024-09-28 23:43:23.782045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:22:35.631 [2024-09-28 23:43:23.782051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:35.631 [2024-09-28 23:43:23.782068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:35.631 [2024-09-28 23:43:23.782075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:22:35.631 [2024-09-28 23:43:23.782081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:22:35.631 [2024-09-28 23:43:23.782086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:35.631 [2024-09-28 23:43:23.782101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:35.631 [2024-09-28 23:43:23.782107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:22:35.631 [2024-09-28 23:43:23.782113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:22:35.631 [2024-09-28 23:43:23.782118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:35.631 [2024-09-28 23:43:23.782160] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.155 ms, result 0 00:22:35.631 true 00:22:35.893 23:43:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:22:35.893 23:43:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:22:35.893 23:43:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:22:35.893 23:43:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:22:35.893 23:43:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:22:35.893 23:43:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:22:36.155 [2024-09-28 23:43:24.190323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:36.155 [2024-09-28 23:43:24.190358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:22:36.155 [2024-09-28 23:43:24.190366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:22:36.155 [2024-09-28 23:43:24.190371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:36.155 [2024-09-28 23:43:24.190388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:36.155 [2024-09-28 23:43:24.190394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:22:36.155 [2024-09-28 23:43:24.190400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:22:36.155 [2024-09-28 23:43:24.190406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:36.155 [2024-09-28 23:43:24.190420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:36.155 [2024-09-28 23:43:24.190426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:22:36.155 [2024-09-28 23:43:24.190431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:22:36.155 [2024-09-28 23:43:24.190436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:36.155 [2024-09-28 23:43:24.190479] 
mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.147 ms, result 0 00:22:36.155 true 00:22:36.155 23:43:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:22:36.416 { 00:22:36.416 "name": "ftl", 00:22:36.416 "properties": [ 00:22:36.416 { 00:22:36.416 "name": "superblock_version", 00:22:36.416 "value": 5, 00:22:36.416 "read-only": true 00:22:36.416 }, 00:22:36.416 { 00:22:36.416 "name": "base_device", 00:22:36.416 "bands": [ 00:22:36.416 { 00:22:36.416 "id": 0, 00:22:36.416 "state": "FREE", 00:22:36.416 "validity": 0.0 00:22:36.416 }, 00:22:36.416 { 00:22:36.416 "id": 1, 00:22:36.416 "state": "FREE", 00:22:36.416 "validity": 0.0 00:22:36.416 }, 00:22:36.416 { 00:22:36.416 "id": 2, 00:22:36.416 "state": "FREE", 00:22:36.416 "validity": 0.0 00:22:36.416 }, 00:22:36.416 { 00:22:36.416 "id": 3, 00:22:36.416 "state": "FREE", 00:22:36.416 "validity": 0.0 00:22:36.416 }, 00:22:36.416 { 00:22:36.416 "id": 4, 00:22:36.416 "state": "FREE", 00:22:36.416 "validity": 0.0 00:22:36.416 }, 00:22:36.416 { 00:22:36.416 "id": 5, 00:22:36.416 "state": "FREE", 00:22:36.416 "validity": 0.0 00:22:36.416 }, 00:22:36.416 { 00:22:36.416 "id": 6, 00:22:36.416 "state": "FREE", 00:22:36.416 "validity": 0.0 00:22:36.416 }, 00:22:36.416 { 00:22:36.416 "id": 7, 00:22:36.416 "state": "FREE", 00:22:36.416 "validity": 0.0 00:22:36.416 }, 00:22:36.416 { 00:22:36.416 "id": 8, 00:22:36.416 "state": "FREE", 00:22:36.416 "validity": 0.0 00:22:36.416 }, 00:22:36.416 { 00:22:36.416 "id": 9, 00:22:36.416 "state": "FREE", 00:22:36.416 "validity": 0.0 00:22:36.416 }, 00:22:36.416 { 00:22:36.416 "id": 10, 00:22:36.416 "state": "FREE", 00:22:36.416 "validity": 0.0 00:22:36.416 }, 00:22:36.416 { 00:22:36.416 "id": 11, 00:22:36.416 "state": "FREE", 00:22:36.416 "validity": 0.0 00:22:36.416 }, 00:22:36.416 { 00:22:36.416 "id": 12, 00:22:36.416 "state": "FREE", 00:22:36.416 "validity": 0.0 00:22:36.416 }, 00:22:36.416 { 00:22:36.416 "id": 13, 00:22:36.416 "state": "FREE", 00:22:36.416 "validity": 0.0 00:22:36.416 }, 00:22:36.416 { 00:22:36.416 "id": 14, 00:22:36.416 "state": "FREE", 00:22:36.416 "validity": 0.0 00:22:36.416 }, 00:22:36.416 { 00:22:36.416 "id": 15, 00:22:36.416 "state": "FREE", 00:22:36.416 "validity": 0.0 00:22:36.416 }, 00:22:36.416 { 00:22:36.416 "id": 16, 00:22:36.416 "state": "FREE", 00:22:36.416 "validity": 0.0 00:22:36.416 }, 00:22:36.416 { 00:22:36.416 "id": 17, 00:22:36.416 "state": "FREE", 00:22:36.416 "validity": 0.0 00:22:36.416 } 00:22:36.416 ], 00:22:36.416 "read-only": true 00:22:36.416 }, 00:22:36.416 { 00:22:36.416 "name": "cache_device", 00:22:36.416 "type": "bdev", 00:22:36.416 "chunks": [ 00:22:36.416 { 00:22:36.416 "id": 0, 00:22:36.416 "state": "INACTIVE", 00:22:36.416 "utilization": 0.0 00:22:36.416 }, 00:22:36.416 { 00:22:36.416 "id": 1, 00:22:36.416 "state": "CLOSED", 00:22:36.416 "utilization": 1.0 00:22:36.416 }, 00:22:36.416 { 00:22:36.416 "id": 2, 00:22:36.416 "state": "CLOSED", 00:22:36.416 "utilization": 1.0 00:22:36.416 }, 00:22:36.416 { 00:22:36.416 "id": 3, 00:22:36.416 "state": "OPEN", 00:22:36.416 "utilization": 0.001953125 00:22:36.416 }, 00:22:36.416 { 00:22:36.416 "id": 4, 00:22:36.416 "state": "OPEN", 00:22:36.416 "utilization": 0.0 00:22:36.416 } 00:22:36.416 ], 00:22:36.416 "read-only": true 00:22:36.416 }, 00:22:36.416 { 00:22:36.416 "name": "verbose_mode", 00:22:36.416 "value": true, 00:22:36.416 "unit": "", 00:22:36.416 
"desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:22:36.416 }, 00:22:36.416 { 00:22:36.416 "name": "prep_upgrade_on_shutdown", 00:22:36.416 "value": true, 00:22:36.416 "unit": "", 00:22:36.416 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:22:36.416 } 00:22:36.416 ] 00:22:36.416 } 00:22:36.416 23:43:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:22:36.416 23:43:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 77780 ]] 00:22:36.416 23:43:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 77780 00:22:36.416 23:43:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 77780 ']' 00:22:36.416 23:43:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 77780 00:22:36.416 23:43:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:22:36.416 23:43:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:36.416 23:43:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77780 00:22:36.416 killing process with pid 77780 00:22:36.416 23:43:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:36.416 23:43:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:36.416 23:43:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77780' 00:22:36.416 23:43:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@969 -- # kill 77780 00:22:36.416 23:43:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 77780 00:22:36.988 [2024-09-28 23:43:24.948334] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:22:36.988 [2024-09-28 23:43:24.960805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:36.988 [2024-09-28 23:43:24.960837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:22:36.988 [2024-09-28 23:43:24.960847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:22:36.988 [2024-09-28 23:43:24.960854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:36.988 [2024-09-28 23:43:24.960869] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:22:36.988 [2024-09-28 23:43:24.962938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:36.988 [2024-09-28 23:43:24.962968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:22:36.988 [2024-09-28 23:43:24.962976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.058 ms 00:22:36.988 [2024-09-28 23:43:24.962982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:45.134 [2024-09-28 23:43:33.010346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:45.134 [2024-09-28 23:43:33.010401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:22:45.134 [2024-09-28 23:43:33.010413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8047.320 ms 00:22:45.134 [2024-09-28 23:43:33.010420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:45.134 [2024-09-28 23:43:33.011366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:45.134 [2024-09-28 23:43:33.011384] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:22:45.134 [2024-09-28 23:43:33.011392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.934 ms 00:22:45.134 [2024-09-28 23:43:33.011399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:45.134 [2024-09-28 23:43:33.012272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:45.134 [2024-09-28 23:43:33.012286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:22:45.134 [2024-09-28 23:43:33.012294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.849 ms 00:22:45.134 [2024-09-28 23:43:33.012301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:45.134 [2024-09-28 23:43:33.019664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:45.134 [2024-09-28 23:43:33.019694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:22:45.134 [2024-09-28 23:43:33.019701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.328 ms 00:22:45.134 [2024-09-28 23:43:33.019707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:45.134 [2024-09-28 23:43:33.024283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:45.134 [2024-09-28 23:43:33.024312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:22:45.134 [2024-09-28 23:43:33.024321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.551 ms 00:22:45.134 [2024-09-28 23:43:33.024332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:45.134 [2024-09-28 23:43:33.024387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:45.134 [2024-09-28 23:43:33.024395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:22:45.134 [2024-09-28 23:43:33.024402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:22:45.134 [2024-09-28 23:43:33.024414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:45.134 [2024-09-28 23:43:33.031529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:45.134 [2024-09-28 23:43:33.031556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:22:45.134 [2024-09-28 23:43:33.031563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.104 ms 00:22:45.134 [2024-09-28 23:43:33.031569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:45.134 [2024-09-28 23:43:33.038884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:45.134 [2024-09-28 23:43:33.038910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:22:45.134 [2024-09-28 23:43:33.038917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.290 ms 00:22:45.134 [2024-09-28 23:43:33.038922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:45.134 [2024-09-28 23:43:33.046060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:45.134 [2024-09-28 23:43:33.046178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:22:45.134 [2024-09-28 23:43:33.046190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.112 ms 00:22:45.134 [2024-09-28 23:43:33.046196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:45.134 [2024-09-28 23:43:33.053298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:45.134 
[2024-09-28 23:43:33.053401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:22:45.134 [2024-09-28 23:43:33.053413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.050 ms 00:22:45.134 [2024-09-28 23:43:33.053418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:45.134 [2024-09-28 23:43:33.053441] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:22:45.134 [2024-09-28 23:43:33.053451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:22:45.134 [2024-09-28 23:43:33.053458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:22:45.134 [2024-09-28 23:43:33.053464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:22:45.134 [2024-09-28 23:43:33.053471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:45.134 [2024-09-28 23:43:33.053477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:45.134 [2024-09-28 23:43:33.053482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:45.134 [2024-09-28 23:43:33.053494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:45.134 [2024-09-28 23:43:33.053500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:45.134 [2024-09-28 23:43:33.053505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:45.134 [2024-09-28 23:43:33.053531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:45.134 [2024-09-28 23:43:33.053538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:45.134 [2024-09-28 23:43:33.053544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:45.134 [2024-09-28 23:43:33.053550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:45.134 [2024-09-28 23:43:33.053556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:45.134 [2024-09-28 23:43:33.053561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:45.134 [2024-09-28 23:43:33.053567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:45.134 [2024-09-28 23:43:33.053573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:45.134 [2024-09-28 23:43:33.053579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:45.134 [2024-09-28 23:43:33.053587] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:22:45.134 [2024-09-28 23:43:33.053592] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 50b9279f-27fa-4b89-b5a0-681976440219 00:22:45.134 [2024-09-28 23:43:33.053598] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:22:45.134 [2024-09-28 23:43:33.053605] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 786752 00:22:45.134 [2024-09-28 23:43:33.053611] 
ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:22:45.134 [2024-09-28 23:43:33.053619] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:22:45.134 [2024-09-28 23:43:33.053625] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:22:45.134 [2024-09-28 23:43:33.053630] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:22:45.134 [2024-09-28 23:43:33.053635] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:22:45.134 [2024-09-28 23:43:33.053640] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:22:45.134 [2024-09-28 23:43:33.053645] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:22:45.134 [2024-09-28 23:43:33.053651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:45.134 [2024-09-28 23:43:33.053657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:22:45.134 [2024-09-28 23:43:33.053664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.211 ms 00:22:45.134 [2024-09-28 23:43:33.053669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:45.134 [2024-09-28 23:43:33.063270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:45.134 [2024-09-28 23:43:33.063296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:22:45.134 [2024-09-28 23:43:33.063304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.587 ms 00:22:45.134 [2024-09-28 23:43:33.063309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:45.134 [2024-09-28 23:43:33.063600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:45.134 [2024-09-28 23:43:33.063612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:22:45.134 [2024-09-28 23:43:33.063619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.276 ms 00:22:45.134 [2024-09-28 23:43:33.063630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:45.134 [2024-09-28 23:43:33.092726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:22:45.134 [2024-09-28 23:43:33.092753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:22:45.134 [2024-09-28 23:43:33.092761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:22:45.134 [2024-09-28 23:43:33.092767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:45.134 [2024-09-28 23:43:33.092787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:22:45.134 [2024-09-28 23:43:33.092793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:22:45.134 [2024-09-28 23:43:33.092799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:22:45.134 [2024-09-28 23:43:33.092805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:45.134 [2024-09-28 23:43:33.092853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:22:45.134 [2024-09-28 23:43:33.092860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:22:45.134 [2024-09-28 23:43:33.092866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:22:45.134 [2024-09-28 23:43:33.092872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:45.134 [2024-09-28 23:43:33.092885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:22:45.134 [2024-09-28 
23:43:33.092892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:22:45.134 [2024-09-28 23:43:33.092897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:22:45.134 [2024-09-28 23:43:33.092903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:45.134 [2024-09-28 23:43:33.153595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:22:45.134 [2024-09-28 23:43:33.153630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:22:45.134 [2024-09-28 23:43:33.153639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:22:45.135 [2024-09-28 23:43:33.153645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:45.135 [2024-09-28 23:43:33.202922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:22:45.135 [2024-09-28 23:43:33.203074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:22:45.135 [2024-09-28 23:43:33.203086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:22:45.135 [2024-09-28 23:43:33.203093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:45.135 [2024-09-28 23:43:33.203158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:22:45.135 [2024-09-28 23:43:33.203170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:22:45.135 [2024-09-28 23:43:33.203176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:22:45.135 [2024-09-28 23:43:33.203182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:45.135 [2024-09-28 23:43:33.203214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:22:45.135 [2024-09-28 23:43:33.203221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:22:45.135 [2024-09-28 23:43:33.203227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:22:45.135 [2024-09-28 23:43:33.203233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:45.135 [2024-09-28 23:43:33.203300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:22:45.135 [2024-09-28 23:43:33.203310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:22:45.135 [2024-09-28 23:43:33.203316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:22:45.135 [2024-09-28 23:43:33.203322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:45.135 [2024-09-28 23:43:33.203343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:22:45.135 [2024-09-28 23:43:33.203351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:22:45.135 [2024-09-28 23:43:33.203357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:22:45.135 [2024-09-28 23:43:33.203362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:45.135 [2024-09-28 23:43:33.203394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:22:45.135 [2024-09-28 23:43:33.203402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:22:45.135 [2024-09-28 23:43:33.203409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:22:45.135 [2024-09-28 23:43:33.203415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:45.135 [2024-09-28 23:43:33.203448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl] Rollback 00:22:45.135 [2024-09-28 23:43:33.203455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:22:45.135 [2024-09-28 23:43:33.203461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:22:45.135 [2024-09-28 23:43:33.203467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:45.135 [2024-09-28 23:43:33.203570] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 8242.726 ms, result 0 00:22:49.316 23:43:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:22:49.316 23:43:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:22:49.316 23:43:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:22:49.316 23:43:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:22:49.316 23:43:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:22:49.316 23:43:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=78310 00:22:49.316 23:43:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:22:49.316 23:43:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 78310 00:22:49.316 23:43:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 78310 ']' 00:22:49.316 23:43:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:49.316 23:43:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:49.316 23:43:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:49.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:49.316 23:43:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:49.316 23:43:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:49.316 23:43:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:49.316 [2024-09-28 23:43:37.354983] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
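After the FTL shutdown completes, tcp_target_setup restarts spdk_tgt from the JSON config the previous instance saved (common.sh@84-91), then blocks until the RPC socket answers. A sketch of that restart sequence under the same paths; the polling loop is a plain-bash stand-in for the waitforlisten helper the trace shows, and rpc_get_methods is used here only as a cheap liveness probe:

    tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    cfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    "$tgt_bin" '--cpumask=[0]' --config="$cfg" &
    spdk_tgt_pid=$!

    # Wait for the target to listen on the default UNIX domain socket.
    until "$rpc" -s /var/tmp/spdk.sock rpc_get_methods > /dev/null 2>&1; do
        kill -0 "$spdk_tgt_pid" || exit 1   # target exited before listening
        sleep 0.5
    done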
00:22:49.317 [2024-09-28 23:43:37.355106] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78310 ] 00:22:49.575 [2024-09-28 23:43:37.503469] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:49.575 [2024-09-28 23:43:37.646188] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:22:50.142 [2024-09-28 23:43:38.219362] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:22:50.142 [2024-09-28 23:43:38.219414] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:22:50.401 [2024-09-28 23:43:38.362271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:50.401 [2024-09-28 23:43:38.362309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:22:50.401 [2024-09-28 23:43:38.362319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:22:50.401 [2024-09-28 23:43:38.362325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:50.401 [2024-09-28 23:43:38.362360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:50.401 [2024-09-28 23:43:38.362368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:22:50.401 [2024-09-28 23:43:38.362375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:22:50.401 [2024-09-28 23:43:38.362381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:50.401 [2024-09-28 23:43:38.362399] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:22:50.401 [2024-09-28 23:43:38.362991] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:22:50.401 [2024-09-28 23:43:38.363005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:50.401 [2024-09-28 23:43:38.363011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:22:50.401 [2024-09-28 23:43:38.363018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.614 ms 00:22:50.401 [2024-09-28 23:43:38.363025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:50.401 [2024-09-28 23:43:38.363980] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:22:50.401 [2024-09-28 23:43:38.373378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:50.401 [2024-09-28 23:43:38.373407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:22:50.401 [2024-09-28 23:43:38.373416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.399 ms 00:22:50.401 [2024-09-28 23:43:38.373422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:50.401 [2024-09-28 23:43:38.373468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:50.401 [2024-09-28 23:43:38.373476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:22:50.401 [2024-09-28 23:43:38.373482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:22:50.401 [2024-09-28 23:43:38.373488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:50.401 [2024-09-28 23:43:38.377952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:50.401 [2024-09-28 
23:43:38.377978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:22:50.401 [2024-09-28 23:43:38.377985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.400 ms 00:22:50.402 [2024-09-28 23:43:38.377991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:50.402 [2024-09-28 23:43:38.378037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:50.402 [2024-09-28 23:43:38.378044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:22:50.402 [2024-09-28 23:43:38.378052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:22:50.402 [2024-09-28 23:43:38.378058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:50.402 [2024-09-28 23:43:38.378092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:50.402 [2024-09-28 23:43:38.378099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:22:50.402 [2024-09-28 23:43:38.378105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:22:50.402 [2024-09-28 23:43:38.378110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:50.402 [2024-09-28 23:43:38.378126] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:22:50.402 [2024-09-28 23:43:38.380768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:50.402 [2024-09-28 23:43:38.380888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:22:50.402 [2024-09-28 23:43:38.380901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.645 ms 00:22:50.402 [2024-09-28 23:43:38.380907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:50.402 [2024-09-28 23:43:38.380931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:50.402 [2024-09-28 23:43:38.380938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:22:50.402 [2024-09-28 23:43:38.380948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:22:50.402 [2024-09-28 23:43:38.380954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:50.402 [2024-09-28 23:43:38.380970] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:22:50.402 [2024-09-28 23:43:38.380984] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:22:50.402 [2024-09-28 23:43:38.381011] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:22:50.402 [2024-09-28 23:43:38.381022] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:22:50.402 [2024-09-28 23:43:38.381100] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:22:50.402 [2024-09-28 23:43:38.381110] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:22:50.402 [2024-09-28 23:43:38.381118] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:22:50.402 [2024-09-28 23:43:38.381126] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:22:50.402 [2024-09-28 23:43:38.381133] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:22:50.402 [2024-09-28 23:43:38.381139] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:22:50.402 [2024-09-28 23:43:38.381145] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:22:50.402 [2024-09-28 23:43:38.381150] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:22:50.402 [2024-09-28 23:43:38.381155] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:22:50.402 [2024-09-28 23:43:38.381161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:50.402 [2024-09-28 23:43:38.381167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:22:50.402 [2024-09-28 23:43:38.381173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.193 ms 00:22:50.402 [2024-09-28 23:43:38.381180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:50.402 [2024-09-28 23:43:38.381244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:50.402 [2024-09-28 23:43:38.381250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:22:50.402 [2024-09-28 23:43:38.381256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.052 ms 00:22:50.402 [2024-09-28 23:43:38.381261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:50.402 [2024-09-28 23:43:38.381336] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:22:50.402 [2024-09-28 23:43:38.381343] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:22:50.402 [2024-09-28 23:43:38.381349] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:22:50.402 [2024-09-28 23:43:38.381355] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:50.402 [2024-09-28 23:43:38.381362] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:22:50.402 [2024-09-28 23:43:38.381367] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:22:50.402 [2024-09-28 23:43:38.381372] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:22:50.402 [2024-09-28 23:43:38.381377] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:22:50.402 [2024-09-28 23:43:38.381383] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:22:50.402 [2024-09-28 23:43:38.381388] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:50.402 [2024-09-28 23:43:38.381394] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:22:50.402 [2024-09-28 23:43:38.381399] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:22:50.402 [2024-09-28 23:43:38.381404] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:50.402 [2024-09-28 23:43:38.381409] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:22:50.402 [2024-09-28 23:43:38.381414] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:22:50.402 [2024-09-28 23:43:38.381419] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:50.402 [2024-09-28 23:43:38.381425] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:22:50.402 [2024-09-28 23:43:38.381431] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:22:50.402 [2024-09-28 23:43:38.381436] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:50.402 [2024-09-28 23:43:38.381441] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:22:50.402 [2024-09-28 23:43:38.381446] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:22:50.402 [2024-09-28 23:43:38.381451] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:22:50.402 [2024-09-28 23:43:38.381456] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:22:50.402 [2024-09-28 23:43:38.381461] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:22:50.402 [2024-09-28 23:43:38.381470] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:22:50.402 [2024-09-28 23:43:38.381475] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:22:50.402 [2024-09-28 23:43:38.381479] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:22:50.402 [2024-09-28 23:43:38.381484] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:22:50.402 [2024-09-28 23:43:38.381489] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:22:50.402 [2024-09-28 23:43:38.381493] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:22:50.402 [2024-09-28 23:43:38.381498] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:22:50.402 [2024-09-28 23:43:38.381503] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:22:50.402 [2024-09-28 23:43:38.381523] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:22:50.402 [2024-09-28 23:43:38.381528] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:50.402 [2024-09-28 23:43:38.381534] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:22:50.402 [2024-09-28 23:43:38.381538] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:22:50.402 [2024-09-28 23:43:38.381543] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:50.402 [2024-09-28 23:43:38.381548] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:22:50.402 [2024-09-28 23:43:38.381554] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:22:50.402 [2024-09-28 23:43:38.381558] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:50.402 [2024-09-28 23:43:38.381563] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:22:50.402 [2024-09-28 23:43:38.381569] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:22:50.402 [2024-09-28 23:43:38.381574] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:50.402 [2024-09-28 23:43:38.381578] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:22:50.402 [2024-09-28 23:43:38.381584] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:22:50.402 [2024-09-28 23:43:38.381590] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:22:50.402 [2024-09-28 23:43:38.381595] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:50.402 [2024-09-28 23:43:38.381601] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:22:50.402 [2024-09-28 23:43:38.381607] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:22:50.402 [2024-09-28 23:43:38.381618] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:22:50.402 [2024-09-28 23:43:38.381623] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:22:50.402 [2024-09-28 23:43:38.381628] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:22:50.402 [2024-09-28 23:43:38.381633] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:22:50.402 [2024-09-28 23:43:38.381639] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:22:50.402 [2024-09-28 23:43:38.381645] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:50.402 [2024-09-28 23:43:38.381652] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:22:50.402 [2024-09-28 23:43:38.381657] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:22:50.402 [2024-09-28 23:43:38.381663] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:22:50.402 [2024-09-28 23:43:38.381668] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:22:50.402 [2024-09-28 23:43:38.381673] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:22:50.402 [2024-09-28 23:43:38.381679] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:22:50.402 [2024-09-28 23:43:38.381684] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:22:50.402 [2024-09-28 23:43:38.381690] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:22:50.402 [2024-09-28 23:43:38.381695] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:22:50.403 [2024-09-28 23:43:38.381700] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:22:50.403 [2024-09-28 23:43:38.381706] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:22:50.403 [2024-09-28 23:43:38.381711] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:22:50.403 [2024-09-28 23:43:38.381716] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:22:50.403 [2024-09-28 23:43:38.381722] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:22:50.403 [2024-09-28 23:43:38.381727] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:22:50.403 [2024-09-28 23:43:38.381733] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:50.403 [2024-09-28 23:43:38.381739] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:50.403 [2024-09-28 23:43:38.381744] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:22:50.403 [2024-09-28 23:43:38.381750] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:22:50.403 [2024-09-28 23:43:38.381755] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:22:50.403 [2024-09-28 23:43:38.381761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:50.403 [2024-09-28 23:43:38.381766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:22:50.403 [2024-09-28 23:43:38.381773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.477 ms 00:22:50.403 [2024-09-28 23:43:38.381779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:50.403 [2024-09-28 23:43:38.381811] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:22:50.403 [2024-09-28 23:43:38.381820] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:22:53.690 [2024-09-28 23:43:41.501540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:53.690 [2024-09-28 23:43:41.501601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:22:53.690 [2024-09-28 23:43:41.501615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3119.718 ms 00:22:53.690 [2024-09-28 23:43:41.501629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:53.690 [2024-09-28 23:43:41.526896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:53.690 [2024-09-28 23:43:41.526940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:22:53.690 [2024-09-28 23:43:41.526953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.064 ms 00:22:53.690 [2024-09-28 23:43:41.526961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:53.690 [2024-09-28 23:43:41.527035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:53.690 [2024-09-28 23:43:41.527045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:22:53.690 [2024-09-28 23:43:41.527055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:22:53.690 [2024-09-28 23:43:41.527062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:53.690 [2024-09-28 23:43:41.566677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:53.690 [2024-09-28 23:43:41.566845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:22:53.690 [2024-09-28 23:43:41.566864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 39.577 ms 00:22:53.690 [2024-09-28 23:43:41.566880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:53.690 [2024-09-28 23:43:41.566918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:53.690 [2024-09-28 23:43:41.566927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:22:53.690 [2024-09-28 23:43:41.566936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:22:53.690 [2024-09-28 23:43:41.566943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:53.690 [2024-09-28 23:43:41.567321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:53.690 [2024-09-28 23:43:41.567346] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:22:53.690 [2024-09-28 23:43:41.567355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.307 ms 00:22:53.690 [2024-09-28 23:43:41.567362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:53.690 [2024-09-28 23:43:41.567400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:53.690 [2024-09-28 23:43:41.567408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:22:53.690 [2024-09-28 23:43:41.567416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:22:53.690 [2024-09-28 23:43:41.567423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:53.690 [2024-09-28 23:43:41.581001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:53.690 [2024-09-28 23:43:41.581033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:22:53.690 [2024-09-28 23:43:41.581043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.556 ms 00:22:53.690 [2024-09-28 23:43:41.581050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:53.690 [2024-09-28 23:43:41.593910] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:22:53.690 [2024-09-28 23:43:41.593945] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:22:53.690 [2024-09-28 23:43:41.593957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:53.690 [2024-09-28 23:43:41.593964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:22:53.690 [2024-09-28 23:43:41.593973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.804 ms 00:22:53.690 [2024-09-28 23:43:41.593980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:53.690 [2024-09-28 23:43:41.607681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:53.690 [2024-09-28 23:43:41.607711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:22:53.690 [2024-09-28 23:43:41.607721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.662 ms 00:22:53.690 [2024-09-28 23:43:41.607729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:53.690 [2024-09-28 23:43:41.619091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:53.690 [2024-09-28 23:43:41.619120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:22:53.690 [2024-09-28 23:43:41.619130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.319 ms 00:22:53.690 [2024-09-28 23:43:41.619138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:53.690 [2024-09-28 23:43:41.630302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:53.690 [2024-09-28 23:43:41.630331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:22:53.690 [2024-09-28 23:43:41.630341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.129 ms 00:22:53.690 [2024-09-28 23:43:41.630348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:53.690 [2024-09-28 23:43:41.630985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:53.690 [2024-09-28 23:43:41.631010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:22:53.690 [2024-09-28 
23:43:41.631019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.548 ms 00:22:53.690 [2024-09-28 23:43:41.631026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:53.690 [2024-09-28 23:43:41.685869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:53.691 [2024-09-28 23:43:41.685914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:22:53.691 [2024-09-28 23:43:41.685926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 54.824 ms 00:22:53.691 [2024-09-28 23:43:41.685934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:53.691 [2024-09-28 23:43:41.696208] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:22:53.691 [2024-09-28 23:43:41.696879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:53.691 [2024-09-28 23:43:41.696907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:22:53.691 [2024-09-28 23:43:41.696921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.904 ms 00:22:53.691 [2024-09-28 23:43:41.696928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:53.691 [2024-09-28 23:43:41.697005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:53.691 [2024-09-28 23:43:41.697015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:22:53.691 [2024-09-28 23:43:41.697024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:22:53.691 [2024-09-28 23:43:41.697031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:53.691 [2024-09-28 23:43:41.697083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:53.691 [2024-09-28 23:43:41.697093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:22:53.691 [2024-09-28 23:43:41.697102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:22:53.691 [2024-09-28 23:43:41.697112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:53.691 [2024-09-28 23:43:41.697133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:53.691 [2024-09-28 23:43:41.697141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:22:53.691 [2024-09-28 23:43:41.697148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:22:53.691 [2024-09-28 23:43:41.697156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:53.691 [2024-09-28 23:43:41.697186] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:22:53.691 [2024-09-28 23:43:41.697196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:53.691 [2024-09-28 23:43:41.697203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:22:53.691 [2024-09-28 23:43:41.697211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:22:53.691 [2024-09-28 23:43:41.697218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:53.691 [2024-09-28 23:43:41.720415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:53.691 [2024-09-28 23:43:41.720564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:22:53.691 [2024-09-28 23:43:41.720580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.176 ms 00:22:53.691 [2024-09-28 23:43:41.720589] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:53.691 [2024-09-28 23:43:41.720796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:53.691 [2024-09-28 23:43:41.720812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:22:53.691 [2024-09-28 23:43:41.720822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.040 ms 00:22:53.691 [2024-09-28 23:43:41.720831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:53.691 [2024-09-28 23:43:41.721797] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3359.081 ms, result 0 00:22:53.691 [2024-09-28 23:43:41.737012] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:53.691 [2024-09-28 23:43:41.752996] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:22:53.691 [2024-09-28 23:43:41.761099] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:22:53.691 23:43:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:53.691 23:43:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:22:53.691 23:43:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:22:53.691 23:43:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:22:53.691 23:43:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:22:53.950 [2024-09-28 23:43:41.985134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:53.950 [2024-09-28 23:43:41.985173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:22:53.950 [2024-09-28 23:43:41.985185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:22:53.950 [2024-09-28 23:43:41.985193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:53.950 [2024-09-28 23:43:41.985214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:53.950 [2024-09-28 23:43:41.985223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:22:53.950 [2024-09-28 23:43:41.985231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:22:53.950 [2024-09-28 23:43:41.985238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:53.950 [2024-09-28 23:43:41.985257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:53.950 [2024-09-28 23:43:41.985268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:22:53.950 [2024-09-28 23:43:41.985276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:22:53.950 [2024-09-28 23:43:41.985283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:53.950 [2024-09-28 23:43:41.985336] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.194 ms, result 0 00:22:53.950 true 00:22:53.950 23:43:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:22:54.209 { 00:22:54.209 "name": "ftl", 00:22:54.209 "properties": [ 00:22:54.209 { 00:22:54.209 "name": "superblock_version", 00:22:54.209 "value": 5, 00:22:54.209 "read-only": true 00:22:54.209 }, 
00:22:54.209 { 00:22:54.209 "name": "base_device", 00:22:54.209 "bands": [ 00:22:54.209 { 00:22:54.209 "id": 0, 00:22:54.209 "state": "CLOSED", 00:22:54.209 "validity": 1.0 00:22:54.209 }, 00:22:54.209 { 00:22:54.209 "id": 1, 00:22:54.209 "state": "CLOSED", 00:22:54.209 "validity": 1.0 00:22:54.209 }, 00:22:54.209 { 00:22:54.209 "id": 2, 00:22:54.209 "state": "CLOSED", 00:22:54.209 "validity": 0.007843137254901933 00:22:54.209 }, 00:22:54.209 { 00:22:54.209 "id": 3, 00:22:54.209 "state": "FREE", 00:22:54.209 "validity": 0.0 00:22:54.209 }, 00:22:54.209 { 00:22:54.209 "id": 4, 00:22:54.209 "state": "FREE", 00:22:54.209 "validity": 0.0 00:22:54.209 }, 00:22:54.209 { 00:22:54.209 "id": 5, 00:22:54.209 "state": "FREE", 00:22:54.209 "validity": 0.0 00:22:54.209 }, 00:22:54.209 { 00:22:54.209 "id": 6, 00:22:54.209 "state": "FREE", 00:22:54.209 "validity": 0.0 00:22:54.209 }, 00:22:54.209 { 00:22:54.209 "id": 7, 00:22:54.209 "state": "FREE", 00:22:54.209 "validity": 0.0 00:22:54.209 }, 00:22:54.209 { 00:22:54.209 "id": 8, 00:22:54.209 "state": "FREE", 00:22:54.209 "validity": 0.0 00:22:54.209 }, 00:22:54.209 { 00:22:54.209 "id": 9, 00:22:54.209 "state": "FREE", 00:22:54.209 "validity": 0.0 00:22:54.209 }, 00:22:54.209 { 00:22:54.209 "id": 10, 00:22:54.209 "state": "FREE", 00:22:54.209 "validity": 0.0 00:22:54.209 }, 00:22:54.209 { 00:22:54.209 "id": 11, 00:22:54.209 "state": "FREE", 00:22:54.209 "validity": 0.0 00:22:54.209 }, 00:22:54.209 { 00:22:54.209 "id": 12, 00:22:54.209 "state": "FREE", 00:22:54.209 "validity": 0.0 00:22:54.209 }, 00:22:54.209 { 00:22:54.209 "id": 13, 00:22:54.209 "state": "FREE", 00:22:54.209 "validity": 0.0 00:22:54.209 }, 00:22:54.209 { 00:22:54.209 "id": 14, 00:22:54.209 "state": "FREE", 00:22:54.209 "validity": 0.0 00:22:54.209 }, 00:22:54.209 { 00:22:54.209 "id": 15, 00:22:54.209 "state": "FREE", 00:22:54.209 "validity": 0.0 00:22:54.209 }, 00:22:54.209 { 00:22:54.209 "id": 16, 00:22:54.209 "state": "FREE", 00:22:54.209 "validity": 0.0 00:22:54.209 }, 00:22:54.209 { 00:22:54.209 "id": 17, 00:22:54.209 "state": "FREE", 00:22:54.209 "validity": 0.0 00:22:54.209 } 00:22:54.209 ], 00:22:54.209 "read-only": true 00:22:54.209 }, 00:22:54.209 { 00:22:54.209 "name": "cache_device", 00:22:54.209 "type": "bdev", 00:22:54.209 "chunks": [ 00:22:54.209 { 00:22:54.209 "id": 0, 00:22:54.209 "state": "INACTIVE", 00:22:54.209 "utilization": 0.0 00:22:54.209 }, 00:22:54.209 { 00:22:54.209 "id": 1, 00:22:54.209 "state": "OPEN", 00:22:54.209 "utilization": 0.0 00:22:54.209 }, 00:22:54.209 { 00:22:54.209 "id": 2, 00:22:54.209 "state": "OPEN", 00:22:54.209 "utilization": 0.0 00:22:54.209 }, 00:22:54.209 { 00:22:54.209 "id": 3, 00:22:54.209 "state": "FREE", 00:22:54.209 "utilization": 0.0 00:22:54.209 }, 00:22:54.209 { 00:22:54.209 "id": 4, 00:22:54.209 "state": "FREE", 00:22:54.209 "utilization": 0.0 00:22:54.209 } 00:22:54.209 ], 00:22:54.209 "read-only": true 00:22:54.209 }, 00:22:54.209 { 00:22:54.209 "name": "verbose_mode", 00:22:54.209 "value": true, 00:22:54.209 "unit": "", 00:22:54.209 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:22:54.209 }, 00:22:54.209 { 00:22:54.209 "name": "prep_upgrade_on_shutdown", 00:22:54.209 "value": false, 00:22:54.209 "unit": "", 00:22:54.209 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:22:54.209 } 00:22:54.209 ] 00:22:54.209 } 00:22:54.209 23:43:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:22:54.209 23:43:42 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:22:54.209 23:43:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:22:54.471 23:43:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:22:54.471 23:43:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:22:54.471 23:43:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:22:54.471 23:43:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:22:54.471 23:43:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:22:54.471 Validate MD5 checksum, iteration 1 00:22:54.471 23:43:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:22:54.471 23:43:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:22:54.471 23:43:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:22:54.471 23:43:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:22:54.471 23:43:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:22:54.471 23:43:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:22:54.471 23:43:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:22:54.471 23:43:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:22:54.471 23:43:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:22:54.471 23:43:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:22:54.471 23:43:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:22:54.471 23:43:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:22:54.472 23:43:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:22:54.730 [2024-09-28 23:43:42.692313] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
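[editor's note] The two jq filters traced above are the pre-shutdown gate for this test: the script dumps the FTL property tree once per check and requires that no cache chunk still holds data (utilization != 0.0) and that no band was left OPENED before the checksum phase starts. A standalone sketch of the same accounting — the rpc.py path, bdev name, and both filters are copied from the trace above; the wrapping script and its error handling are assumptions:

  #!/usr/bin/env bash
  # Count dirty cache chunks and open bands, as in upgrade_shutdown.sh@82/@89.
  set -euo pipefail
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # default /var/tmp/spdk.sock

  props=$("$RPC" bdev_ftl_get_properties -b ftl)

  # Cache chunks still holding user data; filter verbatim from the trace.
  used=$(jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' <<<"$props")

  # Bands caught mid-write; also verbatim from the trace.
  opened=$(jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' <<<"$props")

  if (( used != 0 || opened != 0 )); then
      echo "FTL not quiescent: used=$used opened=$opened" >&2
      exit 1
  fi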
00:22:54.730 [2024-09-28 23:43:42.692589] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78384 ] 00:22:54.730 [2024-09-28 23:43:42.839843] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:54.991 [2024-09-28 23:43:43.018902] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:22:58.711  Copying: 548/1024 [MB] (548 MBps) Copying: 1024/1024 [MB] (average 538 MBps) 00:22:58.711 00:22:58.711 23:43:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:22:58.711 23:43:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:23:00.629 23:43:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:23:00.629 23:43:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=f90f235d9159d7e34022074dad49f90d 00:23:00.629 23:43:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ f90f235d9159d7e34022074dad49f90d != \f\9\0\f\2\3\5\d\9\1\5\9\d\7\e\3\4\0\2\2\0\7\4\d\a\d\4\9\f\9\0\d ]] 00:23:00.629 Validate MD5 checksum, iteration 2 00:23:00.629 23:43:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:23:00.629 23:43:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:23:00.629 23:43:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:23:00.629 23:43:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:23:00.629 23:43:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:23:00.629 23:43:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:23:00.629 23:43:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:23:00.629 23:43:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:23:00.629 23:43:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:23:00.891 [2024-09-28 23:43:48.844900] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:23:00.891 [2024-09-28 23:43:48.845151] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78451 ] 00:23:00.891 [2024-09-28 23:43:48.992967] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:01.150 [2024-09-28 23:43:49.131045] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:23:06.404  Copying: 658/1024 [MB] (658 MBps) Copying: 1024/1024 [MB] (average 655 MBps) 00:23:06.404 00:23:06.404 23:43:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:23:06.404 23:43:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:23:07.786 23:43:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:23:07.786 23:43:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=12d69bb757e3e96df2c10e9526d370a8 00:23:07.786 23:43:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 12d69bb757e3e96df2c10e9526d370a8 != \1\2\d\6\9\b\b\7\5\7\e\3\e\9\6\d\f\2\c\1\0\e\9\5\2\6\d\3\7\0\a\8 ]] 00:23:07.786 23:43:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:23:07.786 23:43:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:23:07.786 23:43:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:23:07.786 23:43:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 78310 ]] 00:23:07.786 23:43:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 78310 00:23:07.786 23:43:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:23:07.786 23:43:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:23:07.786 23:43:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:23:07.786 23:43:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:23:07.786 23:43:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:23:07.786 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:07.786 23:43:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=78529 00:23:07.786 23:43:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:23:07.786 23:43:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 78529 00:23:07.786 23:43:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 78529 ']' 00:23:07.786 23:43:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:07.786 23:43:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:07.786 23:43:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
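[editor's note] This is the dirty shutdown proper: the old target (pid 78310) is removed with SIGKILL, so FTL gets no chance to run its shutdown path, and a fresh spdk_tgt is started from the same tgt.json. The "Killed" notice and the cachen1 open retries a few lines below are the expected fallout. The pattern, sketched with the binary, config, and cpumask from this run — the polling probe stands in for waitforlisten and is an assumption:

  BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json

  kill -9 "$spdk_tgt_pid"      # no graceful teardown: superblock stays dirty
  unset spdk_tgt_pid

  "$BIN" '--cpumask=[0]' --config="$CONF" &
  spdk_tgt_pid=$!

  # Wait until the new target answers on its default RPC socket.
  until "$RPC" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done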
00:23:07.786 23:43:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:07.786 23:43:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:07.786 23:43:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:07.786 [2024-09-28 23:43:55.736066] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:23:07.786 [2024-09-28 23:43:55.736298] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78529 ] 00:23:07.786 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 830: 78310 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:23:07.786 [2024-09-28 23:43:55.880127] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:08.045 [2024-09-28 23:43:56.023606] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:23:08.618 [2024-09-28 23:43:56.596230] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:23:08.618 [2024-09-28 23:43:56.596425] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:23:08.618 [2024-09-28 23:43:56.739075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:08.618 [2024-09-28 23:43:56.739195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:23:08.618 [2024-09-28 23:43:56.739211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:23:08.618 [2024-09-28 23:43:56.739219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:08.618 [2024-09-28 23:43:56.739262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:08.618 [2024-09-28 23:43:56.739271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:23:08.618 [2024-09-28 23:43:56.739278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:23:08.618 [2024-09-28 23:43:56.739284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:08.619 [2024-09-28 23:43:56.739305] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:23:08.619 [2024-09-28 23:43:56.739823] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:23:08.619 [2024-09-28 23:43:56.739836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:08.619 [2024-09-28 23:43:56.739843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:23:08.619 [2024-09-28 23:43:56.739850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.540 ms 00:23:08.619 [2024-09-28 23:43:56.739858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:08.619 [2024-09-28 23:43:56.740070] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:23:08.619 [2024-09-28 23:43:56.752626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:08.619 [2024-09-28 23:43:56.752659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:23:08.619 [2024-09-28 23:43:56.752672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.556 ms 
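[editor's note] The "SHM: clean 0, shm_clean 0" notice is the payoff of the kill: on the first startup the superblock was fresh and the NV cache had to be scrubbed, whereas this startup finds the dirty flag set by "Set FTL dirty state" before the kill and takes the recovery pipeline instead (band state, chunk state, and open-chunk replay below). Given a saved copy of this output, a quick grep distinguishes the two startup flavours; the log path here is an assumption:

  LOG=/tmp/upgrade_shutdown.log   # assumed capture of this console output
  grep -q 'needs scrubbing' "$LOG" && echo 'fresh start: NV cache scrubbed'
  grep -q 'SHM: clean 0'    "$LOG" && echo 'dirty start: recovery pipeline ran'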
00:23:08.619 [2024-09-28 23:43:56.752678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:08.619 [2024-09-28 23:43:56.759472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:08.619 [2024-09-28 23:43:56.759499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:23:08.619 [2024-09-28 23:43:56.759519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:23:08.619 [2024-09-28 23:43:56.759526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:08.619 [2024-09-28 23:43:56.759782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:08.619 [2024-09-28 23:43:56.759792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:23:08.619 [2024-09-28 23:43:56.759799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.198 ms 00:23:08.619 [2024-09-28 23:43:56.759805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:08.619 [2024-09-28 23:43:56.759841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:08.619 [2024-09-28 23:43:56.759848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:23:08.619 [2024-09-28 23:43:56.759854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:23:08.619 [2024-09-28 23:43:56.759859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:08.619 [2024-09-28 23:43:56.759879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:08.619 [2024-09-28 23:43:56.759886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:23:08.619 [2024-09-28 23:43:56.759894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:23:08.619 [2024-09-28 23:43:56.759899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:08.619 [2024-09-28 23:43:56.759913] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:23:08.619 [2024-09-28 23:43:56.762220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:08.619 [2024-09-28 23:43:56.762242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:23:08.619 [2024-09-28 23:43:56.762249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.309 ms 00:23:08.619 [2024-09-28 23:43:56.762254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:08.619 [2024-09-28 23:43:56.762272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:08.619 [2024-09-28 23:43:56.762278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:23:08.619 [2024-09-28 23:43:56.762288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:23:08.619 [2024-09-28 23:43:56.762293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:08.619 [2024-09-28 23:43:56.762309] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:23:08.619 [2024-09-28 23:43:56.762323] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:23:08.619 [2024-09-28 23:43:56.762350] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:23:08.619 [2024-09-28 23:43:56.762362] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:23:08.619 [2024-09-28 
23:43:56.762439] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:23:08.619 [2024-09-28 23:43:56.762448] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:23:08.619 [2024-09-28 23:43:56.762456] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:23:08.619 [2024-09-28 23:43:56.762463] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:23:08.619 [2024-09-28 23:43:56.762470] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:23:08.619 [2024-09-28 23:43:56.762476] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:23:08.619 [2024-09-28 23:43:56.762484] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:23:08.619 [2024-09-28 23:43:56.762490] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:23:08.619 [2024-09-28 23:43:56.762495] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:23:08.619 [2024-09-28 23:43:56.762501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:08.619 [2024-09-28 23:43:56.762520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:23:08.619 [2024-09-28 23:43:56.762527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.193 ms 00:23:08.619 [2024-09-28 23:43:56.762532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:08.619 [2024-09-28 23:43:56.762597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:08.619 [2024-09-28 23:43:56.762603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:23:08.619 [2024-09-28 23:43:56.762609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:23:08.619 [2024-09-28 23:43:56.762616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:08.619 [2024-09-28 23:43:56.762690] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:23:08.619 [2024-09-28 23:43:56.762698] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:23:08.619 [2024-09-28 23:43:56.762704] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:23:08.619 [2024-09-28 23:43:56.762709] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:08.619 [2024-09-28 23:43:56.762715] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:23:08.619 [2024-09-28 23:43:56.762720] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:23:08.619 [2024-09-28 23:43:56.762726] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:23:08.619 [2024-09-28 23:43:56.762731] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:23:08.619 [2024-09-28 23:43:56.762736] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:23:08.619 [2024-09-28 23:43:56.762741] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:08.619 [2024-09-28 23:43:56.762746] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:23:08.619 [2024-09-28 23:43:56.762751] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:23:08.619 [2024-09-28 23:43:56.762755] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:08.619 [2024-09-28 
23:43:56.762763] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:23:08.619 [2024-09-28 23:43:56.762768] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:23:08.619 [2024-09-28 23:43:56.762773] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:08.619 [2024-09-28 23:43:56.762778] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:23:08.619 [2024-09-28 23:43:56.762783] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:23:08.619 [2024-09-28 23:43:56.762788] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:08.619 [2024-09-28 23:43:56.762793] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:23:08.619 [2024-09-28 23:43:56.762798] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:23:08.619 [2024-09-28 23:43:56.762803] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:23:08.619 [2024-09-28 23:43:56.762812] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:23:08.619 [2024-09-28 23:43:56.762817] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:23:08.619 [2024-09-28 23:43:56.762822] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:23:08.619 [2024-09-28 23:43:56.762827] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:23:08.619 [2024-09-28 23:43:56.762832] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:23:08.619 [2024-09-28 23:43:56.762836] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:23:08.619 [2024-09-28 23:43:56.762841] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:23:08.619 [2024-09-28 23:43:56.762846] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:23:08.619 [2024-09-28 23:43:56.762851] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:23:08.619 [2024-09-28 23:43:56.762856] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:23:08.619 [2024-09-28 23:43:56.762860] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:23:08.619 [2024-09-28 23:43:56.762865] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:08.619 [2024-09-28 23:43:56.762870] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:23:08.619 [2024-09-28 23:43:56.762874] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:23:08.619 [2024-09-28 23:43:56.762879] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:08.619 [2024-09-28 23:43:56.762899] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:23:08.620 [2024-09-28 23:43:56.762904] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:23:08.620 [2024-09-28 23:43:56.762909] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:08.620 [2024-09-28 23:43:56.762914] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:23:08.620 [2024-09-28 23:43:56.762919] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:23:08.620 [2024-09-28 23:43:56.762923] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:08.620 [2024-09-28 23:43:56.762928] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:23:08.620 [2024-09-28 23:43:56.762934] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:23:08.620 
[2024-09-28 23:43:56.762940] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:23:08.620 [2024-09-28 23:43:56.762946] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:08.620 [2024-09-28 23:43:56.762955] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:23:08.620 [2024-09-28 23:43:56.762961] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:23:08.620 [2024-09-28 23:43:56.762966] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:23:08.620 [2024-09-28 23:43:56.762971] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:23:08.620 [2024-09-28 23:43:56.762976] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:23:08.620 [2024-09-28 23:43:56.762981] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:23:08.620 [2024-09-28 23:43:56.762987] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:23:08.620 [2024-09-28 23:43:56.762994] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:08.620 [2024-09-28 23:43:56.763000] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:23:08.620 [2024-09-28 23:43:56.763005] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:23:08.620 [2024-09-28 23:43:56.763011] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:23:08.620 [2024-09-28 23:43:56.763016] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:23:08.620 [2024-09-28 23:43:56.763022] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:23:08.620 [2024-09-28 23:43:56.763027] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:23:08.620 [2024-09-28 23:43:56.763032] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:23:08.620 [2024-09-28 23:43:56.763037] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:23:08.620 [2024-09-28 23:43:56.763043] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:23:08.620 [2024-09-28 23:43:56.763048] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:23:08.620 [2024-09-28 23:43:56.763054] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:23:08.620 [2024-09-28 23:43:56.763059] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:23:08.620 [2024-09-28 23:43:56.763064] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:23:08.620 [2024-09-28 23:43:56.763070] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] 
Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:23:08.620 [2024-09-28 23:43:56.763075] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:23:08.620 [2024-09-28 23:43:56.763081] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:08.620 [2024-09-28 23:43:56.763087] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:08.620 [2024-09-28 23:43:56.763092] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:23:08.620 [2024-09-28 23:43:56.763097] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:23:08.620 [2024-09-28 23:43:56.763102] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:23:08.620 [2024-09-28 23:43:56.763107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:08.620 [2024-09-28 23:43:56.763113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:23:08.620 [2024-09-28 23:43:56.763119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.469 ms 00:23:08.620 [2024-09-28 23:43:56.763124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:08.620 [2024-09-28 23:43:56.782294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:08.620 [2024-09-28 23:43:56.782320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:23:08.620 [2024-09-28 23:43:56.782328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.133 ms 00:23:08.620 [2024-09-28 23:43:56.782335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:08.620 [2024-09-28 23:43:56.782363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:08.620 [2024-09-28 23:43:56.782370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:23:08.620 [2024-09-28 23:43:56.782379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:23:08.620 [2024-09-28 23:43:56.782385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:08.881 [2024-09-28 23:43:56.823491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:08.881 [2024-09-28 23:43:56.823529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:23:08.881 [2024-09-28 23:43:56.823539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 41.064 ms 00:23:08.881 [2024-09-28 23:43:56.823545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:08.881 [2024-09-28 23:43:56.823575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:08.881 [2024-09-28 23:43:56.823582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:23:08.881 [2024-09-28 23:43:56.823588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:23:08.881 [2024-09-28 23:43:56.823594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:08.881 [2024-09-28 23:43:56.823673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:08.881 [2024-09-28 23:43:56.823739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 
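[editor's note] Every management step is bracketed by the same trace_step quadruple (Action, name, duration, status), which makes the startup cost profile easy to mine from a captured log; in this run the first startup's NV cache scrub (3119.718 ms) and the open-chunk recoveries further down dwarf everything else. A small awk sketch — the log path is an assumption, the field pattern is exactly what trace_step prints:

  # Rank FTL management steps by duration, longest first.
  awk -F'name: |duration: ' '
      /trace_step.*name:/     { name = $2 }
      /trace_step.*duration:/ { sub(/ ms.*/, "", $2); print $2, name }
  ' /tmp/upgrade_shutdown.log | sort -rn | head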
00:23:08.881 [2024-09-28 23:43:56.823746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.036 ms 00:23:08.881 [2024-09-28 23:43:56.823752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:08.881 [2024-09-28 23:43:56.823782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:08.881 [2024-09-28 23:43:56.823792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:23:08.881 [2024-09-28 23:43:56.823798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:23:08.881 [2024-09-28 23:43:56.823807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:08.881 [2024-09-28 23:43:56.834646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:08.881 [2024-09-28 23:43:56.834671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:23:08.881 [2024-09-28 23:43:56.834680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.824 ms 00:23:08.881 [2024-09-28 23:43:56.834685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:08.881 [2024-09-28 23:43:56.834770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:08.881 [2024-09-28 23:43:56.834783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:23:08.881 [2024-09-28 23:43:56.834790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:23:08.881 [2024-09-28 23:43:56.834796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:08.881 [2024-09-28 23:43:56.848170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:08.881 [2024-09-28 23:43:56.848199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:23:08.881 [2024-09-28 23:43:56.848207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.358 ms 00:23:08.881 [2024-09-28 23:43:56.848216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:08.881 [2024-09-28 23:43:56.855271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:08.881 [2024-09-28 23:43:56.855305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:23:08.881 [2024-09-28 23:43:56.855312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.396 ms 00:23:08.882 [2024-09-28 23:43:56.855318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:08.882 [2024-09-28 23:43:56.898694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:08.882 [2024-09-28 23:43:56.898729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:23:08.882 [2024-09-28 23:43:56.898738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 43.336 ms 00:23:08.882 [2024-09-28 23:43:56.898744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:08.882 [2024-09-28 23:43:56.898853] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:23:08.882 [2024-09-28 23:43:56.898933] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:23:08.882 [2024-09-28 23:43:56.899003] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:23:08.882 [2024-09-28 23:43:56.899074] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:23:08.882 [2024-09-28 23:43:56.899084] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:08.882 [2024-09-28 23:43:56.899090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:23:08.882 [2024-09-28 23:43:56.899098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.300 ms 00:23:08.882 [2024-09-28 23:43:56.899104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:08.882 [2024-09-28 23:43:56.899146] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:23:08.882 [2024-09-28 23:43:56.899154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:08.882 [2024-09-28 23:43:56.899160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:23:08.882 [2024-09-28 23:43:56.899167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:23:08.882 [2024-09-28 23:43:56.899172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:08.882 [2024-09-28 23:43:56.910642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:08.882 [2024-09-28 23:43:56.910671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:23:08.882 [2024-09-28 23:43:56.910679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.453 ms 00:23:08.882 [2024-09-28 23:43:56.910686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:08.882 [2024-09-28 23:43:56.917269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:08.882 [2024-09-28 23:43:56.917294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:23:08.882 [2024-09-28 23:43:56.917302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:23:08.882 [2024-09-28 23:43:56.917311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:08.882 [2024-09-28 23:43:56.917369] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:23:08.882 [2024-09-28 23:43:56.917482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:08.882 [2024-09-28 23:43:56.917497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:23:08.882 [2024-09-28 23:43:56.917504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.114 ms 00:23:08.882 [2024-09-28 23:43:56.917519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:09.454 [2024-09-28 23:43:57.345744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:09.454 [2024-09-28 23:43:57.345786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:23:09.455 [2024-09-28 23:43:57.345798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 427.565 ms 00:23:09.455 [2024-09-28 23:43:57.345805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:09.455 [2024-09-28 23:43:57.349300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:09.455 [2024-09-28 23:43:57.349333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:23:09.455 [2024-09-28 23:43:57.349342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.921 ms 00:23:09.455 [2024-09-28 23:43:57.349348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:09.455 [2024-09-28 23:43:57.349649] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered 
chunk, offset = 262144, seq id 14 00:23:09.455 [2024-09-28 23:43:57.349673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:09.455 [2024-09-28 23:43:57.349680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:23:09.455 [2024-09-28 23:43:57.349687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.303 ms 00:23:09.455 [2024-09-28 23:43:57.349694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:09.455 [2024-09-28 23:43:57.349721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:09.455 [2024-09-28 23:43:57.349728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:23:09.455 [2024-09-28 23:43:57.349735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:23:09.455 [2024-09-28 23:43:57.349741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:09.455 [2024-09-28 23:43:57.349768] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 432.397 ms, result 0 00:23:09.455 [2024-09-28 23:43:57.349797] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:23:09.455 [2024-09-28 23:43:57.349881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:09.455 [2024-09-28 23:43:57.349894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:23:09.455 [2024-09-28 23:43:57.349901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.084 ms 00:23:09.455 [2024-09-28 23:43:57.349907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:09.716 [2024-09-28 23:43:57.798669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:09.716 [2024-09-28 23:43:57.798720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:23:09.716 [2024-09-28 23:43:57.798731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 448.021 ms 00:23:09.716 [2024-09-28 23:43:57.798737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:09.717 [2024-09-28 23:43:57.802388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:09.717 [2024-09-28 23:43:57.802417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:23:09.717 [2024-09-28 23:43:57.802424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.087 ms 00:23:09.717 [2024-09-28 23:43:57.802430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:09.717 [2024-09-28 23:43:57.802901] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:23:09.717 [2024-09-28 23:43:57.802938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:09.717 [2024-09-28 23:43:57.802944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:23:09.717 [2024-09-28 23:43:57.802951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.485 ms 00:23:09.717 [2024-09-28 23:43:57.802957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:09.717 [2024-09-28 23:43:57.802981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:09.717 [2024-09-28 23:43:57.802988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:23:09.717 [2024-09-28 23:43:57.802995] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:23:09.717 [2024-09-28 23:43:57.803000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:09.717 [2024-09-28 23:43:57.803026] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 453.226 ms, result 0 00:23:09.717 [2024-09-28 23:43:57.803058] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:23:09.717 [2024-09-28 23:43:57.803066] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:23:09.717 [2024-09-28 23:43:57.803073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:09.717 [2024-09-28 23:43:57.803079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:23:09.717 [2024-09-28 23:43:57.803088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 885.716 ms 00:23:09.717 [2024-09-28 23:43:57.803094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:09.717 [2024-09-28 23:43:57.803118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:09.717 [2024-09-28 23:43:57.803124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:23:09.717 [2024-09-28 23:43:57.803131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:23:09.717 [2024-09-28 23:43:57.803137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:09.717 [2024-09-28 23:43:57.811499] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:23:09.717 [2024-09-28 23:43:57.811590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:09.717 [2024-09-28 23:43:57.811599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:23:09.717 [2024-09-28 23:43:57.811606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.440 ms 00:23:09.717 [2024-09-28 23:43:57.811611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:09.717 [2024-09-28 23:43:57.812124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:09.717 [2024-09-28 23:43:57.812144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:23:09.717 [2024-09-28 23:43:57.812151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.463 ms 00:23:09.717 [2024-09-28 23:43:57.812157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:09.717 [2024-09-28 23:43:57.813832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:09.717 [2024-09-28 23:43:57.813850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:23:09.717 [2024-09-28 23:43:57.813857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.663 ms 00:23:09.717 [2024-09-28 23:43:57.813864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:09.717 [2024-09-28 23:43:57.813894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:09.717 [2024-09-28 23:43:57.813901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:23:09.717 [2024-09-28 23:43:57.813907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:23:09.717 [2024-09-28 23:43:57.813913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:09.717 [2024-09-28 23:43:57.813989] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:09.717 [2024-09-28 23:43:57.813996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:23:09.717 [2024-09-28 23:43:57.814002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:23:09.717 [2024-09-28 23:43:57.814007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:09.717 [2024-09-28 23:43:57.814023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:09.717 [2024-09-28 23:43:57.814031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:23:09.717 [2024-09-28 23:43:57.814037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:23:09.717 [2024-09-28 23:43:57.814043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:09.717 [2024-09-28 23:43:57.814065] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:23:09.717 [2024-09-28 23:43:57.814073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:09.717 [2024-09-28 23:43:57.814078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:23:09.717 [2024-09-28 23:43:57.814084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:23:09.717 [2024-09-28 23:43:57.814090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:09.717 [2024-09-28 23:43:57.814127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:09.717 [2024-09-28 23:43:57.814136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:23:09.717 [2024-09-28 23:43:57.814143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:23:09.717 [2024-09-28 23:43:57.814149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:09.717 [2024-09-28 23:43:57.814973] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1075.552 ms, result 0 00:23:09.717 [2024-09-28 23:43:57.827836] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:09.717 [2024-09-28 23:43:57.843835] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:23:09.717 [2024-09-28 23:43:57.851921] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:23:10.287 23:43:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:10.287 23:43:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:23:10.287 23:43:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:23:10.287 23:43:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:23:10.287 23:43:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:23:10.287 23:43:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:23:10.287 23:43:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:23:10.287 23:43:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:23:10.287 Validate MD5 checksum, iteration 1 00:23:10.287 23:43:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:23:10.287 23:43:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0
00:23:10.287 23:43:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup
00:23:10.287 23:43:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock'
00:23:10.287 23:43:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]]
00:23:10.287 23:43:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0
00:23:10.287 23:43:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0
00:23:10.287 [2024-09-28 23:43:58.246263] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization...
00:23:10.287 [2024-09-28 23:43:58.246354] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78565 ]
00:23:10.287 [2024-09-28 23:43:58.388014] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:10.548 [2024-09-28 23:43:58.532422] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:23:13.440  Copying: 664/1024 [MB] (664 MBps) Copying: 1024/1024 [MB] (average 655 MBps)
00:23:13.440
00:23:13.440 23:44:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024
00:23:13.440 23:44:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file
00:23:15.341 23:44:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d '
00:23:15.341 Validate MD5 checksum, iteration 2
23:44:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=f90f235d9159d7e34022074dad49f90d
00:23:15.341 23:44:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ f90f235d9159d7e34022074dad49f90d != \f\9\0\f\2\3\5\d\9\1\5\9\d\7\e\3\4\0\2\2\0\7\4\d\a\d\4\9\f\9\0\d ]]
00:23:15.341 23:44:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ ))
00:23:15.341 23:44:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations ))
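Iteration 1 has just passed (the computed f90f... sum matched the reference) and the loop moves on to iteration 2. Reconstructed from the xtrace entries above, the whole checksum pass boils down to a short loop; this is a minimal sketch rather than the verbatim script — `iterations`, `tcp_dd`, and the offsets come straight from the traced commands, while `testfile` and the `md5sums` reference array are assumed names for the output path and the checksums recorded before the target was shut down:

```bash
# Minimal sketch of the traced validation loop (upgrade_shutdown.sh@96-105).
# Assumptions: 'testfile' holds /home/vagrant/spdk_repo/spdk/test/ftl/file and
# 'md5sums' holds the reference checksums computed before shutdown.
test_validate_checksum() {
    local skip=0 i sum
    for ((i = 0; i < iterations; i++)); do
        echo "Validate MD5 checksum, iteration $((i + 1))"
        # Read 1024 x 1 MiB blocks from the restored FTL bdev via spdk_dd over NVMe/TCP.
        tcp_dd --ib=ftln1 --of="$testfile" --bs=1048576 --count=1024 --qd=2 --skip=$skip
        skip=$((skip + 1024))
        # The data survived the shutdown/upgrade cycle iff the checksum still matches.
        sum=$(md5sum "$testfile" | cut -f1 -d' ')
        [[ $sum == "${md5sums[i]}" ]] || return 1
    done
}
```

Each iteration advances `--skip` by 1024 blocks, so consecutive passes cover adjacent 1 GiB windows of the device, as the skip=1024 / skip=2048 assignments in the surrounding trace show.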
00:23:15.341 23:44:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2'
00:23:15.341 23:44:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024
00:23:15.341 23:44:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup
00:23:15.341 23:44:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock'
00:23:15.341 23:44:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]]
00:23:15.341 23:44:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0
00:23:15.341 23:44:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024
00:23:15.341 [2024-09-28 23:44:03.254107] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization...
[2024-09-28 23:44:03.254409] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78626 ]
[2024-09-28 23:44:03.401720] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-09-28 23:44:03.539644] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:23:21.753  Copying: 664/1024 [MB] (664 MBps) Copying: 1024/1024 [MB] (average 656 MBps)
00:23:21.753
00:23:21.753 23:44:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048
00:23:21.753 23:44:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file
00:23:23.130 23:44:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d '
00:23:23.130 23:44:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=12d69bb757e3e96df2c10e9526d370a8
00:23:23.130 23:44:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 12d69bb757e3e96df2c10e9526d370a8 != \1\2\d\6\9\b\b\7\5\7\e\3\e\9\6\d\f\2\c\1\0\e\9\5\2\6\d\3\7\0\a\8 ]]
00:23:23.130 23:44:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ ))
00:23:23.130 23:44:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations ))
00:23:23.130 23:44:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT
00:23:23.130 23:44:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup
00:23:23.130 23:44:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT
00:23:23.130 23:44:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file
00:23:23.130 23:44:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5
00:23:23.130 23:44:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup
00:23:23.130 23:44:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup
00:23:23.130 23:44:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown
00:23:23.130 23:44:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 78529 ]]
00:23:23.130 23:44:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 78529
00:23:23.130 23:44:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 78529 ']'
00:23:23.130 23:44:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 78529
00:23:23.130 23:44:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname
00:23:23.130 23:44:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:23:23.130 23:44:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78529
00:23:23.130 killing process with pid 78529
23:44:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_0
23:44:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
23:44:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968
-- # echo 'killing process with pid 78529' 00:23:23.130 23:44:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@969 -- # kill 78529 00:23:23.130 23:44:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 78529 00:23:23.704 [2024-09-28 23:44:11.607688] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:23:23.704 [2024-09-28 23:44:11.618784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:23.704 [2024-09-28 23:44:11.618820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:23:23.704 [2024-09-28 23:44:11.618830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:23:23.704 [2024-09-28 23:44:11.618838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:23.704 [2024-09-28 23:44:11.618855] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:23:23.704 [2024-09-28 23:44:11.620921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:23.704 [2024-09-28 23:44:11.620957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:23:23.704 [2024-09-28 23:44:11.620965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.056 ms 00:23:23.704 [2024-09-28 23:44:11.620971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:23.704 [2024-09-28 23:44:11.621141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:23.704 [2024-09-28 23:44:11.621156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:23:23.704 [2024-09-28 23:44:11.621162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.155 ms 00:23:23.704 [2024-09-28 23:44:11.621168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:23.704 [2024-09-28 23:44:11.624199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:23.704 [2024-09-28 23:44:11.624232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:23:23.704 [2024-09-28 23:44:11.624240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.018 ms 00:23:23.704 [2024-09-28 23:44:11.624246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:23.704 [2024-09-28 23:44:11.625115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:23.704 [2024-09-28 23:44:11.625133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:23:23.704 [2024-09-28 23:44:11.625140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.845 ms 00:23:23.704 [2024-09-28 23:44:11.625146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:23.704 [2024-09-28 23:44:11.632446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:23.704 [2024-09-28 23:44:11.632474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:23:23.704 [2024-09-28 23:44:11.632481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.266 ms 00:23:23.704 [2024-09-28 23:44:11.632487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:23.704 [2024-09-28 23:44:11.636628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:23.704 [2024-09-28 23:44:11.636656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:23:23.704 [2024-09-28 23:44:11.636664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 
duration: 4.096 ms 00:23:23.704 [2024-09-28 23:44:11.636671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:23.704 [2024-09-28 23:44:11.636738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:23.704 [2024-09-28 23:44:11.636746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:23:23.704 [2024-09-28 23:44:11.636753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.039 ms 00:23:23.704 [2024-09-28 23:44:11.636759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:23.704 [2024-09-28 23:44:11.644039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:23.704 [2024-09-28 23:44:11.644064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:23:23.704 [2024-09-28 23:44:11.644071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.269 ms 00:23:23.704 [2024-09-28 23:44:11.644077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:23.704 [2024-09-28 23:44:11.651117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:23.704 [2024-09-28 23:44:11.651143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:23:23.704 [2024-09-28 23:44:11.651150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.017 ms 00:23:23.704 [2024-09-28 23:44:11.651156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:23.704 [2024-09-28 23:44:11.658109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:23.704 [2024-09-28 23:44:11.658134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:23:23.704 [2024-09-28 23:44:11.658142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.930 ms 00:23:23.704 [2024-09-28 23:44:11.658147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:23.704 [2024-09-28 23:44:11.665311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:23.704 [2024-09-28 23:44:11.665337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:23:23.704 [2024-09-28 23:44:11.665344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.122 ms 00:23:23.704 [2024-09-28 23:44:11.665349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:23.704 [2024-09-28 23:44:11.665373] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:23:23.704 [2024-09-28 23:44:11.665384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:23:23.704 [2024-09-28 23:44:11.665391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:23:23.704 [2024-09-28 23:44:11.665397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:23:23.704 [2024-09-28 23:44:11.665403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:23.704 [2024-09-28 23:44:11.665409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:23.704 [2024-09-28 23:44:11.665414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:23.704 [2024-09-28 23:44:11.665420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:23.704 [2024-09-28 23:44:11.665425] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:23.704 [2024-09-28 23:44:11.665431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:23.704 [2024-09-28 23:44:11.665437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:23.705 [2024-09-28 23:44:11.665442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:23.705 [2024-09-28 23:44:11.665449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:23.705 [2024-09-28 23:44:11.665454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:23.705 [2024-09-28 23:44:11.665459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:23.705 [2024-09-28 23:44:11.665465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:23.705 [2024-09-28 23:44:11.665471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:23.705 [2024-09-28 23:44:11.665476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:23.705 [2024-09-28 23:44:11.665482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:23.705 [2024-09-28 23:44:11.665488] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:23:23.705 [2024-09-28 23:44:11.665494] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 50b9279f-27fa-4b89-b5a0-681976440219 00:23:23.705 [2024-09-28 23:44:11.665500] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:23:23.705 [2024-09-28 23:44:11.665506] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:23:23.705 [2024-09-28 23:44:11.665520] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:23:23.705 [2024-09-28 23:44:11.665528] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:23:23.705 [2024-09-28 23:44:11.665534] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:23:23.705 [2024-09-28 23:44:11.665540] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:23:23.705 [2024-09-28 23:44:11.665546] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:23:23.705 [2024-09-28 23:44:11.665550] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:23:23.705 [2024-09-28 23:44:11.665555] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:23:23.705 [2024-09-28 23:44:11.665561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:23.705 [2024-09-28 23:44:11.665568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:23:23.705 [2024-09-28 23:44:11.665575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.189 ms 00:23:23.705 [2024-09-28 23:44:11.665580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:23.705 [2024-09-28 23:44:11.674961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:23.705 [2024-09-28 23:44:11.674988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:23:23.705 [2024-09-28 23:44:11.674995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.369 ms 00:23:23.705 
[2024-09-28 23:44:11.675001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:23.705 [2024-09-28 23:44:11.675266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:23.705 [2024-09-28 23:44:11.675282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:23:23.705 [2024-09-28 23:44:11.675288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.252 ms 00:23:23.705 [2024-09-28 23:44:11.675294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:23.705 [2024-09-28 23:44:11.704428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:23:23.705 [2024-09-28 23:44:11.704455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:23:23.705 [2024-09-28 23:44:11.704463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:23:23.705 [2024-09-28 23:44:11.704469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:23.705 [2024-09-28 23:44:11.704490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:23:23.705 [2024-09-28 23:44:11.704497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:23:23.705 [2024-09-28 23:44:11.704503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:23:23.705 [2024-09-28 23:44:11.704516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:23.705 [2024-09-28 23:44:11.704564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:23:23.705 [2024-09-28 23:44:11.704571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:23:23.705 [2024-09-28 23:44:11.704580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:23:23.705 [2024-09-28 23:44:11.704586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:23.705 [2024-09-28 23:44:11.704599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:23:23.705 [2024-09-28 23:44:11.704605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:23:23.705 [2024-09-28 23:44:11.704611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:23:23.705 [2024-09-28 23:44:11.704616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:23.705 [2024-09-28 23:44:11.762869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:23:23.705 [2024-09-28 23:44:11.762912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:23:23.705 [2024-09-28 23:44:11.762920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:23:23.705 [2024-09-28 23:44:11.762926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:23.705 [2024-09-28 23:44:11.810231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:23:23.705 [2024-09-28 23:44:11.810263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:23:23.705 [2024-09-28 23:44:11.810271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:23:23.705 [2024-09-28 23:44:11.810277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:23.705 [2024-09-28 23:44:11.810337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:23:23.705 [2024-09-28 23:44:11.810345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:23:23.705 [2024-09-28 23:44:11.810352] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl] duration: 0.000 ms 00:23:23.705 [2024-09-28 23:44:11.810360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:23.705 [2024-09-28 23:44:11.810392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:23:23.705 [2024-09-28 23:44:11.810401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:23:23.705 [2024-09-28 23:44:11.810407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:23:23.705 [2024-09-28 23:44:11.810412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:23.705 [2024-09-28 23:44:11.810477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:23:23.705 [2024-09-28 23:44:11.810484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:23:23.705 [2024-09-28 23:44:11.810490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:23:23.705 [2024-09-28 23:44:11.810498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:23.705 [2024-09-28 23:44:11.810541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:23:23.705 [2024-09-28 23:44:11.810548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:23:23.705 [2024-09-28 23:44:11.810554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:23:23.705 [2024-09-28 23:44:11.810560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:23.705 [2024-09-28 23:44:11.810586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:23:23.705 [2024-09-28 23:44:11.810593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:23:23.705 [2024-09-28 23:44:11.810598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:23:23.705 [2024-09-28 23:44:11.810604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:23.705 [2024-09-28 23:44:11.810638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:23:23.705 [2024-09-28 23:44:11.810646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:23:23.705 [2024-09-28 23:44:11.810652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:23:23.705 [2024-09-28 23:44:11.810657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:23.705 [2024-09-28 23:44:11.810745] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 191.937 ms, result 0 00:23:24.715 23:44:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:23:24.715 23:44:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:24.715 23:44:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:23:24.715 23:44:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:23:24.715 23:44:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:23:24.715 23:44:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:24.715 Remove shared memory files 00:23:24.715 23:44:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:23:24.715 23:44:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:23:24.715 23:44:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:23:24.715 23:44:12 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:23:24.715 23:44:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid78310 00:23:24.715 23:44:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:23:24.715 23:44:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:23:24.715 00:23:24.715 real 1m21.732s 00:23:24.715 user 1m52.730s 00:23:24.715 sys 0m18.028s 00:23:24.715 23:44:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:24.715 ************************************ 00:23:24.715 END TEST ftl_upgrade_shutdown 00:23:24.715 ************************************ 00:23:24.715 23:44:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:24.715 23:44:12 ftl -- ftl/ftl.sh@80 -- # [[ 1 -eq 1 ]] 00:23:24.715 23:44:12 ftl -- ftl/ftl.sh@81 -- # run_test ftl_restore_fast /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:23:24.715 23:44:12 ftl -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:23:24.715 23:44:12 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:24.715 23:44:12 ftl -- common/autotest_common.sh@10 -- # set +x 00:23:24.715 ************************************ 00:23:24.715 START TEST ftl_restore_fast 00:23:24.715 ************************************ 00:23:24.715 23:44:12 ftl.ftl_restore_fast -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:23:24.715 * Looking for test storage... 00:23:24.715 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:23:24.715 23:44:12 ftl.ftl_restore_fast -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:23:24.715 23:44:12 ftl.ftl_restore_fast -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:23:24.715 23:44:12 ftl.ftl_restore_fast -- common/autotest_common.sh@1681 -- # lcov --version 00:23:24.715 23:44:12 ftl.ftl_restore_fast -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:23:24.715 23:44:12 ftl.ftl_restore_fast -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:24.715 23:44:12 ftl.ftl_restore_fast -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:24.715 23:44:12 ftl.ftl_restore_fast -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:24.715 23:44:12 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # IFS=.-: 00:23:24.715 23:44:12 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # read -ra ver1 00:23:24.715 23:44:12 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # IFS=.-: 00:23:24.715 23:44:12 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # read -ra ver2 00:23:24.715 23:44:12 ftl.ftl_restore_fast -- scripts/common.sh@338 -- # local 'op=<' 00:23:24.715 23:44:12 ftl.ftl_restore_fast -- scripts/common.sh@340 -- # ver1_l=2 00:23:24.715 23:44:12 ftl.ftl_restore_fast -- scripts/common.sh@341 -- # ver2_l=1 00:23:24.715 23:44:12 ftl.ftl_restore_fast -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:24.715 23:44:12 ftl.ftl_restore_fast -- scripts/common.sh@344 -- # case "$op" in 00:23:24.715 23:44:12 ftl.ftl_restore_fast -- scripts/common.sh@345 -- # : 1 00:23:24.715 23:44:12 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:24.715 23:44:12 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:24.715 23:44:12 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # decimal 1 00:23:24.715 23:44:12 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=1 00:23:24.715 23:44:12 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:24.715 23:44:12 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 1 00:23:24.715 23:44:12 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # ver1[v]=1 00:23:24.715 23:44:12 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # decimal 2 00:23:24.715 23:44:12 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=2 00:23:24.715 23:44:12 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:24.715 23:44:12 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 2 00:23:24.715 23:44:12 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # ver2[v]=2 00:23:24.715 23:44:12 ftl.ftl_restore_fast -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:24.715 23:44:12 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:24.715 23:44:12 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # return 0 00:23:24.715 23:44:12 ftl.ftl_restore_fast -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:24.715 23:44:12 ftl.ftl_restore_fast -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:23:24.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:24.715 --rc genhtml_branch_coverage=1 00:23:24.715 --rc genhtml_function_coverage=1 00:23:24.715 --rc genhtml_legend=1 00:23:24.715 --rc geninfo_all_blocks=1 00:23:24.715 --rc geninfo_unexecuted_blocks=1 00:23:24.715 00:23:24.715 ' 00:23:24.715 23:44:12 ftl.ftl_restore_fast -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:23:24.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:24.715 --rc genhtml_branch_coverage=1 00:23:24.715 --rc genhtml_function_coverage=1 00:23:24.715 --rc genhtml_legend=1 00:23:24.715 --rc geninfo_all_blocks=1 00:23:24.715 --rc geninfo_unexecuted_blocks=1 00:23:24.715 00:23:24.715 ' 00:23:24.715 23:44:12 ftl.ftl_restore_fast -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:23:24.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:24.715 --rc genhtml_branch_coverage=1 00:23:24.715 --rc genhtml_function_coverage=1 00:23:24.715 --rc genhtml_legend=1 00:23:24.715 --rc geninfo_all_blocks=1 00:23:24.715 --rc geninfo_unexecuted_blocks=1 00:23:24.715 00:23:24.715 ' 00:23:24.715 23:44:12 ftl.ftl_restore_fast -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:23:24.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:24.715 --rc genhtml_branch_coverage=1 00:23:24.715 --rc genhtml_function_coverage=1 00:23:24.715 --rc genhtml_legend=1 00:23:24.715 --rc geninfo_all_blocks=1 00:23:24.715 --rc geninfo_unexecuted_blocks=1 00:23:24.715 00:23:24.715 ' 00:23:24.715 23:44:12 ftl.ftl_restore_fast -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:23:24.716 23:44:12 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:23:24.716 23:44:12 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:23:24.716 23:44:12 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:23:24.716 23:44:12 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:23:24.716 23:44:12 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:23:24.716 23:44:12 ftl.ftl_restore_fast -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:24.716 23:44:12 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:23:24.716 23:44:12 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:23:24.716 23:44:12 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:24.716 23:44:12 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:24.716 23:44:12 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:23:24.716 23:44:12 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:23:24.716 23:44:12 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:24.716 23:44:12 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:24.716 23:44:12 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:23:24.716 23:44:12 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:23:24.716 23:44:12 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:24.716 23:44:12 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:24.716 23:44:12 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:23:24.716 23:44:12 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:23:24.716 23:44:12 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:24.716 23:44:12 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:24.716 23:44:12 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:24.716 23:44:12 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:24.716 23:44:12 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:23:24.716 23:44:12 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # spdk_ini_pid= 00:23:24.716 23:44:12 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:24.716 23:44:12 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:24.716 23:44:12 ftl.ftl_restore_fast -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:24.716 23:44:12 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mktemp -d 00:23:24.716 23:44:12 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.qCLM3dGgol 00:23:24.716 23:44:12 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:23:24.716 23:44:12 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:23:24.716 23:44:12 ftl.ftl_restore_fast -- ftl/restore.sh@19 -- # fast_shutdown=1 00:23:24.716 23:44:12 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:23:24.716 23:44:12 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:23:24.716 23:44:12 ftl.ftl_restore_fast -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:23:24.716 23:44:12 ftl.ftl_restore_fast 
-- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:23:24.716 23:44:12 ftl.ftl_restore_fast -- ftl/restore.sh@23 -- # shift 3 00:23:24.716 23:44:12 ftl.ftl_restore_fast -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:23:24.716 23:44:12 ftl.ftl_restore_fast -- ftl/restore.sh@25 -- # timeout=240 00:23:24.716 23:44:12 ftl.ftl_restore_fast -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:23:24.716 23:44:12 ftl.ftl_restore_fast -- ftl/restore.sh@39 -- # svcpid=78800 00:23:24.716 23:44:12 ftl.ftl_restore_fast -- ftl/restore.sh@41 -- # waitforlisten 78800 00:23:24.716 23:44:12 ftl.ftl_restore_fast -- common/autotest_common.sh@831 -- # '[' -z 78800 ']' 00:23:24.716 23:44:12 ftl.ftl_restore_fast -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:24.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:24.716 23:44:12 ftl.ftl_restore_fast -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:24.716 23:44:12 ftl.ftl_restore_fast -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:24.716 23:44:12 ftl.ftl_restore_fast -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:24.716 23:44:12 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:23:24.716 23:44:12 ftl.ftl_restore_fast -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:24.716 [2024-09-28 23:44:12.841725] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:23:24.716 [2024-09-28 23:44:12.841853] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78800 ] 00:23:24.976 [2024-09-28 23:44:12.990048] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:24.976 [2024-09-28 23:44:13.130425] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:23:25.543 23:44:13 ftl.ftl_restore_fast -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:25.543 23:44:13 ftl.ftl_restore_fast -- common/autotest_common.sh@864 -- # return 0 00:23:25.543 23:44:13 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:23:25.543 23:44:13 ftl.ftl_restore_fast -- ftl/common.sh@54 -- # local name=nvme0 00:23:25.543 23:44:13 ftl.ftl_restore_fast -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:23:25.543 23:44:13 ftl.ftl_restore_fast -- ftl/common.sh@56 -- # local size=103424 00:23:25.543 23:44:13 ftl.ftl_restore_fast -- ftl/common.sh@59 -- # local base_bdev 00:23:25.543 23:44:13 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:23:25.803 23:44:13 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:23:25.803 23:44:13 ftl.ftl_restore_fast -- ftl/common.sh@62 -- # local base_size 00:23:25.803 23:44:13 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:23:25.803 23:44:13 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:23:25.803 23:44:13 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:23:25.803 23:44:13 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:23:25.803 23:44:13 ftl.ftl_restore_fast -- 
common/autotest_common.sh@1381 -- # local nb
00:23:25.803 23:44:13 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1
00:23:26.065 23:44:14 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[
00:23:26.065 {
00:23:26.065 "name": "nvme0n1",
00:23:26.065 "aliases": [
00:23:26.065 "8881bfa3-31ff-4d1b-aced-a1ad9bfd48bf"
00:23:26.065 ],
00:23:26.065 "product_name": "NVMe disk",
00:23:26.065 "block_size": 4096,
00:23:26.065 "num_blocks": 1310720,
00:23:26.065 "uuid": "8881bfa3-31ff-4d1b-aced-a1ad9bfd48bf",
00:23:26.065 "numa_id": -1,
00:23:26.065 "assigned_rate_limits": {
00:23:26.065 "rw_ios_per_sec": 0,
00:23:26.065 "rw_mbytes_per_sec": 0,
00:23:26.065 "r_mbytes_per_sec": 0,
00:23:26.065 "w_mbytes_per_sec": 0
00:23:26.065 },
00:23:26.065 "claimed": true,
00:23:26.065 "claim_type": "read_many_write_one",
00:23:26.065 "zoned": false,
00:23:26.065 "supported_io_types": {
00:23:26.065 "read": true,
00:23:26.065 "write": true,
00:23:26.065 "unmap": true,
00:23:26.065 "flush": true,
00:23:26.065 "reset": true,
00:23:26.065 "nvme_admin": true,
00:23:26.065 "nvme_io": true,
00:23:26.065 "nvme_io_md": false,
00:23:26.065 "write_zeroes": true,
00:23:26.065 "zcopy": false,
00:23:26.065 "get_zone_info": false,
00:23:26.065 "zone_management": false,
00:23:26.065 "zone_append": false,
00:23:26.065 "compare": true,
00:23:26.065 "compare_and_write": false,
00:23:26.065 "abort": true,
00:23:26.065 "seek_hole": false,
00:23:26.065 "seek_data": false,
00:23:26.065 "copy": true,
00:23:26.065 "nvme_iov_md": false
00:23:26.065 },
00:23:26.065 "driver_specific": {
00:23:26.065 "nvme": [
00:23:26.065 {
00:23:26.065 "pci_address": "0000:00:11.0",
00:23:26.065 "trid": {
00:23:26.065 "trtype": "PCIe",
00:23:26.065 "traddr": "0000:00:11.0"
00:23:26.065 },
00:23:26.065 "ctrlr_data": {
00:23:26.065 "cntlid": 0,
00:23:26.065 "vendor_id": "0x1b36",
00:23:26.065 "model_number": "QEMU NVMe Ctrl",
00:23:26.065 "serial_number": "12341",
00:23:26.065 "firmware_revision": "8.0.0",
00:23:26.065 "subnqn": "nqn.2019-08.org.qemu:12341",
00:23:26.065 "oacs": {
00:23:26.065 "security": 0,
00:23:26.065 "format": 1,
00:23:26.065 "firmware": 0,
00:23:26.065 "ns_manage": 1
00:23:26.065 },
00:23:26.065 "multi_ctrlr": false,
00:23:26.065 "ana_reporting": false
00:23:26.065 },
00:23:26.065 "vs": {
00:23:26.065 "nvme_version": "1.4"
00:23:26.065 },
00:23:26.065 "ns_data": {
00:23:26.065 "id": 1,
00:23:26.065 "can_share": false
00:23:26.065 }
00:23:26.065 }
00:23:26.065 ],
00:23:26.065 "mp_policy": "active_passive"
00:23:26.065 }
00:23:26.065 }
00:23:26.065 ]'
00:23:26.065 23:44:14 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size'
00:23:26.065 23:44:14 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096
00:23:26.065 23:44:14 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks'
00:23:26.065 23:44:14 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=1310720
00:23:26.065 23:44:14 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=5120
00:23:26.065 23:44:14 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 5120
00:23:26.065 23:44:14 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # base_size=5120
00:23:26.065 23:44:14 ftl.ftl_restore_fast -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]]
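The arithmetic traced in the @1383-@1388 entries above is simple enough to restate. A sketch of the `get_bdev_size` helper as it behaves here — assumed structure, since only the xtrace is visible — which for nvme0n1 yields 4096 B x 1310720 blocks = 5120 MiB, hence `base_size=5120` and a false `[[ 103424 -le 5120 ]]` (the requested 103424 MiB exceeds the 5120 MiB namespace, so the test proceeds to carve a thin-provisioned lvol below):

```bash
# Sketch of get_bdev_size as traced: query the bdev over JSON-RPC, extract
# block_size and num_blocks with jq, and print the capacity in MiB.
get_bdev_size() {
    local bdev_name=$1 bdev_info bs nb
    bdev_info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b "$bdev_name")
    bs=$(jq '.[] .block_size' <<< "$bdev_info")   # 4096 for nvme0n1
    nb=$(jq '.[] .num_blocks' <<< "$bdev_info")   # 1310720 for nvme0n1
    echo $((bs * nb / 1024 / 1024))               # 4096 * 1310720 / 2^20 = 5120 MiB
}
```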
00:23:26.065 23:44:14 ftl.ftl_restore_fast -- ftl/common.sh@67 -- # clear_lvols
00:23:26.065 23:44:14 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # jq -r '.[] | .uuid'
00:23:26.065 23:44:14 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores
00:23:26.326 23:44:14 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # stores=19eda7f8-0e65-448b-b6de-6a815cc66f01
00:23:26.327 23:44:14 ftl.ftl_restore_fast -- ftl/common.sh@29 -- # for lvs in $stores
00:23:26.327 23:44:14 ftl.ftl_restore_fast -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 19eda7f8-0e65-448b-b6de-6a815cc66f01
00:23:26.588 23:44:14 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs
00:23:26.849 23:44:14 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # lvs=9fdfcbc4-b18c-4483-8420-7e85d685f6ea
00:23:26.849 23:44:14 ftl.ftl_restore_fast -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 9fdfcbc4-b18c-4483-8420-7e85d685f6ea
00:23:26.849 23:44:15 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # split_bdev=b2b1a948-a9bd-482a-9aa1-4485326c3450
00:23:26.849 23:44:15 ftl.ftl_restore_fast -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']'
00:23:26.849 23:44:15 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 b2b1a948-a9bd-482a-9aa1-4485326c3450
00:23:26.849 23:44:15 ftl.ftl_restore_fast -- ftl/common.sh@35 -- # local name=nvc0
00:23:26.849 23:44:15 ftl.ftl_restore_fast -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0
00:23:26.849 23:44:15 ftl.ftl_restore_fast -- ftl/common.sh@37 -- # local base_bdev=b2b1a948-a9bd-482a-9aa1-4485326c3450
00:23:26.849 23:44:15 ftl.ftl_restore_fast -- ftl/common.sh@38 -- # local cache_size=
00:23:27.110 23:44:15 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # get_bdev_size b2b1a948-a9bd-482a-9aa1-4485326c3450
00:23:27.110 23:44:15 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=b2b1a948-a9bd-482a-9aa1-4485326c3450
00:23:27.110 23:44:15 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info
00:23:27.110 23:44:15 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs
00:23:27.110 23:44:15 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local nb
00:23:27.110 23:44:15 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b2b1a948-a9bd-482a-9aa1-4485326c3450
00:23:27.110 23:44:15 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[
00:23:27.110 {
00:23:27.110 "name": "b2b1a948-a9bd-482a-9aa1-4485326c3450",
00:23:27.110 "aliases": [
00:23:27.110 "lvs/nvme0n1p0"
00:23:27.110 ],
00:23:27.110 "product_name": "Logical Volume",
00:23:27.110 "block_size": 4096,
00:23:27.110 "num_blocks": 26476544,
00:23:27.110 "uuid": "b2b1a948-a9bd-482a-9aa1-4485326c3450",
00:23:27.110 "assigned_rate_limits": {
00:23:27.110 "rw_ios_per_sec": 0,
00:23:27.110 "rw_mbytes_per_sec": 0,
00:23:27.110 "r_mbytes_per_sec": 0,
00:23:27.110 "w_mbytes_per_sec": 0
00:23:27.110 },
00:23:27.110 "claimed": false,
00:23:27.110 "zoned": false,
00:23:27.110 "supported_io_types": {
00:23:27.110 "read": true,
00:23:27.110 "write": true,
00:23:27.110 "unmap": true,
00:23:27.110 "flush": false,
00:23:27.110 "reset": true,
00:23:27.110 "nvme_admin": false,
00:23:27.110 "nvme_io": false,
00:23:27.110 "nvme_io_md": false,
00:23:27.110 "write_zeroes": true,
00:23:27.110 "zcopy": false,
00:23:27.110 "get_zone_info": false,
00:23:27.110 "zone_management": false,
00:23:27.110 "zone_append":
false, 00:23:27.110 "compare": false, 00:23:27.110 "compare_and_write": false, 00:23:27.110 "abort": false, 00:23:27.110 "seek_hole": true, 00:23:27.110 "seek_data": true, 00:23:27.110 "copy": false, 00:23:27.110 "nvme_iov_md": false 00:23:27.110 }, 00:23:27.110 "driver_specific": { 00:23:27.110 "lvol": { 00:23:27.111 "lvol_store_uuid": "9fdfcbc4-b18c-4483-8420-7e85d685f6ea", 00:23:27.111 "base_bdev": "nvme0n1", 00:23:27.111 "thin_provision": true, 00:23:27.111 "num_allocated_clusters": 0, 00:23:27.111 "snapshot": false, 00:23:27.111 "clone": false, 00:23:27.111 "esnap_clone": false 00:23:27.111 } 00:23:27.111 } 00:23:27.111 } 00:23:27.111 ]' 00:23:27.111 23:44:15 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:23:27.111 23:44:15 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:23:27.111 23:44:15 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:23:27.372 23:44:15 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=26476544 00:23:27.372 23:44:15 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:23:27.372 23:44:15 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 103424 00:23:27.372 23:44:15 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # local base_size=5171 00:23:27.372 23:44:15 ftl.ftl_restore_fast -- ftl/common.sh@44 -- # local nvc_bdev 00:23:27.372 23:44:15 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:23:27.632 23:44:15 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:23:27.632 23:44:15 ftl.ftl_restore_fast -- ftl/common.sh@47 -- # [[ -z '' ]] 00:23:27.632 23:44:15 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # get_bdev_size b2b1a948-a9bd-482a-9aa1-4485326c3450 00:23:27.632 23:44:15 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=b2b1a948-a9bd-482a-9aa1-4485326c3450 00:23:27.633 23:44:15 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:23:27.633 23:44:15 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:23:27.633 23:44:15 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local nb 00:23:27.633 23:44:15 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b2b1a948-a9bd-482a-9aa1-4485326c3450 00:23:27.633 23:44:15 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:23:27.633 { 00:23:27.633 "name": "b2b1a948-a9bd-482a-9aa1-4485326c3450", 00:23:27.633 "aliases": [ 00:23:27.633 "lvs/nvme0n1p0" 00:23:27.633 ], 00:23:27.633 "product_name": "Logical Volume", 00:23:27.633 "block_size": 4096, 00:23:27.633 "num_blocks": 26476544, 00:23:27.633 "uuid": "b2b1a948-a9bd-482a-9aa1-4485326c3450", 00:23:27.633 "assigned_rate_limits": { 00:23:27.633 "rw_ios_per_sec": 0, 00:23:27.633 "rw_mbytes_per_sec": 0, 00:23:27.633 "r_mbytes_per_sec": 0, 00:23:27.633 "w_mbytes_per_sec": 0 00:23:27.633 }, 00:23:27.633 "claimed": false, 00:23:27.633 "zoned": false, 00:23:27.633 "supported_io_types": { 00:23:27.633 "read": true, 00:23:27.633 "write": true, 00:23:27.633 "unmap": true, 00:23:27.633 "flush": false, 00:23:27.633 "reset": true, 00:23:27.633 "nvme_admin": false, 00:23:27.633 "nvme_io": false, 00:23:27.633 "nvme_io_md": false, 00:23:27.633 "write_zeroes": true, 00:23:27.633 "zcopy": false, 00:23:27.633 "get_zone_info": false, 00:23:27.633 "zone_management": false, 
00:23:27.633 "zone_append": false, 00:23:27.633 "compare": false, 00:23:27.633 "compare_and_write": false, 00:23:27.633 "abort": false, 00:23:27.633 "seek_hole": true, 00:23:27.633 "seek_data": true, 00:23:27.633 "copy": false, 00:23:27.633 "nvme_iov_md": false 00:23:27.633 }, 00:23:27.633 "driver_specific": { 00:23:27.633 "lvol": { 00:23:27.633 "lvol_store_uuid": "9fdfcbc4-b18c-4483-8420-7e85d685f6ea", 00:23:27.633 "base_bdev": "nvme0n1", 00:23:27.633 "thin_provision": true, 00:23:27.633 "num_allocated_clusters": 0, 00:23:27.633 "snapshot": false, 00:23:27.633 "clone": false, 00:23:27.633 "esnap_clone": false 00:23:27.633 } 00:23:27.633 } 00:23:27.633 } 00:23:27.633 ]' 00:23:27.633 23:44:15 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:23:27.633 23:44:15 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:23:27.633 23:44:15 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:23:27.894 23:44:15 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=26476544 00:23:27.894 23:44:15 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:23:27.894 23:44:15 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 103424 00:23:27.894 23:44:15 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # cache_size=5171 00:23:27.894 23:44:15 ftl.ftl_restore_fast -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:23:27.894 23:44:16 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:23:27.894 23:44:16 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # get_bdev_size b2b1a948-a9bd-482a-9aa1-4485326c3450 00:23:27.894 23:44:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=b2b1a948-a9bd-482a-9aa1-4485326c3450 00:23:27.894 23:44:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:23:27.894 23:44:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:23:27.894 23:44:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local nb 00:23:27.894 23:44:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b2b1a948-a9bd-482a-9aa1-4485326c3450 00:23:28.154 23:44:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:23:28.154 { 00:23:28.154 "name": "b2b1a948-a9bd-482a-9aa1-4485326c3450", 00:23:28.154 "aliases": [ 00:23:28.154 "lvs/nvme0n1p0" 00:23:28.154 ], 00:23:28.154 "product_name": "Logical Volume", 00:23:28.154 "block_size": 4096, 00:23:28.154 "num_blocks": 26476544, 00:23:28.154 "uuid": "b2b1a948-a9bd-482a-9aa1-4485326c3450", 00:23:28.154 "assigned_rate_limits": { 00:23:28.154 "rw_ios_per_sec": 0, 00:23:28.154 "rw_mbytes_per_sec": 0, 00:23:28.154 "r_mbytes_per_sec": 0, 00:23:28.154 "w_mbytes_per_sec": 0 00:23:28.154 }, 00:23:28.154 "claimed": false, 00:23:28.154 "zoned": false, 00:23:28.154 "supported_io_types": { 00:23:28.154 "read": true, 00:23:28.154 "write": true, 00:23:28.154 "unmap": true, 00:23:28.154 "flush": false, 00:23:28.154 "reset": true, 00:23:28.154 "nvme_admin": false, 00:23:28.154 "nvme_io": false, 00:23:28.154 "nvme_io_md": false, 00:23:28.154 "write_zeroes": true, 00:23:28.154 "zcopy": false, 00:23:28.154 "get_zone_info": false, 00:23:28.154 "zone_management": false, 00:23:28.154 "zone_append": false, 00:23:28.154 "compare": false, 00:23:28.154 "compare_and_write": false, 00:23:28.154 "abort": false, 00:23:28.154 "seek_hole": 
true, 00:23:28.154 "seek_data": true, 00:23:28.154 "copy": false, 00:23:28.154 "nvme_iov_md": false 00:23:28.154 }, 00:23:28.154 "driver_specific": { 00:23:28.154 "lvol": { 00:23:28.154 "lvol_store_uuid": "9fdfcbc4-b18c-4483-8420-7e85d685f6ea", 00:23:28.154 "base_bdev": "nvme0n1", 00:23:28.154 "thin_provision": true, 00:23:28.154 "num_allocated_clusters": 0, 00:23:28.154 "snapshot": false, 00:23:28.154 "clone": false, 00:23:28.154 "esnap_clone": false 00:23:28.154 } 00:23:28.154 } 00:23:28.154 } 00:23:28.154 ]' 00:23:28.154 23:44:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:23:28.154 23:44:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:23:28.154 23:44:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:23:28.154 23:44:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=26476544 00:23:28.154 23:44:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:23:28.154 23:44:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 103424 00:23:28.154 23:44:16 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:23:28.154 23:44:16 ftl.ftl_restore_fast -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d b2b1a948-a9bd-482a-9aa1-4485326c3450 --l2p_dram_limit 10' 00:23:28.154 23:44:16 ftl.ftl_restore_fast -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:23:28.154 23:44:16 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:23:28.154 23:44:16 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:23:28.154 23:44:16 ftl.ftl_restore_fast -- ftl/restore.sh@54 -- # '[' 1 -eq 1 ']' 00:23:28.154 23:44:16 ftl.ftl_restore_fast -- ftl/restore.sh@55 -- # ftl_construct_args+=' --fast-shutdown' 00:23:28.154 23:44:16 ftl.ftl_restore_fast -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d b2b1a948-a9bd-482a-9aa1-4485326c3450 --l2p_dram_limit 10 -c nvc0n1p0 --fast-shutdown 00:23:28.416 [2024-09-28 23:44:16.479967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.416 [2024-09-28 23:44:16.480006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:28.416 [2024-09-28 23:44:16.480018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:28.416 [2024-09-28 23:44:16.480025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.416 [2024-09-28 23:44:16.480069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.416 [2024-09-28 23:44:16.480076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:28.416 [2024-09-28 23:44:16.480084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:23:28.416 [2024-09-28 23:44:16.480090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.416 [2024-09-28 23:44:16.480110] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:28.416 [2024-09-28 23:44:16.480698] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:28.416 [2024-09-28 23:44:16.480715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.416 [2024-09-28 23:44:16.480721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:28.416 [2024-09-28 23:44:16.480729] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.609 ms 00:23:28.416 [2024-09-28 23:44:16.480737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.416 [2024-09-28 23:44:16.480762] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID a13dbf6d-6874-41aa-9963-ba6a63aee207 00:23:28.416 [2024-09-28 23:44:16.481731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.416 [2024-09-28 23:44:16.481759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:23:28.416 [2024-09-28 23:44:16.481766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:23:28.416 [2024-09-28 23:44:16.481774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.416 [2024-09-28 23:44:16.486426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.416 [2024-09-28 23:44:16.486453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:28.416 [2024-09-28 23:44:16.486461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.593 ms 00:23:28.416 [2024-09-28 23:44:16.486469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.416 [2024-09-28 23:44:16.486547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.416 [2024-09-28 23:44:16.486569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:28.416 [2024-09-28 23:44:16.486576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:23:28.416 [2024-09-28 23:44:16.486587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.416 [2024-09-28 23:44:16.486624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.416 [2024-09-28 23:44:16.486633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:28.416 [2024-09-28 23:44:16.486639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:23:28.416 [2024-09-28 23:44:16.486646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.417 [2024-09-28 23:44:16.486663] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:28.417 [2024-09-28 23:44:16.489546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.417 [2024-09-28 23:44:16.489570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:28.417 [2024-09-28 23:44:16.489580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.886 ms 00:23:28.417 [2024-09-28 23:44:16.489586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.417 [2024-09-28 23:44:16.489612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.417 [2024-09-28 23:44:16.489618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:28.417 [2024-09-28 23:44:16.489626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:23:28.417 [2024-09-28 23:44:16.489634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.417 [2024-09-28 23:44:16.489654] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:23:28.417 [2024-09-28 23:44:16.489756] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:28.417 [2024-09-28 23:44:16.489769] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:28.417 [2024-09-28 23:44:16.489777] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:28.417 [2024-09-28 23:44:16.489788] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:28.417 [2024-09-28 23:44:16.489795] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:28.417 [2024-09-28 23:44:16.489802] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:28.417 [2024-09-28 23:44:16.489808] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:28.417 [2024-09-28 23:44:16.489815] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:28.417 [2024-09-28 23:44:16.489821] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:28.417 [2024-09-28 23:44:16.489828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.417 [2024-09-28 23:44:16.489838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:28.417 [2024-09-28 23:44:16.489846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.175 ms 00:23:28.417 [2024-09-28 23:44:16.489852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.417 [2024-09-28 23:44:16.489917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.417 [2024-09-28 23:44:16.489925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:28.417 [2024-09-28 23:44:16.489932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:23:28.417 [2024-09-28 23:44:16.489938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.417 [2024-09-28 23:44:16.490014] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:28.417 [2024-09-28 23:44:16.490021] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:28.417 [2024-09-28 23:44:16.490029] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:28.417 [2024-09-28 23:44:16.490035] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:28.417 [2024-09-28 23:44:16.490042] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:28.417 [2024-09-28 23:44:16.490047] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:28.417 [2024-09-28 23:44:16.490053] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:28.417 [2024-09-28 23:44:16.490058] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:28.417 [2024-09-28 23:44:16.490065] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:28.417 [2024-09-28 23:44:16.490070] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:28.417 [2024-09-28 23:44:16.490076] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:28.417 [2024-09-28 23:44:16.490082] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:28.417 [2024-09-28 23:44:16.490088] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:28.417 [2024-09-28 23:44:16.490093] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:28.417 [2024-09-28 23:44:16.490100] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl0] offset: 113.88 MiB 00:23:28.417 [2024-09-28 23:44:16.490105] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:28.417 [2024-09-28 23:44:16.490112] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:28.417 [2024-09-28 23:44:16.490117] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:28.417 [2024-09-28 23:44:16.490124] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:28.417 [2024-09-28 23:44:16.490130] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:28.417 [2024-09-28 23:44:16.490136] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:28.417 [2024-09-28 23:44:16.490141] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:28.417 [2024-09-28 23:44:16.490147] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:28.417 [2024-09-28 23:44:16.490153] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:28.417 [2024-09-28 23:44:16.490160] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:28.417 [2024-09-28 23:44:16.490165] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:28.417 [2024-09-28 23:44:16.490171] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:28.417 [2024-09-28 23:44:16.490176] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:28.417 [2024-09-28 23:44:16.490183] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:28.417 [2024-09-28 23:44:16.490188] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:28.417 [2024-09-28 23:44:16.490193] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:28.417 [2024-09-28 23:44:16.490199] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:28.417 [2024-09-28 23:44:16.490206] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:28.417 [2024-09-28 23:44:16.490211] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:28.417 [2024-09-28 23:44:16.490217] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:28.417 [2024-09-28 23:44:16.490222] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:28.417 [2024-09-28 23:44:16.490229] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:28.417 [2024-09-28 23:44:16.490234] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:28.417 [2024-09-28 23:44:16.490240] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:28.417 [2024-09-28 23:44:16.490245] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:28.417 [2024-09-28 23:44:16.490251] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:28.417 [2024-09-28 23:44:16.490256] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:28.417 [2024-09-28 23:44:16.490262] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:28.417 [2024-09-28 23:44:16.490267] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:28.417 [2024-09-28 23:44:16.490274] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:28.417 [2024-09-28 23:44:16.490281] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:28.417 [2024-09-28 
23:44:16.490288] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:28.417 [2024-09-28 23:44:16.490294] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:28.417 [2024-09-28 23:44:16.490301] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:28.417 [2024-09-28 23:44:16.490307] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:28.417 [2024-09-28 23:44:16.490314] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:28.417 [2024-09-28 23:44:16.490318] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:28.417 [2024-09-28 23:44:16.490325] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:28.417 [2024-09-28 23:44:16.490333] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:28.417 [2024-09-28 23:44:16.490342] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:28.417 [2024-09-28 23:44:16.490351] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:28.417 [2024-09-28 23:44:16.490358] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:28.417 [2024-09-28 23:44:16.490364] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:28.417 [2024-09-28 23:44:16.490370] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:28.417 [2024-09-28 23:44:16.490376] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:28.417 [2024-09-28 23:44:16.490383] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:28.417 [2024-09-28 23:44:16.490388] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:28.417 [2024-09-28 23:44:16.490394] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:28.417 [2024-09-28 23:44:16.490400] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:28.417 [2024-09-28 23:44:16.490408] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:28.417 [2024-09-28 23:44:16.490413] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:28.417 [2024-09-28 23:44:16.490420] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:28.417 [2024-09-28 23:44:16.490425] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:28.417 [2024-09-28 23:44:16.490433] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:28.417 [2024-09-28 
23:44:16.490438] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:28.417 [2024-09-28 23:44:16.490446] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:28.417 [2024-09-28 23:44:16.490452] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:28.418 [2024-09-28 23:44:16.490459] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:28.418 [2024-09-28 23:44:16.490464] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:28.418 [2024-09-28 23:44:16.490471] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:28.418 [2024-09-28 23:44:16.490477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.418 [2024-09-28 23:44:16.490484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:28.418 [2024-09-28 23:44:16.490489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.516 ms 00:23:28.418 [2024-09-28 23:44:16.490496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.418 [2024-09-28 23:44:16.490729] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:23:28.418 [2024-09-28 23:44:16.490773] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:23:32.626 [2024-09-28 23:44:20.316054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.626 [2024-09-28 23:44:20.316315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:23:32.626 [2024-09-28 23:44:20.316405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3825.312 ms 00:23:32.626 [2024-09-28 23:44:20.316436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.626 [2024-09-28 23:44:20.348729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.626 [2024-09-28 23:44:20.348947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:32.626 [2024-09-28 23:44:20.349022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.937 ms 00:23:32.626 [2024-09-28 23:44:20.349051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.626 [2024-09-28 23:44:20.349242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.626 [2024-09-28 23:44:20.349450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:32.626 [2024-09-28 23:44:20.349478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:23:32.626 [2024-09-28 23:44:20.349545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.626 [2024-09-28 23:44:20.393519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.626 [2024-09-28 23:44:20.393786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:32.626 [2024-09-28 23:44:20.393819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.887 ms 00:23:32.626 [2024-09-28 23:44:20.393839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:23:32.626 [2024-09-28 23:44:20.393898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.626 [2024-09-28 23:44:20.393914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:32.626 [2024-09-28 23:44:20.393927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:32.626 [2024-09-28 23:44:20.393951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.626 [2024-09-28 23:44:20.394636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.626 [2024-09-28 23:44:20.394669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:32.626 [2024-09-28 23:44:20.394683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.605 ms 00:23:32.626 [2024-09-28 23:44:20.394701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.626 [2024-09-28 23:44:20.394859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.626 [2024-09-28 23:44:20.394875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:32.626 [2024-09-28 23:44:20.394886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.125 ms 00:23:32.626 [2024-09-28 23:44:20.394920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.626 [2024-09-28 23:44:20.413432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.626 [2024-09-28 23:44:20.413638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:32.626 [2024-09-28 23:44:20.413660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.485 ms 00:23:32.626 [2024-09-28 23:44:20.413670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.626 [2024-09-28 23:44:20.427065] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:32.626 [2024-09-28 23:44:20.430935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.626 [2024-09-28 23:44:20.430976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:32.626 [2024-09-28 23:44:20.430993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.171 ms 00:23:32.626 [2024-09-28 23:44:20.431001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.626 [2024-09-28 23:44:20.539603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.626 [2024-09-28 23:44:20.539662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:23:32.626 [2024-09-28 23:44:20.539683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 108.564 ms 00:23:32.626 [2024-09-28 23:44:20.539691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.626 [2024-09-28 23:44:20.539906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.626 [2024-09-28 23:44:20.539919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:32.626 [2024-09-28 23:44:20.539934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.158 ms 00:23:32.627 [2024-09-28 23:44:20.539942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.627 [2024-09-28 23:44:20.566357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.627 [2024-09-28 23:44:20.566424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 
00:23:32.627 [2024-09-28 23:44:20.566441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.337 ms 00:23:32.627 [2024-09-28 23:44:20.566450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.627 [2024-09-28 23:44:20.591580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.627 [2024-09-28 23:44:20.591755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:23:32.627 [2024-09-28 23:44:20.591782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.073 ms 00:23:32.627 [2024-09-28 23:44:20.591790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.627 [2024-09-28 23:44:20.592397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.627 [2024-09-28 23:44:20.592417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:32.627 [2024-09-28 23:44:20.592429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.563 ms 00:23:32.627 [2024-09-28 23:44:20.592437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.627 [2024-09-28 23:44:20.680733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.627 [2024-09-28 23:44:20.680784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:23:32.627 [2024-09-28 23:44:20.680804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 88.233 ms 00:23:32.627 [2024-09-28 23:44:20.680816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.627 [2024-09-28 23:44:20.708748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.627 [2024-09-28 23:44:20.708797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:23:32.627 [2024-09-28 23:44:20.708814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.842 ms 00:23:32.627 [2024-09-28 23:44:20.708822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.627 [2024-09-28 23:44:20.734837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.627 [2024-09-28 23:44:20.735028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:23:32.627 [2024-09-28 23:44:20.735054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.959 ms 00:23:32.627 [2024-09-28 23:44:20.735061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.627 [2024-09-28 23:44:20.761956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.627 [2024-09-28 23:44:20.762132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:32.627 [2024-09-28 23:44:20.762159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.770 ms 00:23:32.627 [2024-09-28 23:44:20.762167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.627 [2024-09-28 23:44:20.762217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.627 [2024-09-28 23:44:20.762226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:32.627 [2024-09-28 23:44:20.762244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:32.627 [2024-09-28 23:44:20.762252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.627 [2024-09-28 23:44:20.762362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.627 [2024-09-28 23:44:20.762373] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:32.627 [2024-09-28 23:44:20.762383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:23:32.627 [2024-09-28 23:44:20.762391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.627 [2024-09-28 23:44:20.763598] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4283.083 ms, result 0 00:23:32.627 { 00:23:32.627 "name": "ftl0", 00:23:32.627 "uuid": "a13dbf6d-6874-41aa-9963-ba6a63aee207" 00:23:32.627 } 00:23:32.627 23:44:20 ftl.ftl_restore_fast -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:23:32.627 23:44:20 ftl.ftl_restore_fast -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:23:32.888 23:44:21 ftl.ftl_restore_fast -- ftl/restore.sh@63 -- # echo ']}' 00:23:32.888 23:44:21 ftl.ftl_restore_fast -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:23:33.148 [2024-09-28 23:44:21.210951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.148 [2024-09-28 23:44:21.211009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:33.148 [2024-09-28 23:44:21.211022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:33.148 [2024-09-28 23:44:21.211034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.148 [2024-09-28 23:44:21.211058] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:33.148 [2024-09-28 23:44:21.214088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.148 [2024-09-28 23:44:21.214129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:33.148 [2024-09-28 23:44:21.214154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.008 ms 00:23:33.148 [2024-09-28 23:44:21.214162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.148 [2024-09-28 23:44:21.214455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.148 [2024-09-28 23:44:21.214467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:33.148 [2024-09-28 23:44:21.214478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.256 ms 00:23:33.148 [2024-09-28 23:44:21.214486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.148 [2024-09-28 23:44:21.217755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.148 [2024-09-28 23:44:21.217779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:33.148 [2024-09-28 23:44:21.217790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.251 ms 00:23:33.148 [2024-09-28 23:44:21.217801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.148 [2024-09-28 23:44:21.224397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.148 [2024-09-28 23:44:21.224439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:33.148 [2024-09-28 23:44:21.224454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.572 ms 00:23:33.148 [2024-09-28 23:44:21.224463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.148 [2024-09-28 23:44:21.250741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.149 
[2024-09-28 23:44:21.250790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:33.149 [2024-09-28 23:44:21.250806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.158 ms 00:23:33.149 [2024-09-28 23:44:21.250813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.149 [2024-09-28 23:44:21.268366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.149 [2024-09-28 23:44:21.268595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:33.149 [2024-09-28 23:44:21.268621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.493 ms 00:23:33.149 [2024-09-28 23:44:21.268630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.149 [2024-09-28 23:44:21.268831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.149 [2024-09-28 23:44:21.268846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:33.149 [2024-09-28 23:44:21.268858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.118 ms 00:23:33.149 [2024-09-28 23:44:21.268865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.149 [2024-09-28 23:44:21.295118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.149 [2024-09-28 23:44:21.295177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:33.149 [2024-09-28 23:44:21.295192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.228 ms 00:23:33.149 [2024-09-28 23:44:21.295200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.411 [2024-09-28 23:44:21.320287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.411 [2024-09-28 23:44:21.320332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:33.411 [2024-09-28 23:44:21.320346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.032 ms 00:23:33.411 [2024-09-28 23:44:21.320353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.411 [2024-09-28 23:44:21.345095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.411 [2024-09-28 23:44:21.345140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:33.411 [2024-09-28 23:44:21.345154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.685 ms 00:23:33.411 [2024-09-28 23:44:21.345161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.411 [2024-09-28 23:44:21.369976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.411 [2024-09-28 23:44:21.370020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:33.411 [2024-09-28 23:44:21.370034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.701 ms 00:23:33.411 [2024-09-28 23:44:21.370041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.411 [2024-09-28 23:44:21.370091] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:33.411 [2024-09-28 23:44:21.370108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:33.411 [2024-09-28 23:44:21.370121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:33.411 [2024-09-28 23:44:21.370129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:33.411 [2024-09-28 23:44:21.370139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:33.411 [2024-09-28 23:44:21.370147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:33.411 [2024-09-28 23:44:21.370158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:33.411 [2024-09-28 23:44:21.370166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:33.411 [2024-09-28 23:44:21.370179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:33.411 [2024-09-28 23:44:21.370187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:33.411 [2024-09-28 23:44:21.370198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:33.411 [2024-09-28 23:44:21.370205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:33.411 [2024-09-28 23:44:21.370214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:33.411 [2024-09-28 23:44:21.370222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:33.411 [2024-09-28 23:44:21.370231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:33.411 [2024-09-28 23:44:21.370239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:33.411 [2024-09-28 23:44:21.370248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:33.411 [2024-09-28 23:44:21.370256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:33.411 [2024-09-28 23:44:21.370265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:33.411 [2024-09-28 23:44:21.370272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:33.411 [2024-09-28 23:44:21.370283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:33.411 [2024-09-28 23:44:21.370291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:33.411 [2024-09-28 23:44:21.370303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:33.411 [2024-09-28 23:44:21.370310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:33.411 [2024-09-28 23:44:21.370321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:33.411 [2024-09-28 23:44:21.370328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:33.411 [2024-09-28 23:44:21.370338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:33.411 [2024-09-28 23:44:21.370347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:33.411 [2024-09-28 23:44:21.370357] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:33.411 [2024-09-28 23:44:21.370364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:33.411 [2024-09-28 23:44:21.370376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:33.411 [2024-09-28 23:44:21.370385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:33.411 [2024-09-28 23:44:21.370396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:33.411 [2024-09-28 23:44:21.370403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:33.411 [2024-09-28 23:44:21.370413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:33.411 [2024-09-28 23:44:21.370420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:33.411 [2024-09-28 23:44:21.370430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:33.411 [2024-09-28 23:44:21.370437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:33.411 [2024-09-28 23:44:21.370447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:33.411 [2024-09-28 23:44:21.370454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:33.411 [2024-09-28 23:44:21.370466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:33.411 [2024-09-28 23:44:21.370473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:33.411 [2024-09-28 23:44:21.370483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:33.411 [2024-09-28 23:44:21.370490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:33.411 [2024-09-28 23:44:21.370499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:33.411 [2024-09-28 23:44:21.370532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:33.411 [2024-09-28 23:44:21.370543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:33.411 [2024-09-28 23:44:21.370550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:33.411 [2024-09-28 23:44:21.370563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:33.411 [2024-09-28 23:44:21.370570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:33.411 [2024-09-28 23:44:21.370580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:33.411 [2024-09-28 23:44:21.370588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:33.411 [2024-09-28 23:44:21.370597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:33.411 [2024-09-28 
23:44:21.370606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:33.411 [2024-09-28 23:44:21.370616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:33.411 [2024-09-28 23:44:21.370623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:33.411 [2024-09-28 23:44:21.370635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:33.411 [2024-09-28 23:44:21.370643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:33.412 [2024-09-28 23:44:21.370676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:33.412 [2024-09-28 23:44:21.370685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:33.412 [2024-09-28 23:44:21.370695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:33.412 [2024-09-28 23:44:21.370702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:33.412 [2024-09-28 23:44:21.370720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:33.412 [2024-09-28 23:44:21.370729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:33.412 [2024-09-28 23:44:21.370740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:33.412 [2024-09-28 23:44:21.370747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:33.412 [2024-09-28 23:44:21.370757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:33.412 [2024-09-28 23:44:21.370765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:33.412 [2024-09-28 23:44:21.370775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:33.412 [2024-09-28 23:44:21.370782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:33.412 [2024-09-28 23:44:21.370792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:33.412 [2024-09-28 23:44:21.370801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:33.412 [2024-09-28 23:44:21.370815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:33.412 [2024-09-28 23:44:21.370822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:33.412 [2024-09-28 23:44:21.370833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:33.412 [2024-09-28 23:44:21.370840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:33.412 [2024-09-28 23:44:21.370850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:33.412 [2024-09-28 23:44:21.370861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 
00:23:33.412 [2024-09-28 23:44:21.370870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:33.412 [2024-09-28 23:44:21.370878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:33.412 [2024-09-28 23:44:21.370887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:33.412 [2024-09-28 23:44:21.370894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:33.412 [2024-09-28 23:44:21.370915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:33.412 [2024-09-28 23:44:21.370923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:33.412 [2024-09-28 23:44:21.370932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:33.412 [2024-09-28 23:44:21.370940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:33.412 [2024-09-28 23:44:21.370949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:33.412 [2024-09-28 23:44:21.370957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:33.412 [2024-09-28 23:44:21.370968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:33.412 [2024-09-28 23:44:21.370976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:33.412 [2024-09-28 23:44:21.370985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:33.412 [2024-09-28 23:44:21.370993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:33.412 [2024-09-28 23:44:21.371002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:33.412 [2024-09-28 23:44:21.371010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:33.412 [2024-09-28 23:44:21.371021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:33.412 [2024-09-28 23:44:21.371029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:33.412 [2024-09-28 23:44:21.371039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:33.412 [2024-09-28 23:44:21.371046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:33.412 [2024-09-28 23:44:21.371056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:33.412 [2024-09-28 23:44:21.371063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:33.412 [2024-09-28 23:44:21.371075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:33.412 [2024-09-28 23:44:21.371091] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:33.412 [2024-09-28 23:44:21.371104] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a13dbf6d-6874-41aa-9963-ba6a63aee207 00:23:33.412 
[2024-09-28 23:44:21.371113] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:33.412 [2024-09-28 23:44:21.371124] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:33.412 [2024-09-28 23:44:21.371132] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:33.412 [2024-09-28 23:44:21.371142] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:33.412 [2024-09-28 23:44:21.371150] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:33.412 [2024-09-28 23:44:21.371159] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:33.412 [2024-09-28 23:44:21.371169] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:33.412 [2024-09-28 23:44:21.371178] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:33.412 [2024-09-28 23:44:21.371184] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:33.412 [2024-09-28 23:44:21.371193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.412 [2024-09-28 23:44:21.371201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:33.412 [2024-09-28 23:44:21.371211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.104 ms 00:23:33.412 [2024-09-28 23:44:21.371219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.412 [2024-09-28 23:44:21.384846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.412 [2024-09-28 23:44:21.384885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:33.412 [2024-09-28 23:44:21.384900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.580 ms 00:23:33.412 [2024-09-28 23:44:21.384908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.412 [2024-09-28 23:44:21.385302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.412 [2024-09-28 23:44:21.385312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:33.412 [2024-09-28 23:44:21.385323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.347 ms 00:23:33.412 [2024-09-28 23:44:21.385332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.412 [2024-09-28 23:44:21.426731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:33.412 [2024-09-28 23:44:21.426778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:33.412 [2024-09-28 23:44:21.426793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:33.412 [2024-09-28 23:44:21.426804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.412 [2024-09-28 23:44:21.426873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:33.412 [2024-09-28 23:44:21.426882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:33.412 [2024-09-28 23:44:21.426892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:33.412 [2024-09-28 23:44:21.426911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.412 [2024-09-28 23:44:21.426994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:33.412 [2024-09-28 23:44:21.427005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:33.412 [2024-09-28 23:44:21.427015] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:33.412 [2024-09-28 23:44:21.427024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.412 [2024-09-28 23:44:21.427050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:33.412 [2024-09-28 23:44:21.427058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:33.412 [2024-09-28 23:44:21.427068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:33.412 [2024-09-28 23:44:21.427076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.412 [2024-09-28 23:44:21.513102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:33.412 [2024-09-28 23:44:21.513322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:33.412 [2024-09-28 23:44:21.513351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:33.412 [2024-09-28 23:44:21.513360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.674 [2024-09-28 23:44:21.583493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:33.674 [2024-09-28 23:44:21.583567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:33.674 [2024-09-28 23:44:21.583583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:33.674 [2024-09-28 23:44:21.583592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.674 [2024-09-28 23:44:21.583725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:33.674 [2024-09-28 23:44:21.583737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:33.674 [2024-09-28 23:44:21.583748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:33.674 [2024-09-28 23:44:21.583757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.674 [2024-09-28 23:44:21.583813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:33.674 [2024-09-28 23:44:21.583826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:33.674 [2024-09-28 23:44:21.583837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:33.674 [2024-09-28 23:44:21.583845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.674 [2024-09-28 23:44:21.583953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:33.674 [2024-09-28 23:44:21.583964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:33.674 [2024-09-28 23:44:21.583975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:33.674 [2024-09-28 23:44:21.583983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.674 [2024-09-28 23:44:21.584019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:33.674 [2024-09-28 23:44:21.584029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:33.674 [2024-09-28 23:44:21.584042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:33.674 [2024-09-28 23:44:21.584051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.674 [2024-09-28 23:44:21.584095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:33.674 [2024-09-28 23:44:21.584109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open 
cache bdev 00:23:33.674 [2024-09-28 23:44:21.584120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:33.674 [2024-09-28 23:44:21.584129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.674 [2024-09-28 23:44:21.584183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:33.674 [2024-09-28 23:44:21.584197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:33.674 [2024-09-28 23:44:21.584208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:33.674 [2024-09-28 23:44:21.584217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.674 [2024-09-28 23:44:21.584367] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 373.372 ms, result 0 00:23:33.674 true 00:23:33.674 23:44:21 ftl.ftl_restore_fast -- ftl/restore.sh@66 -- # killprocess 78800 00:23:33.674 23:44:21 ftl.ftl_restore_fast -- common/autotest_common.sh@950 -- # '[' -z 78800 ']' 00:23:33.674 23:44:21 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # kill -0 78800 00:23:33.674 23:44:21 ftl.ftl_restore_fast -- common/autotest_common.sh@955 -- # uname 00:23:33.674 23:44:21 ftl.ftl_restore_fast -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:33.674 23:44:21 ftl.ftl_restore_fast -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78800 00:23:33.674 killing process with pid 78800 00:23:33.674 23:44:21 ftl.ftl_restore_fast -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:33.674 23:44:21 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:33.674 23:44:21 ftl.ftl_restore_fast -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78800' 00:23:33.674 23:44:21 ftl.ftl_restore_fast -- common/autotest_common.sh@969 -- # kill 78800 00:23:33.674 23:44:21 ftl.ftl_restore_fast -- common/autotest_common.sh@974 -- # wait 78800 00:23:41.815 23:44:29 ftl.ftl_restore_fast -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:23:45.115 262144+0 records in 00:23:45.115 262144+0 records out 00:23:45.115 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.62454 s, 296 MB/s 00:23:45.115 23:44:32 ftl.ftl_restore_fast -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:23:46.501 23:44:34 ftl.ftl_restore_fast -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:46.501 [2024-09-28 23:44:34.391428] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
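[Annotation: the figures in this stretch of the log are internally consistent and worth a quick sanity check. The earlier jq probes report block_size 4096 and num_blocks 26476544, which is exactly the 103424 MiB base-device size echoed back (4096 x 26476544 / 1048576 = 103424), alongside the 5171 MiB slice of nvc0n1 split off as the NV cache and the --l2p_dram_limit 10 cap that produces the "l2p maximum resident size is: 9 (of 10) MiB" notice. Likewise, dd with bs=4K count=256K writes 262144 x 4096 = 1073741824 bytes (1 GiB), and 1 GiB over 3.62454 s is the reported ~296 MB/s. A minimal shell sketch re-deriving these numbers (variable names here are illustrative only, not part of the test scripts):

  # recompute the logical-volume size reported by the get_bdev_size steps
  bs=4096; nb=26476544
  echo "$(( bs * nb / 1024 / 1024 )) MiB"    # 103424 MiB, matching bdev_size
  # recompute the dd transfer size and throughput seen above
  echo "$(( 262144 * 4096 )) bytes"          # 1073741824 bytes = 1 GiB
  awk 'BEGIN { printf "%.0f MB/s\n", 1073741824 / 3.62454 / 1e6 }'   # ~296 MB/s

The md5sum taken before the spdk_dd replay records a reference checksum for the test file, evidently so that data read back through the restored FTL device can be verified against it later in the run.]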
00:23:46.501 [2024-09-28 23:44:34.391575] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79020 ] 00:23:46.501 [2024-09-28 23:44:34.538316] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:46.762 [2024-09-28 23:44:34.725373] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:23:47.025 [2024-09-28 23:44:34.991646] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:47.025 [2024-09-28 23:44:34.991725] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:47.025 [2024-09-28 23:44:35.154222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.025 [2024-09-28 23:44:35.154465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:47.025 [2024-09-28 23:44:35.154490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:47.025 [2024-09-28 23:44:35.154526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.025 [2024-09-28 23:44:35.154596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.025 [2024-09-28 23:44:35.154609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:47.025 [2024-09-28 23:44:35.154619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:23:47.025 [2024-09-28 23:44:35.154627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.025 [2024-09-28 23:44:35.154650] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:47.025 [2024-09-28 23:44:35.155404] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:47.025 [2024-09-28 23:44:35.155423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.025 [2024-09-28 23:44:35.155431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:47.025 [2024-09-28 23:44:35.155440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.779 ms 00:23:47.025 [2024-09-28 23:44:35.155449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.025 [2024-09-28 23:44:35.157087] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:47.025 [2024-09-28 23:44:35.171381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.025 [2024-09-28 23:44:35.171589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:47.025 [2024-09-28 23:44:35.171772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.297 ms 00:23:47.025 [2024-09-28 23:44:35.171815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.025 [2024-09-28 23:44:35.171959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.025 [2024-09-28 23:44:35.171987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:47.025 [2024-09-28 23:44:35.171998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:23:47.025 [2024-09-28 23:44:35.172007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.025 [2024-09-28 23:44:35.179975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:23:47.025 [2024-09-28 23:44:35.180021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:47.025 [2024-09-28 23:44:35.180033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.892 ms 00:23:47.025 [2024-09-28 23:44:35.180041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.025 [2024-09-28 23:44:35.180121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.025 [2024-09-28 23:44:35.180131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:47.025 [2024-09-28 23:44:35.180140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:23:47.025 [2024-09-28 23:44:35.180148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.025 [2024-09-28 23:44:35.180193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.025 [2024-09-28 23:44:35.180204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:47.025 [2024-09-28 23:44:35.180213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:23:47.025 [2024-09-28 23:44:35.180221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.025 [2024-09-28 23:44:35.180244] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:47.025 [2024-09-28 23:44:35.184476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.025 [2024-09-28 23:44:35.184669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:47.025 [2024-09-28 23:44:35.184689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.238 ms 00:23:47.025 [2024-09-28 23:44:35.184697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.025 [2024-09-28 23:44:35.184733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.025 [2024-09-28 23:44:35.184742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:47.025 [2024-09-28 23:44:35.184751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:23:47.025 [2024-09-28 23:44:35.184759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.025 [2024-09-28 23:44:35.184815] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:47.025 [2024-09-28 23:44:35.184838] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:47.025 [2024-09-28 23:44:35.184875] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:47.025 [2024-09-28 23:44:35.184891] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:47.025 [2024-09-28 23:44:35.184996] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:47.025 [2024-09-28 23:44:35.185007] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:47.025 [2024-09-28 23:44:35.185019] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:47.025 [2024-09-28 23:44:35.185032] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:47.025 [2024-09-28 23:44:35.185043] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:47.025 [2024-09-28 23:44:35.185051] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:47.025 [2024-09-28 23:44:35.185058] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:47.025 [2024-09-28 23:44:35.185066] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:47.025 [2024-09-28 23:44:35.185074] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:47.025 [2024-09-28 23:44:35.185082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.025 [2024-09-28 23:44:35.185090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:47.025 [2024-09-28 23:44:35.185099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.270 ms 00:23:47.025 [2024-09-28 23:44:35.185106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.025 [2024-09-28 23:44:35.185188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.025 [2024-09-28 23:44:35.185199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:47.025 [2024-09-28 23:44:35.185207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:23:47.025 [2024-09-28 23:44:35.185214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.025 [2024-09-28 23:44:35.185319] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:47.025 [2024-09-28 23:44:35.185330] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:47.025 [2024-09-28 23:44:35.185339] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:47.025 [2024-09-28 23:44:35.185348] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:47.025 [2024-09-28 23:44:35.185356] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:47.025 [2024-09-28 23:44:35.185363] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:47.025 [2024-09-28 23:44:35.185370] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:47.025 [2024-09-28 23:44:35.185377] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:47.025 [2024-09-28 23:44:35.185386] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:47.025 [2024-09-28 23:44:35.185394] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:47.025 [2024-09-28 23:44:35.185401] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:47.025 [2024-09-28 23:44:35.185407] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:47.025 [2024-09-28 23:44:35.185414] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:47.025 [2024-09-28 23:44:35.185428] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:47.025 [2024-09-28 23:44:35.185436] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:47.025 [2024-09-28 23:44:35.185443] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:47.025 [2024-09-28 23:44:35.185450] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:47.025 [2024-09-28 23:44:35.185457] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:47.026 [2024-09-28 23:44:35.185464] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:47.026 [2024-09-28 23:44:35.185471] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:47.026 [2024-09-28 23:44:35.185478] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:47.026 [2024-09-28 23:44:35.185484] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:47.026 [2024-09-28 23:44:35.185491] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:47.026 [2024-09-28 23:44:35.185499] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:47.026 [2024-09-28 23:44:35.185526] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:47.026 [2024-09-28 23:44:35.185533] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:47.026 [2024-09-28 23:44:35.185540] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:47.026 [2024-09-28 23:44:35.185547] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:47.026 [2024-09-28 23:44:35.185554] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:47.026 [2024-09-28 23:44:35.185562] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:47.026 [2024-09-28 23:44:35.185569] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:47.026 [2024-09-28 23:44:35.185577] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:47.026 [2024-09-28 23:44:35.185583] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:47.026 [2024-09-28 23:44:35.185590] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:47.026 [2024-09-28 23:44:35.185597] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:47.026 [2024-09-28 23:44:35.185603] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:47.026 [2024-09-28 23:44:35.185610] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:47.026 [2024-09-28 23:44:35.185617] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:47.026 [2024-09-28 23:44:35.185624] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:47.026 [2024-09-28 23:44:35.185631] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:47.026 [2024-09-28 23:44:35.185639] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:47.026 [2024-09-28 23:44:35.185646] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:47.026 [2024-09-28 23:44:35.185652] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:47.026 [2024-09-28 23:44:35.185660] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:47.026 [2024-09-28 23:44:35.185668] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:47.026 [2024-09-28 23:44:35.185679] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:47.026 [2024-09-28 23:44:35.185689] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:47.026 [2024-09-28 23:44:35.185697] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:47.026 [2024-09-28 23:44:35.185704] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:47.026 [2024-09-28 23:44:35.185711] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:47.026 
[2024-09-28 23:44:35.185718] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:47.026 [2024-09-28 23:44:35.185725] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:47.026 [2024-09-28 23:44:35.185732] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:47.026 [2024-09-28 23:44:35.185741] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:47.026 [2024-09-28 23:44:35.185750] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:47.026 [2024-09-28 23:44:35.185758] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:47.026 [2024-09-28 23:44:35.185766] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:47.026 [2024-09-28 23:44:35.185773] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:47.026 [2024-09-28 23:44:35.185781] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:47.026 [2024-09-28 23:44:35.185788] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:47.026 [2024-09-28 23:44:35.185795] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:47.026 [2024-09-28 23:44:35.185803] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:47.026 [2024-09-28 23:44:35.185810] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:47.026 [2024-09-28 23:44:35.185817] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:47.026 [2024-09-28 23:44:35.185825] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:47.026 [2024-09-28 23:44:35.185832] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:47.026 [2024-09-28 23:44:35.185839] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:47.026 [2024-09-28 23:44:35.185846] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:47.026 [2024-09-28 23:44:35.185854] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:47.026 [2024-09-28 23:44:35.185861] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:47.026 [2024-09-28 23:44:35.185869] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:47.026 [2024-09-28 23:44:35.185878] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:23:47.026 [2024-09-28 23:44:35.185886] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:47.026 [2024-09-28 23:44:35.185893] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:47.026 [2024-09-28 23:44:35.185901] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:47.026 [2024-09-28 23:44:35.185908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.026 [2024-09-28 23:44:35.185915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:47.026 [2024-09-28 23:44:35.185925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.657 ms 00:23:47.026 [2024-09-28 23:44:35.185933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.287 [2024-09-28 23:44:35.227532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.287 [2024-09-28 23:44:35.227724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:47.287 [2024-09-28 23:44:35.227791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.550 ms 00:23:47.287 [2024-09-28 23:44:35.227817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.287 [2024-09-28 23:44:35.227937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.287 [2024-09-28 23:44:35.227962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:47.287 [2024-09-28 23:44:35.227982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:23:47.287 [2024-09-28 23:44:35.228001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.287 [2024-09-28 23:44:35.262729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.287 [2024-09-28 23:44:35.262897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:47.287 [2024-09-28 23:44:35.263060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.652 ms 00:23:47.287 [2024-09-28 23:44:35.263090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.287 [2024-09-28 23:44:35.263144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.287 [2024-09-28 23:44:35.263166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:47.287 [2024-09-28 23:44:35.263186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:47.287 [2024-09-28 23:44:35.263206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.287 [2024-09-28 23:44:35.263814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.287 [2024-09-28 23:44:35.263973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:47.287 [2024-09-28 23:44:35.264032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.524 ms 00:23:47.287 [2024-09-28 23:44:35.264063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.287 [2024-09-28 23:44:35.264232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.287 [2024-09-28 23:44:35.264257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:47.287 [2024-09-28 23:44:35.264278] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.130 ms 00:23:47.287 [2024-09-28 23:44:35.264297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.287 [2024-09-28 23:44:35.279187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.287 [2024-09-28 23:44:35.279333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:47.287 [2024-09-28 23:44:35.279387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.859 ms 00:23:47.288 [2024-09-28 23:44:35.279412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.288 [2024-09-28 23:44:35.293674] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:23:47.288 [2024-09-28 23:44:35.293859] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:47.288 [2024-09-28 23:44:35.293926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.288 [2024-09-28 23:44:35.293947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:47.288 [2024-09-28 23:44:35.293967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.362 ms 00:23:47.288 [2024-09-28 23:44:35.293985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.288 [2024-09-28 23:44:35.319362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.288 [2024-09-28 23:44:35.319542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:47.288 [2024-09-28 23:44:35.319605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.323 ms 00:23:47.288 [2024-09-28 23:44:35.319630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.288 [2024-09-28 23:44:35.331904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.288 [2024-09-28 23:44:35.332060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:47.288 [2024-09-28 23:44:35.332114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.220 ms 00:23:47.288 [2024-09-28 23:44:35.332138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.288 [2024-09-28 23:44:35.345313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.288 [2024-09-28 23:44:35.345501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:47.288 [2024-09-28 23:44:35.345582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.710 ms 00:23:47.288 [2024-09-28 23:44:35.345606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.288 [2024-09-28 23:44:35.346360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.288 [2024-09-28 23:44:35.346499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:47.288 [2024-09-28 23:44:35.346584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.538 ms 00:23:47.288 [2024-09-28 23:44:35.346652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.288 [2024-09-28 23:44:35.412580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.288 [2024-09-28 23:44:35.412783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:47.288 [2024-09-28 23:44:35.412845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 65.886 ms 00:23:47.288 [2024-09-28 23:44:35.412872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.288 [2024-09-28 23:44:35.424172] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:47.288 [2024-09-28 23:44:35.427591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.288 [2024-09-28 23:44:35.427735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:47.288 [2024-09-28 23:44:35.427792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.662 ms 00:23:47.288 [2024-09-28 23:44:35.427816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.288 [2024-09-28 23:44:35.427924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.288 [2024-09-28 23:44:35.427953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:47.288 [2024-09-28 23:44:35.427974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:23:47.288 [2024-09-28 23:44:35.427994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.288 [2024-09-28 23:44:35.428154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.288 [2024-09-28 23:44:35.428187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:47.288 [2024-09-28 23:44:35.428210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:23:47.288 [2024-09-28 23:44:35.428229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.288 [2024-09-28 23:44:35.428266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.288 [2024-09-28 23:44:35.428295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:47.288 [2024-09-28 23:44:35.428361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:47.288 [2024-09-28 23:44:35.428387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.288 [2024-09-28 23:44:35.428439] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:47.288 [2024-09-28 23:44:35.428568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.288 [2024-09-28 23:44:35.429148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:47.288 [2024-09-28 23:44:35.429208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.130 ms 00:23:47.288 [2024-09-28 23:44:35.429241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.549 [2024-09-28 23:44:35.455273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.549 [2024-09-28 23:44:35.455463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:47.549 [2024-09-28 23:44:35.455547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.965 ms 00:23:47.549 [2024-09-28 23:44:35.455563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.549 [2024-09-28 23:44:35.455744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.549 [2024-09-28 23:44:35.455770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:47.549 [2024-09-28 23:44:35.455782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:23:47.549 [2024-09-28 23:44:35.455791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
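
Summing the dominant step durations from the trace above (values copied from the log; sub-millisecond steps omitted) accounts for nearly all of the 'FTL startup' total reported just below; this cold start is dominated by metadata and NV-cache initialization plus the 65.886 ms P2L checkpoint restore:

    awk 'BEGIN {
      split("14.297 7.892 4.238 41.550 34.652 14.859 14.362 25.323 12.220 12.710 65.886 14.662 25.965", d)
      for (i in d) s += d[i]
      printf "%.1f ms of the 302.301 ms total\n", s   # -> 288.6 ms
    }'

The sizes behind those steps match the layout dumped earlier in this startup: the superblock's hex region map (blk_offs/blk_sz) tiles with no gaps, its 0x5000-block region is exactly the 80.00 MiB l2p region (20971520 entries at 4 bytes each), and each of the four 8.00 MiB p2l regions holds the 2048 checkpoint pages. The 'l2p maximum resident size is: 9 (of 10) MiB' notice above then means the 80 MiB mapping table is paged through a 10 MiB cache rather than held resident. A spot-check, assuming the 4 KiB block size the MiB figures imply:

    echo $(( 0x20 + 0x5000 ))                  # 20512 = 0x5020, the next region's blk_offs
    echo $(( 0x5020 + 0x80 ))                  # 20640 = 0x50a0, the next region's blk_offs
    echo $(( 0x5000 * 4096 / 1024 / 1024 ))    # 80 MiB: the l2p region size
    echo $(( 20971520 * 4 / 1024 / 1024 ))     # 80 MiB again: L2P entries x address size
    echo $(( 2048 * 4096 / 1024 / 1024 ))      # 8 MiB: one p2l region (2048 checkpoint pages)
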
00:23:47.549 [2024-09-28 23:44:35.457067] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 302.301 ms, result 0 00:24:42.385  Copying: 16/1024 [MB] (16 MBps) Copying: 34/1024 [MB] (17 MBps) Copying: 77/1024 [MB] (43 MBps) Copying: 109/1024 [MB] (32 MBps) Copying: 127/1024 [MB] (17 MBps) Copying: 144/1024 [MB] (17 MBps) Copying: 164/1024 [MB] (20 MBps) Copying: 184/1024 [MB] (20 MBps) Copying: 196/1024 [MB] (12 MBps) Copying: 213/1024 [MB] (16 MBps) Copying: 227/1024 [MB] (14 MBps) Copying: 247/1024 [MB] (20 MBps) Copying: 259/1024 [MB] (11 MBps) Copying: 270/1024 [MB] (10 MBps) Copying: 280/1024 [MB] (10 MBps) Copying: 290/1024 [MB] (10 MBps) Copying: 304/1024 [MB] (14 MBps) Copying: 316/1024 [MB] (11 MBps) Copying: 337/1024 [MB] (21 MBps) Copying: 352/1024 [MB] (14 MBps) Copying: 372/1024 [MB] (19 MBps) Copying: 392/1024 [MB] (20 MBps) Copying: 411/1024 [MB] (18 MBps) Copying: 432/1024 [MB] (20 MBps) Copying: 451/1024 [MB] (19 MBps) Copying: 467/1024 [MB] (15 MBps) Copying: 480/1024 [MB] (13 MBps) Copying: 496/1024 [MB] (15 MBps) Copying: 509/1024 [MB] (12 MBps) Copying: 527/1024 [MB] (18 MBps) Copying: 541/1024 [MB] (14 MBps) Copying: 562/1024 [MB] (21 MBps) Copying: 579/1024 [MB] (17 MBps) Copying: 591/1024 [MB] (11 MBps) Copying: 618/1024 [MB] (27 MBps) Copying: 644/1024 [MB] (26 MBps) Copying: 664/1024 [MB] (19 MBps) Copying: 675/1024 [MB] (10 MBps) Copying: 698/1024 [MB] (23 MBps) Copying: 731/1024 [MB] (32 MBps) Copying: 760/1024 [MB] (29 MBps) Copying: 779/1024 [MB] (19 MBps) Copying: 798/1024 [MB] (18 MBps) Copying: 816/1024 [MB] (18 MBps) Copying: 830/1024 [MB] (13 MBps) Copying: 842/1024 [MB] (11 MBps) Copying: 857/1024 [MB] (14 MBps) Copying: 874/1024 [MB] (17 MBps) Copying: 891/1024 [MB] (16 MBps) Copying: 906/1024 [MB] (14 MBps) Copying: 930/1024 [MB] (24 MBps) Copying: 949/1024 [MB] (18 MBps) Copying: 960/1024 [MB] (11 MBps) Copying: 981/1024 [MB] (20 MBps) Copying: 1021/1024 [MB] (40 MBps) Copying: 1024/1024 [MB] (average 18 MBps)[2024-09-28 23:45:30.522258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:42.385 [2024-09-28 23:45:30.522298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:42.385 [2024-09-28 23:45:30.522310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:24:42.385 [2024-09-28 23:45:30.522316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.385 [2024-09-28 23:45:30.522335] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:42.385 [2024-09-28 23:45:30.524513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:42.385 [2024-09-28 23:45:30.524540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:42.385 [2024-09-28 23:45:30.524548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.160 ms 00:24:42.385 [2024-09-28 23:45:30.524555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.385 [2024-09-28 23:45:30.525810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:42.385 [2024-09-28 23:45:30.525918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:42.385 [2024-09-28 23:45:30.525931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.239 ms 00:24:42.385 [2024-09-28 23:45:30.525938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.385 
[2024-09-28 23:45:30.525964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:42.385 [2024-09-28 23:45:30.525970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:24:42.385 [2024-09-28 23:45:30.525977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:24:42.385 [2024-09-28 23:45:30.525987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.385 [2024-09-28 23:45:30.526025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:42.385 [2024-09-28 23:45:30.526032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:24:42.385 [2024-09-28 23:45:30.526038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:24:42.385 [2024-09-28 23:45:30.526044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.385 [2024-09-28 23:45:30.526054] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:42.385 [2024-09-28 23:45:30.526063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:24:42.385 [2024-09-28 23:45:30.526072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:42.386 [2024-09-28 23:45:30.526078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:42.386 [2024-09-28 23:45:30.526084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:42.386 [2024-09-28 23:45:30.526090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:42.386 [2024-09-28 23:45:30.526096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:42.386 [2024-09-28 23:45:30.526101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:42.386 [2024-09-28 23:45:30.526107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:42.386 [2024-09-28 23:45:30.526113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:42.386 [2024-09-28 23:45:30.526119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:42.386 [2024-09-28 23:45:30.526125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:42.386 [2024-09-28 23:45:30.526131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:42.386 [2024-09-28 23:45:30.526137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:42.386 [2024-09-28 23:45:30.526142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:42.386 [2024-09-28 23:45:30.526148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:42.386 [2024-09-28 23:45:30.526155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:42.386 [2024-09-28 23:45:30.526160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:42.386 [2024-09-28 23:45:30.526166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 
261120 wr_cnt: 0 state: free 00:24:42.386 [2024-09-28 23:45:30.526172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:42.386 [2024-09-28 23:45:30.526177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:42.386 [2024-09-28 23:45:30.526183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:42.386 [2024-09-28 23:45:30.526190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:42.386 [2024-09-28 23:45:30.526195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:42.386 [2024-09-28 23:45:30.526201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:42.386 [2024-09-28 23:45:30.526206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:42.386 [2024-09-28 23:45:30.526212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:42.386 [2024-09-28 23:45:30.526218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:42.386 [2024-09-28 23:45:30.526224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:42.386 [2024-09-28 23:45:30.526229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:42.386 [2024-09-28 23:45:30.526235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:42.386 [2024-09-28 23:45:30.526241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:42.386 [2024-09-28 23:45:30.526247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:42.386 [2024-09-28 23:45:30.526252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:42.386 [2024-09-28 23:45:30.526258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:42.386 [2024-09-28 23:45:30.526264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:42.386 [2024-09-28 23:45:30.526275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:42.386 [2024-09-28 23:45:30.526280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:42.386 [2024-09-28 23:45:30.526286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:42.386 [2024-09-28 23:45:30.526291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:42.386 [2024-09-28 23:45:30.526297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:42.386 [2024-09-28 23:45:30.526303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:42.386 [2024-09-28 23:45:30.526309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:42.386 [2024-09-28 23:45:30.526314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:42.386 [2024-09-28 23:45:30.526320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:42.386 [2024-09-28 23:45:30.526325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:42.386 [2024-09-28 23:45:30.526331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:42.386 [2024-09-28 23:45:30.526337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:42.386 [2024-09-28 23:45:30.526343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:42.386 [2024-09-28 23:45:30.526348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:42.386 [2024-09-28 23:45:30.526354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:42.386 [2024-09-28 23:45:30.526361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:42.386 [2024-09-28 23:45:30.526367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:42.386 [2024-09-28 23:45:30.526373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:42.386 [2024-09-28 23:45:30.526378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:42.386 [2024-09-28 23:45:30.526384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:42.386 [2024-09-28 23:45:30.526389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:42.386 [2024-09-28 23:45:30.526395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:42.386 [2024-09-28 23:45:30.526401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:42.386 [2024-09-28 23:45:30.526407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:42.386 [2024-09-28 23:45:30.526413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:42.386 [2024-09-28 23:45:30.526419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:42.386 [2024-09-28 23:45:30.526424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:42.386 [2024-09-28 23:45:30.526430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:42.386 [2024-09-28 23:45:30.526436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:42.386 [2024-09-28 23:45:30.526442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:42.386 [2024-09-28 23:45:30.526447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:42.386 [2024-09-28 23:45:30.526453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:42.386 [2024-09-28 23:45:30.526458] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:42.386 [2024-09-28 23:45:30.526464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:42.386 [2024-09-28 23:45:30.526470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:42.386 [2024-09-28 23:45:30.526475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:42.386 [2024-09-28 23:45:30.526481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:42.386 [2024-09-28 23:45:30.526486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:42.386 [2024-09-28 23:45:30.526492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:42.386 [2024-09-28 23:45:30.526498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:42.386 [2024-09-28 23:45:30.526503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:42.386 [2024-09-28 23:45:30.526519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:42.386 [2024-09-28 23:45:30.526525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:42.386 [2024-09-28 23:45:30.526531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:42.386 [2024-09-28 23:45:30.526537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:42.386 [2024-09-28 23:45:30.526543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:42.386 [2024-09-28 23:45:30.526549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:42.386 [2024-09-28 23:45:30.526556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:42.386 [2024-09-28 23:45:30.526562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:42.386 [2024-09-28 23:45:30.526567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:42.386 [2024-09-28 23:45:30.526573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:42.386 [2024-09-28 23:45:30.526579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:42.386 [2024-09-28 23:45:30.526584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:42.386 [2024-09-28 23:45:30.526590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:42.386 [2024-09-28 23:45:30.526596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:42.386 [2024-09-28 23:45:30.526602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:42.387 [2024-09-28 23:45:30.526611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:42.387 [2024-09-28 
23:45:30.526617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:42.387 [2024-09-28 23:45:30.526622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:42.387 [2024-09-28 23:45:30.526628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:42.387 [2024-09-28 23:45:30.526634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:42.387 [2024-09-28 23:45:30.526640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:42.387 [2024-09-28 23:45:30.526646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:42.387 [2024-09-28 23:45:30.526651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:42.387 [2024-09-28 23:45:30.526657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:42.387 [2024-09-28 23:45:30.526669] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:42.387 [2024-09-28 23:45:30.526675] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a13dbf6d-6874-41aa-9963-ba6a63aee207 00:24:42.387 [2024-09-28 23:45:30.526681] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:24:42.387 [2024-09-28 23:45:30.526687] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:24:42.387 [2024-09-28 23:45:30.526692] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:42.387 [2024-09-28 23:45:30.526698] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:42.387 [2024-09-28 23:45:30.526703] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:42.387 [2024-09-28 23:45:30.526709] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:42.387 [2024-09-28 23:45:30.526714] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:42.387 [2024-09-28 23:45:30.526719] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:42.387 [2024-09-28 23:45:30.526726] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:42.387 [2024-09-28 23:45:30.526731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:42.387 [2024-09-28 23:45:30.526737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:42.387 [2024-09-28 23:45:30.526743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.678 ms 00:24:42.387 [2024-09-28 23:45:30.526750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.648 [2024-09-28 23:45:30.536266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:42.648 [2024-09-28 23:45:30.536292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:42.648 [2024-09-28 23:45:30.536300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.506 ms 00:24:42.648 [2024-09-28 23:45:30.536306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.648 [2024-09-28 23:45:30.536587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:42.648 [2024-09-28 23:45:30.536595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:42.648 [2024-09-28 23:45:30.536606] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.269 ms 00:24:42.648 [2024-09-28 23:45:30.536611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.648 [2024-09-28 23:45:30.558592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:42.648 [2024-09-28 23:45:30.558618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:42.648 [2024-09-28 23:45:30.558625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:42.648 [2024-09-28 23:45:30.558631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.648 [2024-09-28 23:45:30.558672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:42.648 [2024-09-28 23:45:30.558679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:42.648 [2024-09-28 23:45:30.558687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:42.648 [2024-09-28 23:45:30.558693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.648 [2024-09-28 23:45:30.558726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:42.648 [2024-09-28 23:45:30.558733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:42.648 [2024-09-28 23:45:30.558738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:42.648 [2024-09-28 23:45:30.558746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.648 [2024-09-28 23:45:30.558757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:42.648 [2024-09-28 23:45:30.558763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:42.648 [2024-09-28 23:45:30.558769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:42.648 [2024-09-28 23:45:30.558777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.648 [2024-09-28 23:45:30.624633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:42.648 [2024-09-28 23:45:30.624670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:42.648 [2024-09-28 23:45:30.624680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:42.648 [2024-09-28 23:45:30.624686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.648 [2024-09-28 23:45:30.672632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:42.648 [2024-09-28 23:45:30.672663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:42.648 [2024-09-28 23:45:30.672672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:42.648 [2024-09-28 23:45:30.672682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.648 [2024-09-28 23:45:30.672733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:42.648 [2024-09-28 23:45:30.672741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:42.648 [2024-09-28 23:45:30.672748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:42.648 [2024-09-28 23:45:30.672753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.648 [2024-09-28 23:45:30.672779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:42.648 [2024-09-28 23:45:30.672786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize bands 00:24:42.648 [2024-09-28 23:45:30.672792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:42.648 [2024-09-28 23:45:30.672798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.648 [2024-09-28 23:45:30.672854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:42.648 [2024-09-28 23:45:30.672860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:42.648 [2024-09-28 23:45:30.672866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:42.648 [2024-09-28 23:45:30.672872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.648 [2024-09-28 23:45:30.672890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:42.648 [2024-09-28 23:45:30.672896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:42.648 [2024-09-28 23:45:30.672903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:42.648 [2024-09-28 23:45:30.672909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.648 [2024-09-28 23:45:30.672941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:42.648 [2024-09-28 23:45:30.672947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:42.648 [2024-09-28 23:45:30.672953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:42.648 [2024-09-28 23:45:30.672959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.648 [2024-09-28 23:45:30.672990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:42.648 [2024-09-28 23:45:30.672997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:42.648 [2024-09-28 23:45:30.673003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:42.648 [2024-09-28 23:45:30.673008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.648 [2024-09-28 23:45:30.673096] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 150.818 ms, result 0 00:24:44.030 00:24:44.030 00:24:44.031 23:45:31 ftl.ftl_restore_fast -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:24:44.031 [2024-09-28 23:45:31.992266] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
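
The long Copying progress run earlier averaged 18 MBps through the FTL write path, which squares with the wall clock: roughly 1024 MiB between the copy start (23:44:35) and the shutdown trace (23:45:30), about 55 seconds:

    awk 'BEGIN { printf "%.1f MBps\n", 1024 / 55 }'   # ~18.6, matching the logged average

The spdk_dd just launched runs the opposite direction (--ib=ftl0 --of=testfile --count=262144), reading the blocks back after the fast shutdown/restore cycle. The overall integrity pattern, sketched below with paths and flags taken from the log (the final md5 comparison is an assumption about what restore.sh checks next, not something visible here):

    SPDK=/home/vagrant/spdk_repo/spdk
    DD=$SPDK/build/bin/spdk_dd
    F=$SPDK/test/ftl/testfile
    CFG=$SPDK/test/ftl/config/ftl.json
    before=$(md5sum "$F" | awk '{print $1}')
    "$DD" --if="$F" --ob=ftl0 --json="$CFG"                  # write 1 GiB into the FTL bdev
    # ... 'FTL fast shutdown' and fast restore happen here in the real test ...
    "$DD" --ib=ftl0 --of="$F" --json="$CFG" --count=262144   # read the 262144 blocks back
    after=$(md5sum "$F" | awk '{print $1}')
    [ "$before" = "$after" ] && echo "restore OK" || echo "data mismatch"
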
00:24:44.031 [2024-09-28 23:45:31.992385] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79604 ] 00:24:44.031 [2024-09-28 23:45:32.141803] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:44.289 [2024-09-28 23:45:32.286923] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:24:44.550 [2024-09-28 23:45:32.496161] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:44.550 [2024-09-28 23:45:32.496214] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:44.550 [2024-09-28 23:45:32.643769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.550 [2024-09-28 23:45:32.643806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:44.550 [2024-09-28 23:45:32.643816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:44.550 [2024-09-28 23:45:32.643827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.550 [2024-09-28 23:45:32.643859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.550 [2024-09-28 23:45:32.643867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:44.550 [2024-09-28 23:45:32.643874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:24:44.550 [2024-09-28 23:45:32.643879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.550 [2024-09-28 23:45:32.643892] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:44.550 [2024-09-28 23:45:32.644388] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:44.550 [2024-09-28 23:45:32.644399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.550 [2024-09-28 23:45:32.644405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:44.550 [2024-09-28 23:45:32.644411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.511 ms 00:24:44.550 [2024-09-28 23:45:32.644416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.550 [2024-09-28 23:45:32.644628] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:24:44.550 [2024-09-28 23:45:32.644646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.550 [2024-09-28 23:45:32.644652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:44.550 [2024-09-28 23:45:32.644659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:24:44.550 [2024-09-28 23:45:32.644664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.550 [2024-09-28 23:45:32.644716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.550 [2024-09-28 23:45:32.644724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:44.550 [2024-09-28 23:45:32.644730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:24:44.550 [2024-09-28 23:45:32.644736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.550 [2024-09-28 23:45:32.644934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
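
Unlike the first bring-up, which found no reusable state ('SHM: clean 0, shm_clean 0') and paid for a full 302.301 ms startup, this restart after the 'FTL fast shutdown' finds clean shared-memory state ('SHM: clean 1, shm_clean 1') and can take the fast restore path; 'Load super block' alone drops from 14.297 ms to 0.019 ms. Again assuming this console log has been saved as ftl.log:

    grep -o 'SHM: clean [01], shm_clean [01]' ftl.log
    # first cold start      -> SHM: clean 0, shm_clean 0   (full recovery path)
    # after 'fast shutdown' -> SHM: clean 1, shm_clean 1   (fast restore from shared memory)
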
00:24:44.550 [2024-09-28 23:45:32.644947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:44.550 [2024-09-28 23:45:32.644954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.173 ms 00:24:44.550 [2024-09-28 23:45:32.644959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.550 [2024-09-28 23:45:32.645008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.550 [2024-09-28 23:45:32.645018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:44.550 [2024-09-28 23:45:32.645026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:24:44.550 [2024-09-28 23:45:32.645031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.550 [2024-09-28 23:45:32.645046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.550 [2024-09-28 23:45:32.645052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:44.550 [2024-09-28 23:45:32.645058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:44.550 [2024-09-28 23:45:32.645063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.550 [2024-09-28 23:45:32.645075] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:44.550 [2024-09-28 23:45:32.647904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.550 [2024-09-28 23:45:32.647930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:44.550 [2024-09-28 23:45:32.647937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.831 ms 00:24:44.550 [2024-09-28 23:45:32.647942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.550 [2024-09-28 23:45:32.647967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.550 [2024-09-28 23:45:32.647973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:44.550 [2024-09-28 23:45:32.647982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:24:44.550 [2024-09-28 23:45:32.647987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.550 [2024-09-28 23:45:32.648019] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:44.550 [2024-09-28 23:45:32.648035] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:44.550 [2024-09-28 23:45:32.648061] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:44.550 [2024-09-28 23:45:32.648072] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:44.550 [2024-09-28 23:45:32.648151] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:44.550 [2024-09-28 23:45:32.648161] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:44.550 [2024-09-28 23:45:32.648169] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:44.550 [2024-09-28 23:45:32.648176] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:44.550 [2024-09-28 23:45:32.648183] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:44.550 [2024-09-28 23:45:32.648189] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:44.550 [2024-09-28 23:45:32.648195] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:44.550 [2024-09-28 23:45:32.648201] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:44.550 [2024-09-28 23:45:32.648206] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:44.550 [2024-09-28 23:45:32.648212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.550 [2024-09-28 23:45:32.648217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:44.550 [2024-09-28 23:45:32.648223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.196 ms 00:24:44.550 [2024-09-28 23:45:32.648230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.550 [2024-09-28 23:45:32.648295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.550 [2024-09-28 23:45:32.648301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:44.550 [2024-09-28 23:45:32.648307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:24:44.550 [2024-09-28 23:45:32.648312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.550 [2024-09-28 23:45:32.648386] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:44.550 [2024-09-28 23:45:32.648394] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:44.550 [2024-09-28 23:45:32.648401] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:44.550 [2024-09-28 23:45:32.648406] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:44.550 [2024-09-28 23:45:32.648413] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:44.550 [2024-09-28 23:45:32.648419] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:44.550 [2024-09-28 23:45:32.648424] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:44.550 [2024-09-28 23:45:32.648430] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:44.550 [2024-09-28 23:45:32.648436] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:44.550 [2024-09-28 23:45:32.648441] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:44.550 [2024-09-28 23:45:32.648446] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:44.550 [2024-09-28 23:45:32.648451] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:44.550 [2024-09-28 23:45:32.648458] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:44.550 [2024-09-28 23:45:32.648463] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:44.550 [2024-09-28 23:45:32.648468] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:44.550 [2024-09-28 23:45:32.648477] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:44.550 [2024-09-28 23:45:32.648482] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:44.550 [2024-09-28 23:45:32.648488] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:44.550 [2024-09-28 23:45:32.648493] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:44.550 [2024-09-28 23:45:32.648498] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:44.550 [2024-09-28 23:45:32.648502] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:44.550 [2024-09-28 23:45:32.648525] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:44.551 [2024-09-28 23:45:32.648530] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:44.551 [2024-09-28 23:45:32.648536] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:44.551 [2024-09-28 23:45:32.648541] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:44.551 [2024-09-28 23:45:32.648546] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:44.551 [2024-09-28 23:45:32.648551] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:44.551 [2024-09-28 23:45:32.648555] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:44.551 [2024-09-28 23:45:32.648560] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:44.551 [2024-09-28 23:45:32.648565] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:44.551 [2024-09-28 23:45:32.648570] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:44.551 [2024-09-28 23:45:32.648575] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:44.551 [2024-09-28 23:45:32.648580] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:44.551 [2024-09-28 23:45:32.648585] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:44.551 [2024-09-28 23:45:32.648590] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:44.551 [2024-09-28 23:45:32.648595] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:44.551 [2024-09-28 23:45:32.648601] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:44.551 [2024-09-28 23:45:32.648605] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:44.551 [2024-09-28 23:45:32.648611] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:44.551 [2024-09-28 23:45:32.648615] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:44.551 [2024-09-28 23:45:32.648620] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:44.551 [2024-09-28 23:45:32.648625] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:44.551 [2024-09-28 23:45:32.648631] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:44.551 [2024-09-28 23:45:32.648636] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:44.551 [2024-09-28 23:45:32.648642] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:44.551 [2024-09-28 23:45:32.648648] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:44.551 [2024-09-28 23:45:32.648653] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:44.551 [2024-09-28 23:45:32.648659] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:44.551 [2024-09-28 23:45:32.648664] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:44.551 [2024-09-28 23:45:32.648669] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:44.551 
[2024-09-28 23:45:32.648675] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:44.551 [2024-09-28 23:45:32.648680] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:44.551 [2024-09-28 23:45:32.648685] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:44.551 [2024-09-28 23:45:32.648691] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:44.551 [2024-09-28 23:45:32.648698] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:44.551 [2024-09-28 23:45:32.648704] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:44.551 [2024-09-28 23:45:32.648710] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:44.551 [2024-09-28 23:45:32.648715] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:44.551 [2024-09-28 23:45:32.648720] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:44.551 [2024-09-28 23:45:32.648726] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:44.551 [2024-09-28 23:45:32.648731] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:44.551 [2024-09-28 23:45:32.648736] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:44.551 [2024-09-28 23:45:32.648741] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:44.551 [2024-09-28 23:45:32.648746] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:44.551 [2024-09-28 23:45:32.648752] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:44.551 [2024-09-28 23:45:32.648757] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:44.551 [2024-09-28 23:45:32.648762] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:44.551 [2024-09-28 23:45:32.648767] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:44.551 [2024-09-28 23:45:32.648773] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:44.551 [2024-09-28 23:45:32.648778] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:44.551 [2024-09-28 23:45:32.648784] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:44.551 [2024-09-28 23:45:32.648791] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:24:44.551 [2024-09-28 23:45:32.648796] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:44.551 [2024-09-28 23:45:32.648802] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:44.551 [2024-09-28 23:45:32.648807] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:44.551 [2024-09-28 23:45:32.648813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.551 [2024-09-28 23:45:32.648818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:44.551 [2024-09-28 23:45:32.648826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.479 ms 00:24:44.551 [2024-09-28 23:45:32.648831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.551 [2024-09-28 23:45:32.680854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.551 [2024-09-28 23:45:32.681007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:44.551 [2024-09-28 23:45:32.681079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.991 ms 00:24:44.551 [2024-09-28 23:45:32.681107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.551 [2024-09-28 23:45:32.681225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.551 [2024-09-28 23:45:32.681252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:44.551 [2024-09-28 23:45:32.681281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:24:44.551 [2024-09-28 23:45:32.681303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.551 [2024-09-28 23:45:32.706087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.551 [2024-09-28 23:45:32.706181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:44.551 [2024-09-28 23:45:32.706220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.658 ms 00:24:44.551 [2024-09-28 23:45:32.706238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.551 [2024-09-28 23:45:32.706272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.551 [2024-09-28 23:45:32.706289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:44.551 [2024-09-28 23:45:32.706304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:24:44.551 [2024-09-28 23:45:32.706318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.551 [2024-09-28 23:45:32.706394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.551 [2024-09-28 23:45:32.706419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:44.551 [2024-09-28 23:45:32.706435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:24:44.551 [2024-09-28 23:45:32.706479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.551 [2024-09-28 23:45:32.706592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.551 [2024-09-28 23:45:32.706614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:44.551 [2024-09-28 23:45:32.706643] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:24:44.551 [2024-09-28 23:45:32.706659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.813 [2024-09-28 23:45:32.716597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.813 [2024-09-28 23:45:32.716687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:44.813 [2024-09-28 23:45:32.716747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.914 ms 00:24:44.813 [2024-09-28 23:45:32.716767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.813 [2024-09-28 23:45:32.716866] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:24:44.813 [2024-09-28 23:45:32.716979] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:44.813 [2024-09-28 23:45:32.717055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.813 [2024-09-28 23:45:32.717072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:44.813 [2024-09-28 23:45:32.717087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.205 ms 00:24:44.813 [2024-09-28 23:45:32.717102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.813 [2024-09-28 23:45:32.726351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.813 [2024-09-28 23:45:32.726432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:44.813 [2024-09-28 23:45:32.726472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.227 ms 00:24:44.813 [2024-09-28 23:45:32.726493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.813 [2024-09-28 23:45:32.726596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.813 [2024-09-28 23:45:32.726619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:44.813 [2024-09-28 23:45:32.726657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:24:44.813 [2024-09-28 23:45:32.726673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.813 [2024-09-28 23:45:32.726708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.813 [2024-09-28 23:45:32.726726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:44.813 [2024-09-28 23:45:32.726741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:24:44.813 [2024-09-28 23:45:32.726755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.813 [2024-09-28 23:45:32.727209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.813 [2024-09-28 23:45:32.727277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:44.813 [2024-09-28 23:45:32.727314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.414 ms 00:24:44.813 [2024-09-28 23:45:32.727330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.813 [2024-09-28 23:45:32.727352] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:24:44.813 [2024-09-28 23:45:32.727376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.813 [2024-09-28 23:45:32.727390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:24:44.813 [2024-09-28 23:45:32.727405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:24:44.813 [2024-09-28 23:45:32.727419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.813 [2024-09-28 23:45:32.736031] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:44.813 [2024-09-28 23:45:32.736195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.813 [2024-09-28 23:45:32.736218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:44.813 [2024-09-28 23:45:32.736270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.716 ms 00:24:44.813 [2024-09-28 23:45:32.736287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.813 [2024-09-28 23:45:32.737929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.813 [2024-09-28 23:45:32.738001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:44.813 [2024-09-28 23:45:32.738041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.616 ms 00:24:44.813 [2024-09-28 23:45:32.738058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.813 [2024-09-28 23:45:32.738127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.813 [2024-09-28 23:45:32.738194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:44.813 [2024-09-28 23:45:32.738290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:24:44.813 [2024-09-28 23:45:32.738314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.813 [2024-09-28 23:45:32.738354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.813 [2024-09-28 23:45:32.738421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:44.813 [2024-09-28 23:45:32.738436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:44.813 [2024-09-28 23:45:32.738473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.813 [2024-09-28 23:45:32.738519] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:44.813 [2024-09-28 23:45:32.738538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.813 [2024-09-28 23:45:32.738552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:44.813 [2024-09-28 23:45:32.738569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:24:44.813 [2024-09-28 23:45:32.738613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.813 [2024-09-28 23:45:32.756950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.813 [2024-09-28 23:45:32.757054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:44.813 [2024-09-28 23:45:32.757095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.309 ms 00:24:44.813 [2024-09-28 23:45:32.757112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.813 [2024-09-28 23:45:32.757172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.813 [2024-09-28 23:45:32.757195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:44.813 [2024-09-28 23:45:32.757210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.028 ms 00:24:44.813 [2024-09-28 23:45:32.757225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.813 [2024-09-28 23:45:32.758170] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 114.080 ms, result 0 00:25:48.071  Copying: 1024/1024 [MB] (average 16 MBps) [2024-09-28 23:46:36.235140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:48.071 [2024-09-28 23:46:36.235228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:48.071 [2024-09-28 23:46:36.235244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:48.071 [2024-09-28 23:46:36.235254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.071 [2024-09-28 23:46:36.235280] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:48.333 [2024-09-28 23:46:36.239231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:48.333 [2024-09-28 23:46:36.239281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:48.333 [2024-09-28 23:46:36.239294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.934 ms 00:25:48.333 [2024-09-28 23:46:36.239302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.333 [2024-09-28 23:46:36.239563] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:48.333 [2024-09-28 23:46:36.239579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:48.333 [2024-09-28 23:46:36.239588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.231 ms 00:25:48.333 [2024-09-28 23:46:36.239597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.333 [2024-09-28 23:46:36.239628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:48.333 [2024-09-28 23:46:36.239638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:25:48.333 [2024-09-28 23:46:36.239647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:48.333 [2024-09-28 23:46:36.239656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.334 [2024-09-28 23:46:36.239719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:48.334 [2024-09-28 23:46:36.239729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:25:48.334 [2024-09-28 23:46:36.239740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:25:48.334 [2024-09-28 23:46:36.239748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.334 [2024-09-28 23:46:36.239762] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:48.334 [2024-09-28 23:46:36.239775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:25:48.334 [2024-09-28 23:46:36.239785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:48.334 [2024-09-28 23:46:36.239793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:48.334 [2024-09-28 23:46:36.239801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:48.334 [2024-09-28 23:46:36.239808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:48.334 [2024-09-28 23:46:36.239816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:48.334 [2024-09-28 23:46:36.239824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:48.334 [2024-09-28 23:46:36.239833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:48.334 [2024-09-28 23:46:36.239841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:48.334 [2024-09-28 23:46:36.239848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:48.334 [2024-09-28 23:46:36.239856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:48.334 [2024-09-28 23:46:36.239863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:48.334 [2024-09-28 23:46:36.239872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:48.334 [2024-09-28 23:46:36.239879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:48.334 [2024-09-28 23:46:36.239887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: 
free 00:25:48.334 [2024-09-28 23:46:36.239894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:48.334 [2024-09-28 23:46:36.239901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:48.334 [2024-09-28 23:46:36.239910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:48.334 [2024-09-28 23:46:36.239917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:48.334 [2024-09-28 23:46:36.239924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:48.334 [2024-09-28 23:46:36.239934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:48.334 [2024-09-28 23:46:36.239942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:48.334 [2024-09-28 23:46:36.239950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:48.334 [2024-09-28 23:46:36.239958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:48.334 [2024-09-28 23:46:36.239965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:48.334 [2024-09-28 23:46:36.239973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:48.334 [2024-09-28 23:46:36.239981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:48.334 [2024-09-28 23:46:36.239989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:48.334 [2024-09-28 23:46:36.239997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:48.334 [2024-09-28 23:46:36.240005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:48.334 [2024-09-28 23:46:36.240012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:48.334 [2024-09-28 23:46:36.240019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:48.334 [2024-09-28 23:46:36.240026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:48.334 [2024-09-28 23:46:36.240034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:48.334 [2024-09-28 23:46:36.240042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:48.334 [2024-09-28 23:46:36.240049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:48.334 [2024-09-28 23:46:36.240057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:48.334 [2024-09-28 23:46:36.240064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:48.334 [2024-09-28 23:46:36.240071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:48.334 [2024-09-28 23:46:36.240078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 
261120 wr_cnt: 0 state: free 00:25:48.334 [2024-09-28 23:46:36.240087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:48.334 [2024-09-28 23:46:36.240094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:48.334 [2024-09-28 23:46:36.240102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:48.334 [2024-09-28 23:46:36.240109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:48.334 [2024-09-28 23:46:36.240116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:48.334 [2024-09-28 23:46:36.240124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:48.334 [2024-09-28 23:46:36.240132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:48.334 [2024-09-28 23:46:36.240141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:48.334 [2024-09-28 23:46:36.240149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:48.334 [2024-09-28 23:46:36.240157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:48.334 [2024-09-28 23:46:36.240164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:48.334 [2024-09-28 23:46:36.240182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:48.334 [2024-09-28 23:46:36.240192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:48.334 [2024-09-28 23:46:36.240200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:48.334 [2024-09-28 23:46:36.240207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:48.334 [2024-09-28 23:46:36.240217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:48.334 [2024-09-28 23:46:36.240225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:48.334 [2024-09-28 23:46:36.240232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:48.334 [2024-09-28 23:46:36.240240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:48.334 [2024-09-28 23:46:36.240248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:48.334 [2024-09-28 23:46:36.240256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:48.334 [2024-09-28 23:46:36.240264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:48.334 [2024-09-28 23:46:36.240272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:48.334 [2024-09-28 23:46:36.240279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:48.334 [2024-09-28 23:46:36.240289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:48.334 [2024-09-28 23:46:36.240297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:48.334 [2024-09-28 23:46:36.240305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:48.334 [2024-09-28 23:46:36.240312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:48.334 [2024-09-28 23:46:36.240320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:48.334 [2024-09-28 23:46:36.240328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:48.334 [2024-09-28 23:46:36.240336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:48.334 [2024-09-28 23:46:36.240344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:48.334 [2024-09-28 23:46:36.240352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:48.334 [2024-09-28 23:46:36.240359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:48.334 [2024-09-28 23:46:36.240367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:48.334 [2024-09-28 23:46:36.240376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:48.334 [2024-09-28 23:46:36.240384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:48.334 [2024-09-28 23:46:36.240391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:48.334 [2024-09-28 23:46:36.240399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:48.334 [2024-09-28 23:46:36.240407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:48.334 [2024-09-28 23:46:36.240414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:48.334 [2024-09-28 23:46:36.240422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:48.334 [2024-09-28 23:46:36.240430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:48.334 [2024-09-28 23:46:36.240437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:48.334 [2024-09-28 23:46:36.240445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:48.334 [2024-09-28 23:46:36.240453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:48.335 [2024-09-28 23:46:36.240461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:48.335 [2024-09-28 23:46:36.240468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:48.335 [2024-09-28 23:46:36.240477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:48.335 [2024-09-28 23:46:36.240484] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:48.335 [2024-09-28 23:46:36.240492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:48.335 [2024-09-28 23:46:36.240501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:48.335 [2024-09-28 23:46:36.240522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:48.335 [2024-09-28 23:46:36.240531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:48.335 [2024-09-28 23:46:36.240538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:48.335 [2024-09-28 23:46:36.240547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:48.335 [2024-09-28 23:46:36.240554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:48.335 [2024-09-28 23:46:36.240563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:48.335 [2024-09-28 23:46:36.240570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:48.335 [2024-09-28 23:46:36.240578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:48.335 [2024-09-28 23:46:36.240593] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:48.335 [2024-09-28 23:46:36.240601] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a13dbf6d-6874-41aa-9963-ba6a63aee207 00:25:48.335 [2024-09-28 23:46:36.240610] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:25:48.335 [2024-09-28 23:46:36.240617] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:25:48.335 [2024-09-28 23:46:36.240645] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:48.335 [2024-09-28 23:46:36.240653] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:48.335 [2024-09-28 23:46:36.240661] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:48.335 [2024-09-28 23:46:36.240696] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:48.335 [2024-09-28 23:46:36.240704] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:48.335 [2024-09-28 23:46:36.240712] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:48.335 [2024-09-28 23:46:36.240719] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:48.335 [2024-09-28 23:46:36.240726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:48.335 [2024-09-28 23:46:36.240736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:48.335 [2024-09-28 23:46:36.240763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.965 ms 00:25:48.335 [2024-09-28 23:46:36.240771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.335 [2024-09-28 23:46:36.256416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:48.335 [2024-09-28 23:46:36.256671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:48.335 [2024-09-28 23:46:36.256695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 15.626 ms 00:25:48.335 [2024-09-28 23:46:36.256712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.335 [2024-09-28 23:46:36.257112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:48.335 [2024-09-28 23:46:36.257122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:48.335 [2024-09-28 23:46:36.257132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.370 ms 00:25:48.335 [2024-09-28 23:46:36.257141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.335 [2024-09-28 23:46:36.290433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:48.335 [2024-09-28 23:46:36.290489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:48.335 [2024-09-28 23:46:36.290503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:48.335 [2024-09-28 23:46:36.290539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.335 [2024-09-28 23:46:36.290622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:48.335 [2024-09-28 23:46:36.290632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:48.335 [2024-09-28 23:46:36.290642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:48.335 [2024-09-28 23:46:36.290651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.335 [2024-09-28 23:46:36.290721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:48.335 [2024-09-28 23:46:36.290732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:48.335 [2024-09-28 23:46:36.290742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:48.335 [2024-09-28 23:46:36.290752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.335 [2024-09-28 23:46:36.290774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:48.335 [2024-09-28 23:46:36.290784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:48.335 [2024-09-28 23:46:36.290794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:48.335 [2024-09-28 23:46:36.290804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.335 [2024-09-28 23:46:36.376260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:48.335 [2024-09-28 23:46:36.376474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:48.335 [2024-09-28 23:46:36.376498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:48.335 [2024-09-28 23:46:36.376549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.335 [2024-09-28 23:46:36.446320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:48.335 [2024-09-28 23:46:36.446382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:48.335 [2024-09-28 23:46:36.446397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:48.335 [2024-09-28 23:46:36.446406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.335 [2024-09-28 23:46:36.446493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:48.335 [2024-09-28 23:46:36.446504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:48.335 
[2024-09-28 23:46:36.446544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:48.335 [2024-09-28 23:46:36.446553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.335 [2024-09-28 23:46:36.446598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:48.335 [2024-09-28 23:46:36.446609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:48.335 [2024-09-28 23:46:36.446618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:48.335 [2024-09-28 23:46:36.446626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.335 [2024-09-28 23:46:36.446705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:48.335 [2024-09-28 23:46:36.446716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:48.335 [2024-09-28 23:46:36.446724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:48.335 [2024-09-28 23:46:36.446733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.335 [2024-09-28 23:46:36.446762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:48.335 [2024-09-28 23:46:36.446774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:48.335 [2024-09-28 23:46:36.446783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:48.335 [2024-09-28 23:46:36.446791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.335 [2024-09-28 23:46:36.446834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:48.335 [2024-09-28 23:46:36.446843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:48.335 [2024-09-28 23:46:36.446851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:48.335 [2024-09-28 23:46:36.446859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.335 [2024-09-28 23:46:36.446906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:48.335 [2024-09-28 23:46:36.446919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:48.335 [2024-09-28 23:46:36.446928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:48.335 [2024-09-28 23:46:36.446936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.335 [2024-09-28 23:46:36.447087] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 211.912 ms, result 0 00:25:49.275 00:25:49.276 00:25:49.276 23:46:37 ftl.ftl_restore_fast -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:51.820 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:25:51.820 23:46:39 ftl.ftl_restore_fast -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:25:51.820 [2024-09-28 23:46:39.703141] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
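
This is the verify-then-resume half of the restore test: the file previously written through ftl0 is checked with md5sum -c against the stored checksum, and a second spdk_dd run reopens the same FTL bdev configuration and continues writing at output block 131072. A minimal sketch of those two commands, using the workspace paths shown in this log but driven from Python rather than the test's ftl/restore.sh:

    import subprocess

    SPDK_DD = "/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd"
    TESTDIR = "/home/vagrant/spdk_repo/spdk/test/ftl"

    # 1. Verify the file written through ftl0 against the stored checksum.
    subprocess.run(["md5sum", "-c", f"{TESTDIR}/testfile.md5"], check=True)

    # 2. Resume writing at output block 131072 (--seek skips blocks on the
    #    output side, following the usual dd convention) through the same
    #    FTL bdev described by ftl.json.
    subprocess.run([SPDK_DD,
                    f"--if={TESTDIR}/testfile",
                    "--ob=ftl0",
                    f"--json={TESTDIR}/config/ftl.json",
                    "--seek=131072"], check=True)

check=True makes each step raise on a non-zero exit status, mirroring how the shell test aborts on the first failing command.
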
00:25:51.820 [2024-09-28 23:46:39.703533] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80289 ] 00:25:51.820 [2024-09-28 23:46:39.859295] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:52.079 [2024-09-28 23:46:40.098973] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:25:52.339 [2024-09-28 23:46:40.387215] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:52.339 [2024-09-28 23:46:40.387300] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:52.600 [2024-09-28 23:46:40.548959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.600 [2024-09-28 23:46:40.549020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:52.600 [2024-09-28 23:46:40.549035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:52.600 [2024-09-28 23:46:40.549048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.600 [2024-09-28 23:46:40.549103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.600 [2024-09-28 23:46:40.549114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:52.600 [2024-09-28 23:46:40.549123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:25:52.600 [2024-09-28 23:46:40.549131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.600 [2024-09-28 23:46:40.549152] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:52.600 [2024-09-28 23:46:40.550000] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:52.601 [2024-09-28 23:46:40.550040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.601 [2024-09-28 23:46:40.550049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:52.601 [2024-09-28 23:46:40.550059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.894 ms 00:25:52.601 [2024-09-28 23:46:40.550068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.601 [2024-09-28 23:46:40.550392] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:25:52.601 [2024-09-28 23:46:40.550428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.601 [2024-09-28 23:46:40.550437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:52.601 [2024-09-28 23:46:40.550447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:25:52.601 [2024-09-28 23:46:40.550454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.601 [2024-09-28 23:46:40.550506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.601 [2024-09-28 23:46:40.550533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:52.601 [2024-09-28 23:46:40.550542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:25:52.601 [2024-09-28 23:46:40.550553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.601 [2024-09-28 23:46:40.550821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
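
The layout dumps in this log report each region twice: dump_region in MiB, and the superblock dump in hexadecimal FTL-block units. Assuming the 4 KiB FTL block size that SPDK uses by default, the two views line up. A small conversion sketch (region_mib is an illustrative helper, not an SPDK API), using the l2p region (type 0x2, blk_offs 0x20, blk_sz 0x5000) as the worked example:

    FTL_BLOCK_SIZE = 4096  # bytes per FTL block (SPDK default, assumed here)
    MiB = 1024 * 1024

    def region_mib(blk_offs, blk_sz):
        """Convert a superblock region given in FTL blocks to the MiB
        view printed by dump_region."""
        return blk_offs * FTL_BLOCK_SIZE / MiB, blk_sz * FTL_BLOCK_SIZE / MiB

    # Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 from the superblock dump:
    off, sz = region_mib(0x20, 0x5000)
    print(f"offset: {off:.2f} MiB  blocks: {sz:.2f} MiB")  # offset: 0.12 MiB  blocks: 80.00 MiB

0x5000 blocks at 4096 bytes each is exactly 80 MiB, matching the "blocks: 80.00 MiB" reported for the l2p region in the dump_region output.
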
00:25:52.601 [2024-09-28 23:46:40.550832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:52.601 [2024-09-28 23:46:40.550840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.233 ms 00:25:52.601 [2024-09-28 23:46:40.550848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.601 [2024-09-28 23:46:40.550916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.601 [2024-09-28 23:46:40.550932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:52.601 [2024-09-28 23:46:40.550943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:25:52.601 [2024-09-28 23:46:40.550950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.601 [2024-09-28 23:46:40.550972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.601 [2024-09-28 23:46:40.550981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:52.601 [2024-09-28 23:46:40.550989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:25:52.601 [2024-09-28 23:46:40.551015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.601 [2024-09-28 23:46:40.551036] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:52.601 [2024-09-28 23:46:40.555306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.601 [2024-09-28 23:46:40.555347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:52.601 [2024-09-28 23:46:40.555357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.275 ms 00:25:52.601 [2024-09-28 23:46:40.555365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.601 [2024-09-28 23:46:40.555399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.601 [2024-09-28 23:46:40.555408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:52.601 [2024-09-28 23:46:40.555419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:25:52.601 [2024-09-28 23:46:40.555426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.601 [2024-09-28 23:46:40.555483] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:52.601 [2024-09-28 23:46:40.555531] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:52.601 [2024-09-28 23:46:40.555569] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:52.601 [2024-09-28 23:46:40.555585] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:52.601 [2024-09-28 23:46:40.555691] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:52.601 [2024-09-28 23:46:40.555704] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:52.601 [2024-09-28 23:46:40.555716] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:52.601 [2024-09-28 23:46:40.555726] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:52.601 [2024-09-28 23:46:40.555735] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:52.601 [2024-09-28 23:46:40.555743] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:52.601 [2024-09-28 23:46:40.555750] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:52.601 [2024-09-28 23:46:40.555758] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:52.601 [2024-09-28 23:46:40.555765] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:52.601 [2024-09-28 23:46:40.555773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.601 [2024-09-28 23:46:40.555781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:52.601 [2024-09-28 23:46:40.555789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.292 ms 00:25:52.601 [2024-09-28 23:46:40.555799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.601 [2024-09-28 23:46:40.555881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.601 [2024-09-28 23:46:40.555889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:52.601 [2024-09-28 23:46:40.555897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:25:52.601 [2024-09-28 23:46:40.555904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.601 [2024-09-28 23:46:40.556007] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:52.601 [2024-09-28 23:46:40.556019] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:52.601 [2024-09-28 23:46:40.556028] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:52.601 [2024-09-28 23:46:40.556036] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:52.601 [2024-09-28 23:46:40.556046] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:52.601 [2024-09-28 23:46:40.556054] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:52.601 [2024-09-28 23:46:40.556061] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:52.601 [2024-09-28 23:46:40.556068] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:52.601 [2024-09-28 23:46:40.556075] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:52.601 [2024-09-28 23:46:40.556083] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:52.601 [2024-09-28 23:46:40.556091] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:52.601 [2024-09-28 23:46:40.556099] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:52.601 [2024-09-28 23:46:40.556107] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:52.601 [2024-09-28 23:46:40.556114] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:52.601 [2024-09-28 23:46:40.556121] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:52.601 [2024-09-28 23:46:40.556134] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:52.601 [2024-09-28 23:46:40.556141] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:52.601 [2024-09-28 23:46:40.556148] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:52.601 [2024-09-28 23:46:40.556156] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:52.601 [2024-09-28 23:46:40.556163] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:52.601 [2024-09-28 23:46:40.556169] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:52.601 [2024-09-28 23:46:40.556176] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:52.601 [2024-09-28 23:46:40.556182] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:52.601 [2024-09-28 23:46:40.556189] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:52.601 [2024-09-28 23:46:40.556195] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:52.601 [2024-09-28 23:46:40.556201] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:52.601 [2024-09-28 23:46:40.556208] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:52.601 [2024-09-28 23:46:40.556214] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:52.601 [2024-09-28 23:46:40.556221] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:52.601 [2024-09-28 23:46:40.556228] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:52.601 [2024-09-28 23:46:40.556234] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:52.601 [2024-09-28 23:46:40.556241] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:52.601 [2024-09-28 23:46:40.556247] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:52.601 [2024-09-28 23:46:40.556254] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:52.601 [2024-09-28 23:46:40.556260] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:52.601 [2024-09-28 23:46:40.556267] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:52.601 [2024-09-28 23:46:40.556273] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:52.601 [2024-09-28 23:46:40.556279] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:52.601 [2024-09-28 23:46:40.556286] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:52.601 [2024-09-28 23:46:40.556292] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:52.601 [2024-09-28 23:46:40.556299] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:52.601 [2024-09-28 23:46:40.556306] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:52.601 [2024-09-28 23:46:40.556313] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:52.601 [2024-09-28 23:46:40.556321] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:52.601 [2024-09-28 23:46:40.556329] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:52.601 [2024-09-28 23:46:40.556337] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:52.601 [2024-09-28 23:46:40.556344] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:52.601 [2024-09-28 23:46:40.556352] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:52.601 [2024-09-28 23:46:40.556359] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:52.602 [2024-09-28 23:46:40.556366] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:52.602 
[2024-09-28 23:46:40.556374] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:52.602 [2024-09-28 23:46:40.556380] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:52.602 [2024-09-28 23:46:40.556387] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:52.602 [2024-09-28 23:46:40.556395] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:52.602 [2024-09-28 23:46:40.556404] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:52.602 [2024-09-28 23:46:40.556413] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:52.602 [2024-09-28 23:46:40.556420] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:52.602 [2024-09-28 23:46:40.556427] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:52.602 [2024-09-28 23:46:40.556434] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:52.602 [2024-09-28 23:46:40.556441] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:52.602 [2024-09-28 23:46:40.556448] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:52.602 [2024-09-28 23:46:40.556456] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:52.602 [2024-09-28 23:46:40.556463] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:52.602 [2024-09-28 23:46:40.556470] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:52.602 [2024-09-28 23:46:40.556477] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:52.602 [2024-09-28 23:46:40.556484] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:52.602 [2024-09-28 23:46:40.556491] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:52.602 [2024-09-28 23:46:40.556498] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:52.602 [2024-09-28 23:46:40.556505] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:52.602 [2024-09-28 23:46:40.556526] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:52.602 [2024-09-28 23:46:40.556535] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:52.602 [2024-09-28 23:46:40.556545] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:25:52.602 [2024-09-28 23:46:40.556552] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:52.602 [2024-09-28 23:46:40.556560] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:52.602 [2024-09-28 23:46:40.556567] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:52.602 [2024-09-28 23:46:40.556577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.602 [2024-09-28 23:46:40.556585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:52.602 [2024-09-28 23:46:40.556596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.638 ms 00:25:52.602 [2024-09-28 23:46:40.556603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.602 [2024-09-28 23:46:40.596073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.602 [2024-09-28 23:46:40.596121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:52.602 [2024-09-28 23:46:40.596138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.414 ms 00:25:52.602 [2024-09-28 23:46:40.596146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.602 [2024-09-28 23:46:40.596239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.602 [2024-09-28 23:46:40.596249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:52.602 [2024-09-28 23:46:40.596261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:25:52.602 [2024-09-28 23:46:40.596269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.602 [2024-09-28 23:46:40.631149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.602 [2024-09-28 23:46:40.631341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:52.602 [2024-09-28 23:46:40.631362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.817 ms 00:25:52.602 [2024-09-28 23:46:40.631372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.602 [2024-09-28 23:46:40.631411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.602 [2024-09-28 23:46:40.631420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:52.602 [2024-09-28 23:46:40.631429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:52.602 [2024-09-28 23:46:40.631437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.602 [2024-09-28 23:46:40.631569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.602 [2024-09-28 23:46:40.631588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:52.602 [2024-09-28 23:46:40.631598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:25:52.602 [2024-09-28 23:46:40.631606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.602 [2024-09-28 23:46:40.631733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.602 [2024-09-28 23:46:40.631743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:52.602 [2024-09-28 23:46:40.631751] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.112 ms 00:25:52.602 [2024-09-28 23:46:40.631759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.602 [2024-09-28 23:46:40.646314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.602 [2024-09-28 23:46:40.646486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:52.602 [2024-09-28 23:46:40.646527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.535 ms 00:25:52.602 [2024-09-28 23:46:40.646536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.602 [2024-09-28 23:46:40.646693] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:25:52.602 [2024-09-28 23:46:40.646706] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:52.602 [2024-09-28 23:46:40.646716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.602 [2024-09-28 23:46:40.646724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:52.602 [2024-09-28 23:46:40.646732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:25:52.602 [2024-09-28 23:46:40.646740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.602 [2024-09-28 23:46:40.660262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.602 [2024-09-28 23:46:40.660407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:52.602 [2024-09-28 23:46:40.660467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.500 ms 00:25:52.602 [2024-09-28 23:46:40.660496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.602 [2024-09-28 23:46:40.660656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.602 [2024-09-28 23:46:40.660682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:52.602 [2024-09-28 23:46:40.660703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:25:52.602 [2024-09-28 23:46:40.660722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.602 [2024-09-28 23:46:40.660788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.602 [2024-09-28 23:46:40.661081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:52.602 [2024-09-28 23:46:40.661123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:25:52.602 [2024-09-28 23:46:40.661142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.602 [2024-09-28 23:46:40.661803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.602 [2024-09-28 23:46:40.661918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:52.602 [2024-09-28 23:46:40.661980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.588 ms 00:25:52.602 [2024-09-28 23:46:40.662004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.602 [2024-09-28 23:46:40.662037] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:25:52.602 [2024-09-28 23:46:40.662069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.602 [2024-09-28 23:46:40.662088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:25:52.602 [2024-09-28 23:46:40.662109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:25:52.602 [2024-09-28 23:46:40.662128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.602 [2024-09-28 23:46:40.674675] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:52.602 [2024-09-28 23:46:40.674962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.602 [2024-09-28 23:46:40.675013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:52.602 [2024-09-28 23:46:40.675153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.802 ms 00:25:52.602 [2024-09-28 23:46:40.675177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.602 [2024-09-28 23:46:40.677254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.602 [2024-09-28 23:46:40.677380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:52.602 [2024-09-28 23:46:40.677439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.034 ms 00:25:52.602 [2024-09-28 23:46:40.677463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.602 [2024-09-28 23:46:40.677589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.602 [2024-09-28 23:46:40.677666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:52.602 [2024-09-28 23:46:40.677691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:25:52.602 [2024-09-28 23:46:40.677710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.602 [2024-09-28 23:46:40.677777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.602 [2024-09-28 23:46:40.677831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:52.602 [2024-09-28 23:46:40.677878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:52.602 [2024-09-28 23:46:40.677900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.603 [2024-09-28 23:46:40.677953] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:52.603 [2024-09-28 23:46:40.678004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.603 [2024-09-28 23:46:40.678026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:52.603 [2024-09-28 23:46:40.678051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:25:52.603 [2024-09-28 23:46:40.678070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.603 [2024-09-28 23:46:40.704474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.603 [2024-09-28 23:46:40.704658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:52.603 [2024-09-28 23:46:40.704717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.342 ms 00:25:52.603 [2024-09-28 23:46:40.704740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.603 [2024-09-28 23:46:40.705098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.603 [2024-09-28 23:46:40.705186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:52.603 [2024-09-28 23:46:40.705285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.051 ms 00:25:52.603 [2024-09-28 23:46:40.705309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.603 [2024-09-28 23:46:40.706566] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 157.123 ms, result 0 00:26:39.621  Copying: 10/1024 [MB] (10 MBps) Copying: 20/1024 [MB] (10 MBps) Copying: 30/1024 [MB] (10 MBps) Copying: 40/1024 [MB] (10 MBps) Copying: 51/1024 [MB] (10 MBps) Copying: 61/1024 [MB] (10 MBps) Copying: 79/1024 [MB] (18 MBps) Copying: 96/1024 [MB] (17 MBps) Copying: 138/1024 [MB] (42 MBps) Copying: 164/1024 [MB] (25 MBps) Copying: 184/1024 [MB] (20 MBps) Copying: 212/1024 [MB] (27 MBps) Copying: 235/1024 [MB] (23 MBps) Copying: 256/1024 [MB] (20 MBps) Copying: 283/1024 [MB] (26 MBps) Copying: 306/1024 [MB] (23 MBps) Copying: 323/1024 [MB] (16 MBps) Copying: 349/1024 [MB] (25 MBps) Copying: 396/1024 [MB] (46 MBps) Copying: 414/1024 [MB] (18 MBps) Copying: 452/1024 [MB] (37 MBps) Copying: 471/1024 [MB] (19 MBps) Copying: 485/1024 [MB] (13 MBps) Copying: 524/1024 [MB] (39 MBps) Copying: 551/1024 [MB] (26 MBps) Copying: 569/1024 [MB] (18 MBps) Copying: 587/1024 [MB] (18 MBps) Copying: 609/1024 [MB] (22 MBps) Copying: 627/1024 [MB] (17 MBps) Copying: 649/1024 [MB] (21 MBps) Copying: 671/1024 [MB] (22 MBps) Copying: 685/1024 [MB] (13 MBps) Copying: 704/1024 [MB] (18 MBps) Copying: 718/1024 [MB] (14 MBps) Copying: 732/1024 [MB] (14 MBps) Copying: 766/1024 [MB] (33 MBps) Copying: 798/1024 [MB] (32 MBps) Copying: 834/1024 [MB] (36 MBps) Copying: 860/1024 [MB] (26 MBps) Copying: 871/1024 [MB] (10 MBps) Copying: 895/1024 [MB] (24 MBps) Copying: 916/1024 [MB] (20 MBps) Copying: 936/1024 [MB] (19 MBps) Copying: 950/1024 [MB] (14 MBps) Copying: 995/1024 [MB] (44 MBps) Copying: 1023/1024 [MB] (27 MBps) Copying: 1024/1024 [MB] (average 21 MBps)[2024-09-28 23:47:27.707240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:39.621 [2024-09-28 23:47:27.707331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:39.621 [2024-09-28 23:47:27.707349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:39.621 [2024-09-28 23:47:27.707360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.621 [2024-09-28 23:47:27.710442] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:39.621 [2024-09-28 23:47:27.715880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:39.621 [2024-09-28 23:47:27.715932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:39.621 [2024-09-28 23:47:27.715945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.239 ms 00:26:39.621 [2024-09-28 23:47:27.715954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.621 [2024-09-28 23:47:27.726894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:39.621 [2024-09-28 23:47:27.726948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:39.621 [2024-09-28 23:47:27.726961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.709 ms 00:26:39.621 [2024-09-28 23:47:27.726970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.621 [2024-09-28 23:47:27.727001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:39.621 [2024-09-28 23:47:27.727012] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:26:39.621 [2024-09-28 23:47:27.727030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:39.621 [2024-09-28 23:47:27.727053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.621 [2024-09-28 23:47:27.727114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:39.621 [2024-09-28 23:47:27.727124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:26:39.621 [2024-09-28 23:47:27.727133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:26:39.621 [2024-09-28 23:47:27.727142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.621 [2024-09-28 23:47:27.727156] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:39.621 [2024-09-28 23:47:27.727169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 129536 / 261120 wr_cnt: 1 state: open 00:26:39.621 [2024-09-28 23:47:27.727179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:26:39.621 [2024-09-28 23:47:27.727187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:39.621 [2024-09-28 23:47:27.727195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:39.622 [2024-09-28 23:47:27.727203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:39.622 [2024-09-28 23:47:27.727211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:39.622 [2024-09-28 23:47:27.727218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:39.622 [2024-09-28 23:47:27.727226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:39.622 [2024-09-28 23:47:27.727234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:39.622 [2024-09-28 23:47:27.727241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:39.622 [2024-09-28 23:47:27.727248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:39.622 [2024-09-28 23:47:27.727256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:39.622 [2024-09-28 23:47:27.727265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:39.622 [2024-09-28 23:47:27.727272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:39.622 [2024-09-28 23:47:27.727280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:39.622 [2024-09-28 23:47:27.727289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:39.622 [2024-09-28 23:47:27.727298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:39.622 [2024-09-28 23:47:27.727306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:39.622 [2024-09-28 23:47:27.727313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 
state: free 00:26:39.622 [2024-09-28 23:47:27.727322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:39.622 [2024-09-28 23:47:27.727332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:39.622 [2024-09-28 23:47:27.727340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:39.622 [2024-09-28 23:47:27.727348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:39.622 [2024-09-28 23:47:27.727355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:39.622 [2024-09-28 23:47:27.727363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:39.622 [2024-09-28 23:47:27.727370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:39.622 [2024-09-28 23:47:27.727378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:39.622 [2024-09-28 23:47:27.727386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:39.622 [2024-09-28 23:47:27.727394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:39.622 [2024-09-28 23:47:27.727402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:39.622 [2024-09-28 23:47:27.727410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:39.622 [2024-09-28 23:47:27.727417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:39.622 [2024-09-28 23:47:27.727425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:39.622 [2024-09-28 23:47:27.727433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:39.622 [2024-09-28 23:47:27.727440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:39.622 [2024-09-28 23:47:27.727448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:39.622 [2024-09-28 23:47:27.727455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:39.622 [2024-09-28 23:47:27.727463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:39.622 [2024-09-28 23:47:27.727470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:39.622 [2024-09-28 23:47:27.727478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:39.622 [2024-09-28 23:47:27.727485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:39.622 [2024-09-28 23:47:27.727492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:39.622 [2024-09-28 23:47:27.727501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:39.622 [2024-09-28 23:47:27.727532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 
0 / 261120 wr_cnt: 0 state: free 00:26:39.622 [2024-09-28 23:47:27.727541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:39.622 [2024-09-28 23:47:27.727549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:39.622 [2024-09-28 23:47:27.727558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:39.622 [2024-09-28 23:47:27.727567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:39.622 [2024-09-28 23:47:27.727575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:39.622 [2024-09-28 23:47:27.727584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:39.622 [2024-09-28 23:47:27.727593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:39.622 [2024-09-28 23:47:27.727613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:39.622 [2024-09-28 23:47:27.727621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:39.622 [2024-09-28 23:47:27.727629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:39.622 [2024-09-28 23:47:27.727637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:39.622 [2024-09-28 23:47:27.727645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:39.622 [2024-09-28 23:47:27.727654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:39.622 [2024-09-28 23:47:27.727661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:39.622 [2024-09-28 23:47:27.727669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:39.622 [2024-09-28 23:47:27.727677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:39.622 [2024-09-28 23:47:27.727686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:39.622 [2024-09-28 23:47:27.727694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:39.622 [2024-09-28 23:47:27.727702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:39.622 [2024-09-28 23:47:27.727710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:39.622 [2024-09-28 23:47:27.727718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:39.622 [2024-09-28 23:47:27.727726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:39.622 [2024-09-28 23:47:27.727734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:39.622 [2024-09-28 23:47:27.727742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:39.622 [2024-09-28 23:47:27.727749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:39.622 [2024-09-28 23:47:27.727758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:39.622 [2024-09-28 23:47:27.727766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:39.622 [2024-09-28 23:47:27.727774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:39.622 [2024-09-28 23:47:27.727782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:39.622 [2024-09-28 23:47:27.727790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:39.622 [2024-09-28 23:47:27.727798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:39.622 [2024-09-28 23:47:27.727806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:39.622 [2024-09-28 23:47:27.727814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:39.622 [2024-09-28 23:47:27.727822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:39.622 [2024-09-28 23:47:27.727837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:39.622 [2024-09-28 23:47:27.727845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:39.622 [2024-09-28 23:47:27.727853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:39.622 [2024-09-28 23:47:27.727860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:39.622 [2024-09-28 23:47:27.727868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:39.622 [2024-09-28 23:47:27.727876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:39.622 [2024-09-28 23:47:27.727883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:39.622 [2024-09-28 23:47:27.727891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:39.622 [2024-09-28 23:47:27.727899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:39.622 [2024-09-28 23:47:27.727907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:39.622 [2024-09-28 23:47:27.727914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:39.622 [2024-09-28 23:47:27.727922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:39.622 [2024-09-28 23:47:27.727929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:39.622 [2024-09-28 23:47:27.727938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:39.622 [2024-09-28 23:47:27.727946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:39.622 [2024-09-28 23:47:27.727953] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:39.623 [2024-09-28 23:47:27.727961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:39.623 [2024-09-28 23:47:27.727968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:39.623 [2024-09-28 23:47:27.727976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:39.623 [2024-09-28 23:47:27.727984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:39.623 [2024-09-28 23:47:27.727992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:39.623 [2024-09-28 23:47:27.727999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:39.623 [2024-09-28 23:47:27.728015] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:39.623 [2024-09-28 23:47:27.728024] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a13dbf6d-6874-41aa-9963-ba6a63aee207 00:26:39.623 [2024-09-28 23:47:27.728035] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 129536 00:26:39.623 [2024-09-28 23:47:27.728042] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 129568 00:26:39.623 [2024-09-28 23:47:27.728050] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 129536 00:26:39.623 [2024-09-28 23:47:27.728058] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0002 00:26:39.623 [2024-09-28 23:47:27.728065] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:39.623 [2024-09-28 23:47:27.728073] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:39.623 [2024-09-28 23:47:27.728080] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:39.623 [2024-09-28 23:47:27.728088] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:39.623 [2024-09-28 23:47:27.728094] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:39.623 [2024-09-28 23:47:27.728102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:39.623 [2024-09-28 23:47:27.728110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:39.623 [2024-09-28 23:47:27.728119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.947 ms 00:26:39.623 [2024-09-28 23:47:27.728127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.623 [2024-09-28 23:47:27.742233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:39.623 [2024-09-28 23:47:27.742424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:39.623 [2024-09-28 23:47:27.742444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.089 ms 00:26:39.623 [2024-09-28 23:47:27.742453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.623 [2024-09-28 23:47:27.742898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:39.623 [2024-09-28 23:47:27.742910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:39.623 [2024-09-28 23:47:27.742920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.420 ms 00:26:39.623 [2024-09-28 23:47:27.742935] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:26:39.623 [2024-09-28 23:47:27.775136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:39.623 [2024-09-28 23:47:27.775192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:39.623 [2024-09-28 23:47:27.775204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:39.623 [2024-09-28 23:47:27.775212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.623 [2024-09-28 23:47:27.775281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:39.623 [2024-09-28 23:47:27.775291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:39.623 [2024-09-28 23:47:27.775300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:39.623 [2024-09-28 23:47:27.775316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.623 [2024-09-28 23:47:27.775399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:39.623 [2024-09-28 23:47:27.775410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:39.623 [2024-09-28 23:47:27.775418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:39.623 [2024-09-28 23:47:27.775426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.623 [2024-09-28 23:47:27.775442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:39.623 [2024-09-28 23:47:27.775451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:39.623 [2024-09-28 23:47:27.775459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:39.623 [2024-09-28 23:47:27.775467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.883 [2024-09-28 23:47:27.860166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:39.883 [2024-09-28 23:47:27.860223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:39.883 [2024-09-28 23:47:27.860244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:39.883 [2024-09-28 23:47:27.860252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.883 [2024-09-28 23:47:27.929475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:39.883 [2024-09-28 23:47:27.929550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:39.883 [2024-09-28 23:47:27.929563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:39.883 [2024-09-28 23:47:27.929579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.883 [2024-09-28 23:47:27.929641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:39.883 [2024-09-28 23:47:27.929652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:39.883 [2024-09-28 23:47:27.929661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:39.883 [2024-09-28 23:47:27.929669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.883 [2024-09-28 23:47:27.929726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:39.883 [2024-09-28 23:47:27.929736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:39.883 [2024-09-28 23:47:27.929745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:26:39.883 [2024-09-28 23:47:27.929753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.883 [2024-09-28 23:47:27.929838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:39.883 [2024-09-28 23:47:27.929849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:39.883 [2024-09-28 23:47:27.929858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:39.883 [2024-09-28 23:47:27.929866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.883 [2024-09-28 23:47:27.929898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:39.883 [2024-09-28 23:47:27.929908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:39.883 [2024-09-28 23:47:27.929917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:39.883 [2024-09-28 23:47:27.929925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.883 [2024-09-28 23:47:27.929970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:39.883 [2024-09-28 23:47:27.929980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:39.883 [2024-09-28 23:47:27.929989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:39.883 [2024-09-28 23:47:27.929997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.883 [2024-09-28 23:47:27.930043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:39.883 [2024-09-28 23:47:27.930053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:39.883 [2024-09-28 23:47:27.930061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:39.883 [2024-09-28 23:47:27.930069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.883 [2024-09-28 23:47:27.930203] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 225.887 ms, result 0 00:26:41.265 00:26:41.265 00:26:41.265 23:47:29 ftl.ftl_restore_fast -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:26:41.265 [2024-09-28 23:47:29.290753] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:26:41.265 [2024-09-28 23:47:29.290856] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80794 ] 00:26:41.524 [2024-09-28 23:47:29.435834] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:41.524 [2024-09-28 23:47:29.625061] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:26:41.783 [2024-09-28 23:47:29.890034] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:41.783 [2024-09-28 23:47:29.890108] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:42.045 [2024-09-28 23:47:30.048301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.045 [2024-09-28 23:47:30.048501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:42.045 [2024-09-28 23:47:30.048533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:42.045 [2024-09-28 23:47:30.048547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.045 [2024-09-28 23:47:30.048604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.045 [2024-09-28 23:47:30.048614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:42.045 [2024-09-28 23:47:30.048623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:26:42.045 [2024-09-28 23:47:30.048630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.045 [2024-09-28 23:47:30.048650] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:42.045 [2024-09-28 23:47:30.049364] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:42.045 [2024-09-28 23:47:30.049384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.045 [2024-09-28 23:47:30.049392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:42.045 [2024-09-28 23:47:30.049400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.739 ms 00:26:42.045 [2024-09-28 23:47:30.049407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.045 [2024-09-28 23:47:30.049664] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:26:42.045 [2024-09-28 23:47:30.049691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.045 [2024-09-28 23:47:30.049700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:42.045 [2024-09-28 23:47:30.049709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:26:42.045 [2024-09-28 23:47:30.049716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.045 [2024-09-28 23:47:30.049758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.045 [2024-09-28 23:47:30.049766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:42.045 [2024-09-28 23:47:30.049774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:26:42.045 [2024-09-28 23:47:30.049783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.045 [2024-09-28 23:47:30.050037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:26:42.045 [2024-09-28 23:47:30.050048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:42.045 [2024-09-28 23:47:30.050056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.217 ms 00:26:42.045 [2024-09-28 23:47:30.050064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.045 [2024-09-28 23:47:30.050185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.045 [2024-09-28 23:47:30.050196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:42.045 [2024-09-28 23:47:30.050207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:26:42.045 [2024-09-28 23:47:30.050214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.045 [2024-09-28 23:47:30.050237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.045 [2024-09-28 23:47:30.050245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:42.045 [2024-09-28 23:47:30.050253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:26:42.045 [2024-09-28 23:47:30.050260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.045 [2024-09-28 23:47:30.050276] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:42.045 [2024-09-28 23:47:30.053951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.045 [2024-09-28 23:47:30.053981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:42.045 [2024-09-28 23:47:30.053990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.679 ms 00:26:42.045 [2024-09-28 23:47:30.053997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.045 [2024-09-28 23:47:30.054029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.045 [2024-09-28 23:47:30.054036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:42.045 [2024-09-28 23:47:30.054047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:26:42.045 [2024-09-28 23:47:30.054054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.045 [2024-09-28 23:47:30.054099] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:42.045 [2024-09-28 23:47:30.054120] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:42.045 [2024-09-28 23:47:30.054153] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:42.045 [2024-09-28 23:47:30.054167] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:26:42.045 [2024-09-28 23:47:30.054267] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:42.045 [2024-09-28 23:47:30.054279] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:42.045 [2024-09-28 23:47:30.054290] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:42.045 [2024-09-28 23:47:30.054300] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:42.045 [2024-09-28 23:47:30.054308] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:42.045 [2024-09-28 23:47:30.054315] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:42.045 [2024-09-28 23:47:30.054322] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:42.045 [2024-09-28 23:47:30.054329] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:42.045 [2024-09-28 23:47:30.054337] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:42.045 [2024-09-28 23:47:30.054344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.045 [2024-09-28 23:47:30.054351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:42.045 [2024-09-28 23:47:30.054358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.247 ms 00:26:42.045 [2024-09-28 23:47:30.054368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.045 [2024-09-28 23:47:30.054449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.045 [2024-09-28 23:47:30.054457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:42.045 [2024-09-28 23:47:30.054464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:26:42.045 [2024-09-28 23:47:30.054471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.045 [2024-09-28 23:47:30.054584] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:42.045 [2024-09-28 23:47:30.054595] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:42.045 [2024-09-28 23:47:30.054603] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:42.045 [2024-09-28 23:47:30.054611] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:42.045 [2024-09-28 23:47:30.054621] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:42.045 [2024-09-28 23:47:30.054628] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:42.045 [2024-09-28 23:47:30.054635] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:42.045 [2024-09-28 23:47:30.054643] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:42.045 [2024-09-28 23:47:30.054651] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:42.045 [2024-09-28 23:47:30.054657] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:42.045 [2024-09-28 23:47:30.054664] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:42.045 [2024-09-28 23:47:30.054670] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:42.045 [2024-09-28 23:47:30.054677] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:42.045 [2024-09-28 23:47:30.054683] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:42.045 [2024-09-28 23:47:30.054690] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:42.045 [2024-09-28 23:47:30.054702] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:42.045 [2024-09-28 23:47:30.054708] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:42.045 [2024-09-28 23:47:30.054716] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:42.045 [2024-09-28 23:47:30.054723] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:42.045 [2024-09-28 23:47:30.054729] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:42.045 [2024-09-28 23:47:30.054736] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:42.045 [2024-09-28 23:47:30.054743] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:42.046 [2024-09-28 23:47:30.054749] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:42.046 [2024-09-28 23:47:30.054756] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:42.046 [2024-09-28 23:47:30.054762] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:42.046 [2024-09-28 23:47:30.054768] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:42.046 [2024-09-28 23:47:30.054775] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:42.046 [2024-09-28 23:47:30.054781] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:42.046 [2024-09-28 23:47:30.054788] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:42.046 [2024-09-28 23:47:30.054794] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:42.046 [2024-09-28 23:47:30.054800] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:42.046 [2024-09-28 23:47:30.054806] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:42.046 [2024-09-28 23:47:30.054812] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:42.046 [2024-09-28 23:47:30.054819] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:42.046 [2024-09-28 23:47:30.054826] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:42.046 [2024-09-28 23:47:30.054832] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:42.046 [2024-09-28 23:47:30.054838] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:42.046 [2024-09-28 23:47:30.054844] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:42.046 [2024-09-28 23:47:30.054851] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:42.046 [2024-09-28 23:47:30.054857] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:42.046 [2024-09-28 23:47:30.054863] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:42.046 [2024-09-28 23:47:30.054870] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:42.046 [2024-09-28 23:47:30.054877] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:42.046 [2024-09-28 23:47:30.054883] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:42.046 [2024-09-28 23:47:30.054890] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:42.046 [2024-09-28 23:47:30.054897] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:42.046 [2024-09-28 23:47:30.054904] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:42.046 [2024-09-28 23:47:30.054912] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:42.046 [2024-09-28 23:47:30.054918] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:42.046 [2024-09-28 23:47:30.054925] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:42.046 
[2024-09-28 23:47:30.054932] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:42.046 [2024-09-28 23:47:30.054939] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:42.046 [2024-09-28 23:47:30.054945] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:42.046 [2024-09-28 23:47:30.054954] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:42.046 [2024-09-28 23:47:30.054962] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:42.046 [2024-09-28 23:47:30.054970] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:42.046 [2024-09-28 23:47:30.054977] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:42.046 [2024-09-28 23:47:30.054984] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:42.046 [2024-09-28 23:47:30.054991] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:42.046 [2024-09-28 23:47:30.054998] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:42.046 [2024-09-28 23:47:30.055005] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:42.046 [2024-09-28 23:47:30.055012] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:42.046 [2024-09-28 23:47:30.055019] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:42.046 [2024-09-28 23:47:30.055026] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:42.046 [2024-09-28 23:47:30.055042] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:42.046 [2024-09-28 23:47:30.055049] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:42.046 [2024-09-28 23:47:30.055056] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:42.046 [2024-09-28 23:47:30.055063] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:42.046 [2024-09-28 23:47:30.055070] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:42.046 [2024-09-28 23:47:30.055077] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:42.046 [2024-09-28 23:47:30.055085] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:42.046 [2024-09-28 23:47:30.055094] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:26:42.046 [2024-09-28 23:47:30.055101] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:42.046 [2024-09-28 23:47:30.055108] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:42.046 [2024-09-28 23:47:30.055115] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:42.046 [2024-09-28 23:47:30.055123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.046 [2024-09-28 23:47:30.055130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:42.046 [2024-09-28 23:47:30.055139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.620 ms 00:26:42.046 [2024-09-28 23:47:30.055146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.046 [2024-09-28 23:47:30.092013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.046 [2024-09-28 23:47:30.092059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:42.046 [2024-09-28 23:47:30.092074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.825 ms 00:26:42.046 [2024-09-28 23:47:30.092083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.046 [2024-09-28 23:47:30.092177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.046 [2024-09-28 23:47:30.092187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:42.046 [2024-09-28 23:47:30.092198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:26:42.046 [2024-09-28 23:47:30.092206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.046 [2024-09-28 23:47:30.123521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.046 [2024-09-28 23:47:30.123558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:42.046 [2024-09-28 23:47:30.123569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.257 ms 00:26:42.046 [2024-09-28 23:47:30.123576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.046 [2024-09-28 23:47:30.123610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.046 [2024-09-28 23:47:30.123619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:42.046 [2024-09-28 23:47:30.123627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:26:42.046 [2024-09-28 23:47:30.123635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.046 [2024-09-28 23:47:30.123724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.046 [2024-09-28 23:47:30.123739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:42.046 [2024-09-28 23:47:30.123747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:26:42.046 [2024-09-28 23:47:30.123754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.046 [2024-09-28 23:47:30.123866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.046 [2024-09-28 23:47:30.123875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:42.046 [2024-09-28 23:47:30.123883] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:26:42.046 [2024-09-28 23:47:30.123891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.046 [2024-09-28 23:47:30.136708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.046 [2024-09-28 23:47:30.136741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:42.046 [2024-09-28 23:47:30.136750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.799 ms 00:26:42.046 [2024-09-28 23:47:30.136758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.046 [2024-09-28 23:47:30.136872] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:26:42.046 [2024-09-28 23:47:30.136884] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:42.046 [2024-09-28 23:47:30.136894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.046 [2024-09-28 23:47:30.136902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:42.046 [2024-09-28 23:47:30.136909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:26:42.046 [2024-09-28 23:47:30.136917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.046 [2024-09-28 23:47:30.149182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.046 [2024-09-28 23:47:30.149328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:42.046 [2024-09-28 23:47:30.149346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.249 ms 00:26:42.046 [2024-09-28 23:47:30.149358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.046 [2024-09-28 23:47:30.149476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.046 [2024-09-28 23:47:30.149484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:42.046 [2024-09-28 23:47:30.149492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:26:42.046 [2024-09-28 23:47:30.149499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.046 [2024-09-28 23:47:30.149575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.046 [2024-09-28 23:47:30.149586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:42.046 [2024-09-28 23:47:30.149595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:26:42.046 [2024-09-28 23:47:30.149602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.047 [2024-09-28 23:47:30.150165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.047 [2024-09-28 23:47:30.150182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:42.047 [2024-09-28 23:47:30.150189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.525 ms 00:26:42.047 [2024-09-28 23:47:30.150197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.047 [2024-09-28 23:47:30.150213] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:26:42.047 [2024-09-28 23:47:30.150222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.047 [2024-09-28 23:47:30.150229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:26:42.047 [2024-09-28 23:47:30.150237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:26:42.047 [2024-09-28 23:47:30.150243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.047 [2024-09-28 23:47:30.161688] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:42.047 [2024-09-28 23:47:30.161821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.047 [2024-09-28 23:47:30.161835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:42.047 [2024-09-28 23:47:30.161844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.560 ms 00:26:42.047 [2024-09-28 23:47:30.161852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.047 [2024-09-28 23:47:30.164103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.047 [2024-09-28 23:47:30.164234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:42.047 [2024-09-28 23:47:30.164249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.231 ms 00:26:42.047 [2024-09-28 23:47:30.164256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.047 [2024-09-28 23:47:30.164327] mngt/ftl_mngt_band.c: 414:ftl_mngt_finalize_init_bands: *NOTICE*: [FTL][ftl0] SHM: band open P2L map df_id 0x2400000 00:26:42.047 [2024-09-28 23:47:30.164825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.047 [2024-09-28 23:47:30.164835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:42.047 [2024-09-28 23:47:30.164843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.516 ms 00:26:42.047 [2024-09-28 23:47:30.164855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.047 [2024-09-28 23:47:30.164879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.047 [2024-09-28 23:47:30.164888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:42.047 [2024-09-28 23:47:30.164896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:42.047 [2024-09-28 23:47:30.164902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.047 [2024-09-28 23:47:30.164933] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:42.047 [2024-09-28 23:47:30.164943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.047 [2024-09-28 23:47:30.164953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:42.047 [2024-09-28 23:47:30.164960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:26:42.047 [2024-09-28 23:47:30.164967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.047 [2024-09-28 23:47:30.190214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.047 [2024-09-28 23:47:30.190259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:42.047 [2024-09-28 23:47:30.190271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.228 ms 00:26:42.047 [2024-09-28 23:47:30.190280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.047 [2024-09-28 23:47:30.190368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.047 [2024-09-28 23:47:30.190378] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:46.507 [2024-09-28 23:47:30.190388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:27:46.507 [2024-09-28 23:47:30.190395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.507 [2024-09-28 23:47:30.191494] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 142.740 ms, result 0 00:27:46.506  Copying: 1024/1024 [MB] (average 16 MBps)[2024-09-28 23:48:34.475722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.506 [2024-09-28 23:48:34.475817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:46.506 [2024-09-28 23:48:34.475837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:46.506 [2024-09-28 23:48:34.475848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.506 [2024-09-28 23:48:34.475877] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:46.506 [2024-09-28 23:48:34.480041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.506 [2024-09-28 23:48:34.480092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:46.506 [2024-09-28 23:48:34.480107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.138 ms 00:27:46.506
[2024-09-28 23:48:34.480118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.506 [2024-09-28 23:48:34.480421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.506 [2024-09-28 23:48:34.480435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:46.506 [2024-09-28 23:48:34.480447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.271 ms 00:27:46.506 [2024-09-28 23:48:34.480457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.506 [2024-09-28 23:48:34.480495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.506 [2024-09-28 23:48:34.480525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:27:46.506 [2024-09-28 23:48:34.480536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:27:46.506 [2024-09-28 23:48:34.480547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.506 [2024-09-28 23:48:34.480618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.506 [2024-09-28 23:48:34.480632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:27:46.506 [2024-09-28 23:48:34.480643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:27:46.506 [2024-09-28 23:48:34.480654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.506 [2024-09-28 23:48:34.480671] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:46.506 [2024-09-28 23:48:34.480689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:27:46.506 [2024-09-28 23:48:34.480702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:46.506 [2024-09-28 23:48:34.480712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:46.506 [2024-09-28 23:48:34.480722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:46.506 [2024-09-28 23:48:34.480733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:46.506 [2024-09-28 23:48:34.480743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:46.506 [2024-09-28 23:48:34.480753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:46.506 [2024-09-28 23:48:34.480763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:46.506 [2024-09-28 23:48:34.480773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:46.506 [2024-09-28 23:48:34.480783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:46.506 [2024-09-28 23:48:34.480793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:46.506 [2024-09-28 23:48:34.480803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:46.506 [2024-09-28 23:48:34.480812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:46.506 [2024-09-28 23:48:34.480822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 
wr_cnt: 0 state: free 00:27:46.506 [2024-09-28 23:48:34.480831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:46.506 [2024-09-28 23:48:34.480841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:46.506 [2024-09-28 23:48:34.480851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:46.506 [2024-09-28 23:48:34.480861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:46.506 [2024-09-28 23:48:34.480870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:46.506 [2024-09-28 23:48:34.480880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:46.506 [2024-09-28 23:48:34.480891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:46.506 [2024-09-28 23:48:34.480901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:46.506 [2024-09-28 23:48:34.480911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:46.506 [2024-09-28 23:48:34.480921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:46.506 [2024-09-28 23:48:34.480931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:46.506 [2024-09-28 23:48:34.480942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:46.506 [2024-09-28 23:48:34.480951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:46.506 [2024-09-28 23:48:34.480961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:46.506 [2024-09-28 23:48:34.480971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:46.506 [2024-09-28 23:48:34.480981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:46.506 [2024-09-28 23:48:34.480990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:46.506 [2024-09-28 23:48:34.480999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:46.506 [2024-09-28 23:48:34.481009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:46.506 [2024-09-28 23:48:34.481019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:46.506 [2024-09-28 23:48:34.481032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:46.506 [2024-09-28 23:48:34.481042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:46.506 [2024-09-28 23:48:34.481052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:46.506 [2024-09-28 23:48:34.481062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:46.506 [2024-09-28 23:48:34.481072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:46.506 [2024-09-28 23:48:34.481082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:46.506 [2024-09-28 23:48:34.481092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:46.506 [2024-09-28 23:48:34.481102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:46.506 [2024-09-28 23:48:34.481111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:46.506 [2024-09-28 23:48:34.481121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:46.506 [2024-09-28 23:48:34.481131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:46.506 [2024-09-28 23:48:34.481141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:46.506 [2024-09-28 23:48:34.481151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:46.506 [2024-09-28 23:48:34.481160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:46.506 [2024-09-28 23:48:34.481169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:46.506 [2024-09-28 23:48:34.481180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:46.506 [2024-09-28 23:48:34.481189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:46.506 [2024-09-28 23:48:34.481209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:46.506 [2024-09-28 23:48:34.481219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:46.506 [2024-09-28 23:48:34.481229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:46.506 [2024-09-28 23:48:34.481240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:46.506 [2024-09-28 23:48:34.481249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:46.506 [2024-09-28 23:48:34.481259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:46.506 [2024-09-28 23:48:34.481269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:46.506 [2024-09-28 23:48:34.481278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:46.506 [2024-09-28 23:48:34.481287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:46.506 [2024-09-28 23:48:34.481297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:46.506 [2024-09-28 23:48:34.481307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:46.506 [2024-09-28 23:48:34.481316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:46.507 [2024-09-28 23:48:34.481326] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:46.507 [2024-09-28 23:48:34.481335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:46.507 [2024-09-28 23:48:34.481345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:46.507 [2024-09-28 23:48:34.481357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:46.507 [2024-09-28 23:48:34.481368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:46.507 [2024-09-28 23:48:34.481379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:46.507 [2024-09-28 23:48:34.481389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:46.507 [2024-09-28 23:48:34.481400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:46.507 [2024-09-28 23:48:34.481412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:46.507 [2024-09-28 23:48:34.481422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:46.507 [2024-09-28 23:48:34.481435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:46.507 [2024-09-28 23:48:34.481448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:46.507 [2024-09-28 23:48:34.481458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:46.507 [2024-09-28 23:48:34.481468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:46.507 [2024-09-28 23:48:34.481479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:46.507 [2024-09-28 23:48:34.481491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:46.507 [2024-09-28 23:48:34.481503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:46.507 [2024-09-28 23:48:34.481531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:46.507 [2024-09-28 23:48:34.481542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:46.507 [2024-09-28 23:48:34.481551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:46.507 [2024-09-28 23:48:34.481562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:46.507 [2024-09-28 23:48:34.481575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:46.507 [2024-09-28 23:48:34.481585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:46.507 [2024-09-28 23:48:34.481596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:46.507 [2024-09-28 23:48:34.481606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:46.507 [2024-09-28 23:48:34.481617] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:46.507 [2024-09-28 23:48:34.481627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:46.507 [2024-09-28 23:48:34.481637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:46.507 [2024-09-28 23:48:34.481647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:46.507 [2024-09-28 23:48:34.481657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:46.507 [2024-09-28 23:48:34.481667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:46.507 [2024-09-28 23:48:34.481677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:46.507 [2024-09-28 23:48:34.481687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:46.507 [2024-09-28 23:48:34.481699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:46.507 [2024-09-28 23:48:34.481710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:46.507 [2024-09-28 23:48:34.481723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:46.507 [2024-09-28 23:48:34.481735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:46.507 [2024-09-28 23:48:34.481756] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:46.507 [2024-09-28 23:48:34.481782] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a13dbf6d-6874-41aa-9963-ba6a63aee207 00:27:46.507 [2024-09-28 23:48:34.481795] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:27:46.507 [2024-09-28 23:48:34.481806] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 1568 00:27:46.507 [2024-09-28 23:48:34.481816] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 1536 00:27:46.507 [2024-09-28 23:48:34.481828] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0208 00:27:46.507 [2024-09-28 23:48:34.481839] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:46.507 [2024-09-28 23:48:34.481850] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:46.507 [2024-09-28 23:48:34.481862] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:46.507 [2024-09-28 23:48:34.481874] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:46.507 [2024-09-28 23:48:34.481884] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:46.507 [2024-09-28 23:48:34.481894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.507 [2024-09-28 23:48:34.481906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:46.507 [2024-09-28 23:48:34.481918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.224 ms 00:27:46.507 [2024-09-28 23:48:34.481929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.507 [2024-09-28 23:48:34.497324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.507 [2024-09-28 23:48:34.497377] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:46.507 [2024-09-28 23:48:34.497390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.372 ms 00:27:46.507 [2024-09-28 23:48:34.497398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.507 [2024-09-28 23:48:34.497833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.507 [2024-09-28 23:48:34.497858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:46.507 [2024-09-28 23:48:34.497875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.409 ms 00:27:46.507 [2024-09-28 23:48:34.497882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.507 [2024-09-28 23:48:34.529554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:46.507 [2024-09-28 23:48:34.529606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:46.507 [2024-09-28 23:48:34.529619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:46.507 [2024-09-28 23:48:34.529629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.507 [2024-09-28 23:48:34.529706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:46.507 [2024-09-28 23:48:34.529730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:46.507 [2024-09-28 23:48:34.529747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:46.507 [2024-09-28 23:48:34.529756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.507 [2024-09-28 23:48:34.529816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:46.507 [2024-09-28 23:48:34.529827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:46.507 [2024-09-28 23:48:34.529837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:46.507 [2024-09-28 23:48:34.529845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.507 [2024-09-28 23:48:34.529863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:46.507 [2024-09-28 23:48:34.529873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:46.507 [2024-09-28 23:48:34.529883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:46.507 [2024-09-28 23:48:34.529895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.507 [2024-09-28 23:48:34.615759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:46.507 [2024-09-28 23:48:34.615825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:46.507 [2024-09-28 23:48:34.615839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:46.507 [2024-09-28 23:48:34.615848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.769 [2024-09-28 23:48:34.686058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:46.769 [2024-09-28 23:48:34.686116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:46.769 [2024-09-28 23:48:34.686136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:46.769 [2024-09-28 23:48:34.686145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.769 [2024-09-28 23:48:34.686225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:27:46.769 [2024-09-28 23:48:34.686235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:46.769 [2024-09-28 23:48:34.686245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:46.769 [2024-09-28 23:48:34.686253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.769 [2024-09-28 23:48:34.686292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:46.769 [2024-09-28 23:48:34.686302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:46.769 [2024-09-28 23:48:34.686311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:46.769 [2024-09-28 23:48:34.686319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.769 [2024-09-28 23:48:34.686403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:46.769 [2024-09-28 23:48:34.686412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:46.769 [2024-09-28 23:48:34.686422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:46.769 [2024-09-28 23:48:34.686430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.769 [2024-09-28 23:48:34.686457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:46.769 [2024-09-28 23:48:34.686465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:46.769 [2024-09-28 23:48:34.686472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:46.769 [2024-09-28 23:48:34.686480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.769 [2024-09-28 23:48:34.686554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:46.769 [2024-09-28 23:48:34.686565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:46.769 [2024-09-28 23:48:34.686573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:46.769 [2024-09-28 23:48:34.686581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.769 [2024-09-28 23:48:34.686626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:46.769 [2024-09-28 23:48:34.686636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:46.769 [2024-09-28 23:48:34.686645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:46.769 [2024-09-28 23:48:34.686654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.769 [2024-09-28 23:48:34.686790] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 211.040 ms, result 0 00:27:47.712 00:27:47.713 00:27:47.713 23:48:35 ftl.ftl_restore_fast -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:27:49.628 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:27:49.628 23:48:37 ftl.ftl_restore_fast -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:27:49.628 23:48:37 ftl.ftl_restore_fast -- ftl/restore.sh@85 -- # restore_kill 00:27:49.628 23:48:37 ftl.ftl_restore_fast -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:27:49.890 23:48:37 ftl.ftl_restore_fast -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:27:49.890 23:48:37 ftl.ftl_restore_fast -- ftl/restore.sh@30 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:49.890 Process with pid 78800 is not found 00:27:49.890 Remove shared memory files 00:27:49.890 23:48:37 ftl.ftl_restore_fast -- ftl/restore.sh@32 -- # killprocess 78800 00:27:49.890 23:48:37 ftl.ftl_restore_fast -- common/autotest_common.sh@950 -- # '[' -z 78800 ']' 00:27:49.890 23:48:37 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # kill -0 78800 00:27:49.890 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (78800) - No such process 00:27:49.890 23:48:37 ftl.ftl_restore_fast -- common/autotest_common.sh@977 -- # echo 'Process with pid 78800 is not found' 00:27:49.890 23:48:37 ftl.ftl_restore_fast -- ftl/restore.sh@33 -- # remove_shm 00:27:49.890 23:48:37 ftl.ftl_restore_fast -- ftl/common.sh@204 -- # echo Remove shared memory files 00:27:49.890 23:48:37 ftl.ftl_restore_fast -- ftl/common.sh@205 -- # rm -f rm -f 00:27:49.890 23:48:37 ftl.ftl_restore_fast -- ftl/common.sh@206 -- # rm -f rm -f /dev/hugepages/ftl_a13dbf6d-6874-41aa-9963-ba6a63aee207_band_md /dev/hugepages/ftl_a13dbf6d-6874-41aa-9963-ba6a63aee207_l2p_l1 /dev/hugepages/ftl_a13dbf6d-6874-41aa-9963-ba6a63aee207_l2p_l2 /dev/hugepages/ftl_a13dbf6d-6874-41aa-9963-ba6a63aee207_l2p_l2_ctx /dev/hugepages/ftl_a13dbf6d-6874-41aa-9963-ba6a63aee207_nvc_md /dev/hugepages/ftl_a13dbf6d-6874-41aa-9963-ba6a63aee207_p2l_pool /dev/hugepages/ftl_a13dbf6d-6874-41aa-9963-ba6a63aee207_sb /dev/hugepages/ftl_a13dbf6d-6874-41aa-9963-ba6a63aee207_sb_shm /dev/hugepages/ftl_a13dbf6d-6874-41aa-9963-ba6a63aee207_trim_bitmap /dev/hugepages/ftl_a13dbf6d-6874-41aa-9963-ba6a63aee207_trim_log /dev/hugepages/ftl_a13dbf6d-6874-41aa-9963-ba6a63aee207_trim_md /dev/hugepages/ftl_a13dbf6d-6874-41aa-9963-ba6a63aee207_vmap 00:27:49.890 23:48:37 ftl.ftl_restore_fast -- ftl/common.sh@207 -- # rm -f rm -f 00:27:49.890 23:48:37 ftl.ftl_restore_fast -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:27:49.890 23:48:37 ftl.ftl_restore_fast -- ftl/common.sh@209 -- # rm -f rm -f 00:27:49.890 ************************************ 00:27:49.890 END TEST ftl_restore_fast 00:27:49.890 ************************************ 00:27:49.890 00:27:49.890 real 4m25.234s 00:27:49.890 user 4m13.523s 00:27:49.890 sys 0m11.610s 00:27:49.890 23:48:37 ftl.ftl_restore_fast -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:49.890 23:48:37 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:27:49.890 Process with pid 72750 is not found 00:27:49.890 23:48:37 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:27:49.890 23:48:37 ftl -- ftl/ftl.sh@14 -- # killprocess 72750 00:27:49.890 23:48:37 ftl -- common/autotest_common.sh@950 -- # '[' -z 72750 ']' 00:27:49.890 23:48:37 ftl -- common/autotest_common.sh@954 -- # kill -0 72750 00:27:49.890 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (72750) - No such process 00:27:49.890 23:48:37 ftl -- common/autotest_common.sh@977 -- # echo 'Process with pid 72750 is not found' 00:27:49.890 23:48:37 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:27:49.890 23:48:37 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=81516 00:27:49.890 23:48:37 ftl -- ftl/ftl.sh@20 -- # waitforlisten 81516 00:27:49.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:27:49.890 23:48:37 ftl -- common/autotest_common.sh@831 -- # '[' -z 81516 ']' 00:27:49.890 23:48:37 ftl -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:49.890 23:48:37 ftl -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:49.890 23:48:37 ftl -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:49.890 23:48:37 ftl -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:49.890 23:48:37 ftl -- common/autotest_common.sh@10 -- # set +x 00:27:49.890 23:48:37 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:49.890 [2024-09-28 23:48:37.958274] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:27:49.890 [2024-09-28 23:48:37.958400] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81516 ] 00:27:50.151 [2024-09-28 23:48:38.108586] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:50.151 [2024-09-28 23:48:38.295837] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:27:51.091 23:48:38 ftl -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:51.091 23:48:38 ftl -- common/autotest_common.sh@864 -- # return 0 00:27:51.091 23:48:38 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:27:51.091 nvme0n1 00:27:51.091 23:48:39 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:27:51.091 23:48:39 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:51.091 23:48:39 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:27:51.351 23:48:39 ftl -- ftl/common.sh@28 -- # stores=9fdfcbc4-b18c-4483-8420-7e85d685f6ea 00:27:51.351 23:48:39 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:27:51.351 23:48:39 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 9fdfcbc4-b18c-4483-8420-7e85d685f6ea 00:27:51.643 23:48:39 ftl -- ftl/ftl.sh@23 -- # killprocess 81516 00:27:51.643 23:48:39 ftl -- common/autotest_common.sh@950 -- # '[' -z 81516 ']' 00:27:51.643 23:48:39 ftl -- common/autotest_common.sh@954 -- # kill -0 81516 00:27:51.643 23:48:39 ftl -- common/autotest_common.sh@955 -- # uname 00:27:51.643 23:48:39 ftl -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:51.643 23:48:39 ftl -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81516 00:27:51.643 killing process with pid 81516 00:27:51.643 23:48:39 ftl -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:51.643 23:48:39 ftl -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:51.643 23:48:39 ftl -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81516' 00:27:51.643 23:48:39 ftl -- common/autotest_common.sh@969 -- # kill 81516 00:27:51.643 23:48:39 ftl -- common/autotest_common.sh@974 -- # wait 81516 00:27:53.601 23:48:41 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:27:53.601 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:53.601 Waiting for block devices as requested 00:27:53.601 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:27:53.863 0000:00:10.0 (1b36 0010): uio_pci_generic -> 
nvme 00:27:53.863 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:27:53.863 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:27:59.153 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:27:59.153 Remove shared memory files 00:27:59.153 23:48:47 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:27:59.153 23:48:47 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:27:59.153 23:48:47 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:27:59.153 23:48:47 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:27:59.153 23:48:47 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:27:59.153 23:48:47 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:27:59.153 23:48:47 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:27:59.153 ************************************ 00:27:59.153 END TEST ftl 00:27:59.153 ************************************ 00:27:59.153 00:27:59.153 real 12m54.679s 00:27:59.153 user 14m55.535s 00:27:59.153 sys 1m13.311s 00:27:59.153 23:48:47 ftl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:59.153 23:48:47 ftl -- common/autotest_common.sh@10 -- # set +x 00:27:59.153 23:48:47 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:27:59.153 23:48:47 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:27:59.153 23:48:47 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:27:59.153 23:48:47 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:27:59.153 23:48:47 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:27:59.153 23:48:47 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:27:59.153 23:48:47 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:27:59.153 23:48:47 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:27:59.153 23:48:47 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:27:59.153 23:48:47 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:27:59.153 23:48:47 -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:59.153 23:48:47 -- common/autotest_common.sh@10 -- # set +x 00:27:59.153 23:48:47 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:27:59.153 23:48:47 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:27:59.153 23:48:47 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:27:59.153 23:48:47 -- common/autotest_common.sh@10 -- # set +x 00:28:00.536 INFO: APP EXITING 00:28:00.536 INFO: killing all VMs 00:28:00.536 INFO: killing vhost app 00:28:00.536 INFO: EXIT DONE 00:28:01.104 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:01.365 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:28:01.365 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:28:01.365 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:28:01.365 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:28:01.936 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:02.197 Cleaning 00:28:02.197 Removing: /var/run/dpdk/spdk0/config 00:28:02.197 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:28:02.197 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:28:02.197 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:28:02.197 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:28:02.197 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:28:02.197 Removing: /var/run/dpdk/spdk0/hugepage_info 00:28:02.197 Removing: /var/run/dpdk/spdk0 00:28:02.197 Removing: /var/run/dpdk/spdk_pid57328 00:28:02.197 Removing: /var/run/dpdk/spdk_pid57530 00:28:02.197 Removing: /var/run/dpdk/spdk_pid57737 
00:28:02.197 Removing: /var/run/dpdk/spdk_pid57834
00:28:02.197 Removing: /var/run/dpdk/spdk_pid57869
00:28:02.197 Removing: /var/run/dpdk/spdk_pid57992
00:28:02.197 Removing: /var/run/dpdk/spdk_pid58010
00:28:02.197 Removing: /var/run/dpdk/spdk_pid58203
00:28:02.197 Removing: /var/run/dpdk/spdk_pid58302
00:28:02.197 Removing: /var/run/dpdk/spdk_pid58398
00:28:02.197 Removing: /var/run/dpdk/spdk_pid58503
00:28:02.197 Removing: /var/run/dpdk/spdk_pid58600
00:28:02.197 Removing: /var/run/dpdk/spdk_pid58640
00:28:02.197 Removing: /var/run/dpdk/spdk_pid58682
00:28:02.197 Removing: /var/run/dpdk/spdk_pid58752
00:28:02.197 Removing: /var/run/dpdk/spdk_pid58864
00:28:02.197 Removing: /var/run/dpdk/spdk_pid59294
00:28:02.197 Removing: /var/run/dpdk/spdk_pid59353
00:28:02.197 Removing: /var/run/dpdk/spdk_pid59405
00:28:02.197 Removing: /var/run/dpdk/spdk_pid59421
00:28:02.197 Removing: /var/run/dpdk/spdk_pid59523
00:28:02.197 Removing: /var/run/dpdk/spdk_pid59534
00:28:02.197 Removing: /var/run/dpdk/spdk_pid59630
00:28:02.197 Removing: /var/run/dpdk/spdk_pid59646
00:28:02.197 Removing: /var/run/dpdk/spdk_pid59699
00:28:02.197 Removing: /var/run/dpdk/spdk_pid59717
00:28:02.197 Removing: /var/run/dpdk/spdk_pid59770
00:28:02.197 Removing: /var/run/dpdk/spdk_pid59788
00:28:02.197 Removing: /var/run/dpdk/spdk_pid59948
00:28:02.197 Removing: /var/run/dpdk/spdk_pid59985
00:28:02.197 Removing: /var/run/dpdk/spdk_pid60068
00:28:02.197 Removing: /var/run/dpdk/spdk_pid60251
00:28:02.197 Removing: /var/run/dpdk/spdk_pid60335
00:28:02.197 Removing: /var/run/dpdk/spdk_pid60372
00:28:02.197 Removing: /var/run/dpdk/spdk_pid60816
00:28:02.458 Removing: /var/run/dpdk/spdk_pid60916
00:28:02.458 Removing: /var/run/dpdk/spdk_pid61039
00:28:02.458 Removing: /var/run/dpdk/spdk_pid61092
00:28:02.458 Removing: /var/run/dpdk/spdk_pid61123
00:28:02.458 Removing: /var/run/dpdk/spdk_pid61207
00:28:02.458 Removing: /var/run/dpdk/spdk_pid61839
00:28:02.458 Removing: /var/run/dpdk/spdk_pid61881
00:28:02.458 Removing: /var/run/dpdk/spdk_pid62351
00:28:02.458 Removing: /var/run/dpdk/spdk_pid62449
00:28:02.458 Removing: /var/run/dpdk/spdk_pid62570
00:28:02.458 Removing: /var/run/dpdk/spdk_pid62623
00:28:02.458 Removing: /var/run/dpdk/spdk_pid62654
00:28:02.458 Removing: /var/run/dpdk/spdk_pid62685
00:28:02.458 Removing: /var/run/dpdk/spdk_pid64521
00:28:02.458 Removing: /var/run/dpdk/spdk_pid64653
00:28:02.458 Removing: /var/run/dpdk/spdk_pid64662
00:28:02.458 Removing: /var/run/dpdk/spdk_pid64674
00:28:02.458 Removing: /var/run/dpdk/spdk_pid64720
00:28:02.458 Removing: /var/run/dpdk/spdk_pid64724
00:28:02.458 Removing: /var/run/dpdk/spdk_pid64736
00:28:02.458 Removing: /var/run/dpdk/spdk_pid64781
00:28:02.458 Removing: /var/run/dpdk/spdk_pid64785
00:28:02.458 Removing: /var/run/dpdk/spdk_pid64797
00:28:02.458 Removing: /var/run/dpdk/spdk_pid64842
00:28:02.458 Removing: /var/run/dpdk/spdk_pid64846
00:28:02.458 Removing: /var/run/dpdk/spdk_pid64858
00:28:02.458 Removing: /var/run/dpdk/spdk_pid66225
00:28:02.458 Removing: /var/run/dpdk/spdk_pid66322
00:28:02.458 Removing: /var/run/dpdk/spdk_pid67723
00:28:02.458 Removing: /var/run/dpdk/spdk_pid69086
00:28:02.458 Removing: /var/run/dpdk/spdk_pid69180
00:28:02.458 Removing: /var/run/dpdk/spdk_pid69256
00:28:02.458 Removing: /var/run/dpdk/spdk_pid69338
00:28:02.458 Removing: /var/run/dpdk/spdk_pid69441
00:28:02.458 Removing: /var/run/dpdk/spdk_pid69515
00:28:02.458 Removing: /var/run/dpdk/spdk_pid69657
00:28:02.458 Removing: /var/run/dpdk/spdk_pid70021
00:28:02.458 Removing: /var/run/dpdk/spdk_pid70052
00:28:02.458 Removing: /var/run/dpdk/spdk_pid70517
00:28:02.458 Removing: /var/run/dpdk/spdk_pid70696
00:28:02.458 Removing: /var/run/dpdk/spdk_pid70797
00:28:02.458 Removing: /var/run/dpdk/spdk_pid70908
00:28:02.458 Removing: /var/run/dpdk/spdk_pid70963
00:28:02.458 Removing: /var/run/dpdk/spdk_pid70994
00:28:02.458 Removing: /var/run/dpdk/spdk_pid71284
00:28:02.458 Removing: /var/run/dpdk/spdk_pid71333
00:28:02.458 Removing: /var/run/dpdk/spdk_pid71406
00:28:02.458 Removing: /var/run/dpdk/spdk_pid71799
00:28:02.458 Removing: /var/run/dpdk/spdk_pid71945
00:28:02.458 Removing: /var/run/dpdk/spdk_pid72750
00:28:02.458 Removing: /var/run/dpdk/spdk_pid72882
00:28:02.458 Removing: /var/run/dpdk/spdk_pid73041
00:28:02.458 Removing: /var/run/dpdk/spdk_pid73138
00:28:02.458 Removing: /var/run/dpdk/spdk_pid73432
00:28:02.458 Removing: /var/run/dpdk/spdk_pid73668
00:28:02.458 Removing: /var/run/dpdk/spdk_pid74010
00:28:02.458 Removing: /var/run/dpdk/spdk_pid74192
00:28:02.458 Removing: /var/run/dpdk/spdk_pid74280
00:28:02.458 Removing: /var/run/dpdk/spdk_pid74327
00:28:02.458 Removing: /var/run/dpdk/spdk_pid74427
00:28:02.458 Removing: /var/run/dpdk/spdk_pid74453
00:28:02.458 Removing: /var/run/dpdk/spdk_pid74506
00:28:02.458 Removing: /var/run/dpdk/spdk_pid74726
00:28:02.458 Removing: /var/run/dpdk/spdk_pid74944
00:28:02.458 Removing: /var/run/dpdk/spdk_pid75263
00:28:02.458 Removing: /var/run/dpdk/spdk_pid75542
00:28:02.458 Removing: /var/run/dpdk/spdk_pid75806
00:28:02.458 Removing: /var/run/dpdk/spdk_pid76149
00:28:02.458 Removing: /var/run/dpdk/spdk_pid76280
00:28:02.458 Removing: /var/run/dpdk/spdk_pid76356
00:28:02.458 Removing: /var/run/dpdk/spdk_pid76736
00:28:02.458 Removing: /var/run/dpdk/spdk_pid76789
00:28:02.458 Removing: /var/run/dpdk/spdk_pid77103
00:28:02.458 Removing: /var/run/dpdk/spdk_pid77415
00:28:02.458 Removing: /var/run/dpdk/spdk_pid77780
00:28:02.458 Removing: /var/run/dpdk/spdk_pid77892
00:28:02.458 Removing: /var/run/dpdk/spdk_pid77941
00:28:02.458 Removing: /var/run/dpdk/spdk_pid78007
00:28:02.458 Removing: /var/run/dpdk/spdk_pid78057
00:28:02.458 Removing: /var/run/dpdk/spdk_pid78121
00:28:02.458 Removing: /var/run/dpdk/spdk_pid78310
00:28:02.458 Removing: /var/run/dpdk/spdk_pid78384
00:28:02.458 Removing: /var/run/dpdk/spdk_pid78451
00:28:02.458 Removing: /var/run/dpdk/spdk_pid78529
00:28:02.458 Removing: /var/run/dpdk/spdk_pid78565
00:28:02.458 Removing: /var/run/dpdk/spdk_pid78626
00:28:02.458 Removing: /var/run/dpdk/spdk_pid78800
00:28:02.458 Removing: /var/run/dpdk/spdk_pid79020
00:28:02.459 Removing: /var/run/dpdk/spdk_pid79604
00:28:02.459 Removing: /var/run/dpdk/spdk_pid80289
00:28:02.459 Removing: /var/run/dpdk/spdk_pid80794
00:28:02.459 Removing: /var/run/dpdk/spdk_pid81516
00:28:02.459 Clean
00:28:02.720 23:48:50 -- common/autotest_common.sh@1451 -- # return 0
00:28:02.720 23:48:50 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup
00:28:02.720 23:48:50 -- common/autotest_common.sh@730 -- # xtrace_disable
00:28:02.720 23:48:50 -- common/autotest_common.sh@10 -- # set +x
00:28:02.720 23:48:50 -- spdk/autotest.sh@387 -- # timing_exit autotest
00:28:02.720 23:48:50 -- common/autotest_common.sh@730 -- # xtrace_disable
00:28:02.720 23:48:50 -- common/autotest_common.sh@10 -- # set +x
00:28:02.720 23:48:50 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:28:02.720 23:48:50 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:28:02.720 23:48:50 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:28:02.720 23:48:50 -- spdk/autotest.sh@392 -- # [[ y == y ]]
00:28:02.720 23:48:50 -- spdk/autotest.sh@394 -- # hostname
00:28:02.720 23:48:50 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:28:02.980 geninfo: WARNING: invalid characters removed from testname!
00:28:29.558 23:49:16 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:28:32.119 23:49:19 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:28:34.670 23:49:22 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:28:36.584 23:49:24 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:28:39.121 23:49:26 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:28:41.032 23:49:28 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:28:42.943 23:49:30 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:28:42.943 23:49:30 -- common/autotest_common.sh@1680 -- $ [[ y == y ]]
00:28:42.943 23:49:30 -- common/autotest_common.sh@1681 -- $ awk '{print $NF}'
00:28:42.943 23:49:30 -- common/autotest_common.sh@1681 -- $ lcov --version
00:28:42.943 23:49:30 -- common/autotest_common.sh@1681 -- $ lt 1.15 2
00:28:42.943 23:49:30 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2
00:28:42.943 23:49:30 -- scripts/common.sh@333 -- $ local ver1 ver1_l
00:28:42.943 23:49:30 -- scripts/common.sh@334 -- $ local ver2 ver2_l
00:28:42.943 23:49:30 -- scripts/common.sh@336 -- $ IFS=.-:
00:28:42.943 23:49:30 -- scripts/common.sh@336 -- $ read -ra ver1
00:28:42.943 23:49:30 -- scripts/common.sh@337 -- $ IFS=.-:
00:28:42.943 23:49:30 -- scripts/common.sh@337 -- $ read -ra ver2
00:28:42.943 23:49:30 -- scripts/common.sh@338 -- $ local 'op=<'
00:28:42.943 23:49:30 -- scripts/common.sh@340 -- $ ver1_l=2
00:28:42.943 23:49:30 -- scripts/common.sh@341 -- $ ver2_l=1
00:28:42.943 23:49:30 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
00:28:42.943 23:49:30 -- scripts/common.sh@344 -- $ case "$op" in
00:28:42.943 23:49:30 -- scripts/common.sh@345 -- $ : 1
00:28:42.943 23:49:30 -- scripts/common.sh@364 -- $ (( v = 0 ))
00:28:42.943 23:49:30 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:28:42.943 23:49:30 -- scripts/common.sh@365 -- $ decimal 1
00:28:42.943 23:49:30 -- scripts/common.sh@353 -- $ local d=1
00:28:42.943 23:49:30 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]]
00:28:42.943 23:49:30 -- scripts/common.sh@355 -- $ echo 1
00:28:42.943 23:49:30 -- scripts/common.sh@365 -- $ ver1[v]=1
00:28:42.943 23:49:30 -- scripts/common.sh@366 -- $ decimal 2
00:28:42.943 23:49:30 -- scripts/common.sh@353 -- $ local d=2
00:28:42.943 23:49:30 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]]
00:28:42.943 23:49:30 -- scripts/common.sh@355 -- $ echo 2
00:28:42.943 23:49:31 -- scripts/common.sh@366 -- $ ver2[v]=2
00:28:42.943 23:49:31 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:28:42.943 23:49:31 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] ))
00:28:42.943 23:49:31 -- scripts/common.sh@368 -- $ return 0
00:28:42.943 23:49:31 -- common/autotest_common.sh@1682 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:28:42.943 23:49:31 -- common/autotest_common.sh@1694 -- $ export 'LCOV_OPTS=
00:28:42.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:28:42.943 --rc genhtml_branch_coverage=1
00:28:42.943 --rc genhtml_function_coverage=1
00:28:42.943 --rc genhtml_legend=1
00:28:42.943 --rc geninfo_all_blocks=1
00:28:42.943 --rc geninfo_unexecuted_blocks=1
00:28:42.943 
00:28:42.943 '
00:28:42.943 23:49:31 -- common/autotest_common.sh@1694 -- $ LCOV_OPTS='
00:28:42.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:28:42.943 --rc genhtml_branch_coverage=1
00:28:42.943 --rc genhtml_function_coverage=1
00:28:42.943 --rc genhtml_legend=1
00:28:42.943 --rc geninfo_all_blocks=1
00:28:42.943 --rc geninfo_unexecuted_blocks=1
00:28:42.943 
00:28:42.943 '
00:28:42.943 23:49:31 -- common/autotest_common.sh@1695 -- $ export 'LCOV=lcov
00:28:42.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:28:42.943 --rc genhtml_branch_coverage=1
00:28:42.943 --rc genhtml_function_coverage=1
00:28:42.943 --rc genhtml_legend=1
00:28:42.943 --rc geninfo_all_blocks=1
00:28:42.943 --rc geninfo_unexecuted_blocks=1
00:28:42.943 
00:28:42.943 '
00:28:42.943 23:49:31 -- common/autotest_common.sh@1695 -- $ LCOV='lcov
00:28:42.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:28:42.943 --rc genhtml_branch_coverage=1
00:28:42.943 --rc genhtml_function_coverage=1
00:28:42.943 --rc genhtml_legend=1
00:28:42.943 --rc geninfo_all_blocks=1
00:28:42.943 --rc geninfo_unexecuted_blocks=1
00:28:42.943 
00:28:42.943 '
00:28:42.943 23:49:31 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:28:42.943 23:49:31 -- scripts/common.sh@15 -- $ shopt -s extglob
00:28:42.943 23:49:31 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:28:42.943 23:49:31 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:28:42.943 23:49:31 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:28:42.943 23:49:31 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:42.943 23:49:31 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:42.943 23:49:31 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:42.943 23:49:31 -- paths/export.sh@5 -- $ export PATH
00:28:42.943 23:49:31 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:42.943 23:49:31 -- common/autobuild_common.sh@478 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:28:42.943 23:49:31 -- common/autobuild_common.sh@479 -- $ date +%s
00:28:42.943 23:49:31 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1727567371.XXXXXX
00:28:42.943 23:49:31 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1727567371.jdpvF3
00:28:42.943 23:49:31 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]]
00:28:42.943 23:49:31 -- common/autobuild_common.sh@485 -- $ '[' -n '' ']'
00:28:42.943 23:49:31 -- common/autobuild_common.sh@488 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:28:42.943 23:49:31 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:28:42.943 23:49:31 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:28:42.943 23:49:31 -- common/autobuild_common.sh@495 -- $ get_config_params
00:28:42.943 23:49:31 -- common/autotest_common.sh@407 -- $ xtrace_disable
00:28:42.943 23:49:31 -- common/autotest_common.sh@10 -- $ set +x
00:28:42.943 23:49:31 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme'
00:28:42.943 23:49:31 -- common/autobuild_common.sh@497 -- $ start_monitor_resources
00:28:42.943 23:49:31 -- pm/common@17 -- $ local monitor
00:28:42.943 23:49:31 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:28:42.943 23:49:31 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:28:42.943 23:49:31 -- pm/common@25 -- $ sleep 1
00:28:42.943 23:49:31 -- pm/common@21 -- $ date +%s
00:28:42.943 23:49:31 -- pm/common@21 -- $ date +%s
00:28:42.943 23:49:31 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1727567371
00:28:42.943 23:49:31 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1727567371
00:28:42.943 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1727567371_collect-vmstat.pm.log
00:28:42.943 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1727567371_collect-cpu-load.pm.log
00:28:43.884 23:49:32 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT
00:28:43.884 23:49:32 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]]
00:28:43.884 23:49:32 -- spdk/autopackage.sh@14 -- $ timing_finish
00:28:43.884 23:49:32 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:28:43.884 23:49:32 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:28:43.884 23:49:32 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:28:44.145 23:49:32 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:28:44.145 23:49:32 -- pm/common@29 -- $ signal_monitor_resources TERM
00:28:44.145 23:49:32 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:28:44.145 23:49:32 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:28:44.145 23:49:32 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]]
00:28:44.145 23:49:32 -- pm/common@44 -- $ pid=83222
00:28:44.145 23:49:32 -- pm/common@50 -- $ kill -TERM 83222
00:28:44.145 23:49:32 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:28:44.145 23:49:32 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]]
00:28:44.145 23:49:32 -- pm/common@44 -- $ pid=83223
00:28:44.145 23:49:32 -- pm/common@50 -- $ kill -TERM 83223
00:28:44.145 + [[ -n 5043 ]]
00:28:44.145 + sudo kill 5043
00:28:44.155 [Pipeline] }
00:28:44.171 [Pipeline] // timeout
00:28:44.176 [Pipeline] }
00:28:44.191 [Pipeline] // stage
00:28:44.196 [Pipeline] }
00:28:44.211 [Pipeline] // catchError
00:28:44.220 [Pipeline] stage
00:28:44.223 [Pipeline] { (Stop VM)
00:28:44.235 [Pipeline] sh
00:28:44.519 + vagrant halt
00:28:47.060 ==> default: Halting domain...
00:28:52.414 [Pipeline] sh
00:28:52.695 + vagrant destroy -f
00:28:55.234 ==> default: Removing domain...
00:28:55.821 [Pipeline] sh
00:28:56.105 + mv output /var/jenkins/workspace/nvme-vg-autotest/output
00:28:56.116 [Pipeline] }
00:28:56.131 [Pipeline] // stage
00:28:56.137 [Pipeline] }
00:28:56.151 [Pipeline] // dir
00:28:56.156 [Pipeline] }
00:28:56.171 [Pipeline] // wrap
00:28:56.176 [Pipeline] }
00:28:56.189 [Pipeline] // catchError
00:28:56.198 [Pipeline] stage
00:28:56.200 [Pipeline] { (Epilogue)
00:28:56.212 [Pipeline] sh
00:28:56.498 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:29:01.782 [Pipeline] catchError
00:29:01.784 [Pipeline] {
00:29:01.797 [Pipeline] sh
00:29:02.085 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:29:02.085 Artifacts sizes are good
00:29:02.096 [Pipeline] }
00:29:02.111 [Pipeline] // catchError
00:29:02.123 [Pipeline] archiveArtifacts
00:29:02.130 Archiving artifacts
00:29:02.249 [Pipeline] cleanWs
00:29:02.259 [WS-CLEANUP] Deleting project workspace...
00:29:02.260 [WS-CLEANUP] Deferred wipeout is used...
00:29:02.267 [WS-CLEANUP] done
00:29:02.269 [Pipeline] }
00:29:02.284 [Pipeline] // stage
00:29:02.288 [Pipeline] }
00:29:02.302 [Pipeline] // node
00:29:02.307 [Pipeline] End of Pipeline
00:29:02.348 Finished: SUCCESS