00:00:00.002 Started by upstream project "autotest-nightly" build number 4125
00:00:00.002 originally caused by:
00:00:00.003 Started by upstream project "nightly-trigger" build number 3487
00:00:00.003 originally caused by:
00:00:00.003 Started by timer
00:00:00.003 Started by timer
00:00:00.135 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.136 The recommended git tool is: git
00:00:00.136 using credential 00000000-0000-0000-0000-000000000002
00:00:00.138 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.169 Fetching changes from the remote Git repository
00:00:00.172 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.215 Using shallow fetch with depth 1
00:00:00.215 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.215 > git --version # timeout=10
00:00:00.263 > git --version # 'git version 2.39.2'
00:00:00.263 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.304 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.304 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:10.373 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:10.383 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:10.394 Checking out Revision 7510e71a2b3ec6fca98e4ec196065590f900d444 (FETCH_HEAD)
00:00:10.394 > git config core.sparsecheckout # timeout=10
00:00:10.403 > git read-tree -mu HEAD # timeout=10
00:00:10.418 > git checkout -f 7510e71a2b3ec6fca98e4ec196065590f900d444 # timeout=5
00:00:10.434 Commit message: "kid: add issue 3541"
00:00:10.434 > git rev-list --no-walk 7510e71a2b3ec6fca98e4ec196065590f900d444 # timeout=10
00:00:10.524 [Pipeline] Start of Pipeline
00:00:10.536 [Pipeline] library
00:00:10.538 Loading library shm_lib@master
00:00:10.539 Library shm_lib@master is cached. Copying from home.
00:00:10.550 [Pipeline] node
00:00:10.561 Running on VM-host-SM38 in /var/jenkins/workspace/nvme-vg-autotest
00:00:10.563 [Pipeline] {
00:00:10.571 [Pipeline] catchError
00:00:10.572 [Pipeline] {
00:00:10.585 [Pipeline] wrap
00:00:10.593 [Pipeline] {
00:00:10.600 [Pipeline] stage
00:00:10.602 [Pipeline] { (Prologue)
00:00:10.618 [Pipeline] echo
00:00:10.619 Node: VM-host-SM38
00:00:10.625 [Pipeline] cleanWs
00:00:10.633 [WS-CLEANUP] Deleting project workspace...
00:00:10.633 [WS-CLEANUP] Deferred wipeout is used...
00:00:10.640 [WS-CLEANUP] done
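The checkout above pins the job to a single commit: Jenkins fetches only one commit deep (--depth=1) from the Gerrit-hosted pool repo and force-checks-out the fetched revision, so the job config cannot drift mid-run. A minimal sketch of the same pattern outside Jenkins (URL and ref copied from the log; the directory name is illustrative):

  git init jbp && cd jbp
  git fetch --tags --force --depth=1 https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master
  git checkout -f FETCH_HEAD   # detached HEAD at the pinned revision
  git log --oneline -1         # should show 7510e71 "kid: add issue 3541"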
00:00:10.813 [Pipeline] setCustomBuildProperty
00:00:10.896 [Pipeline] httpRequest
00:00:11.475 [Pipeline] echo
00:00:11.477 Sorcerer 10.211.164.101 is alive
00:00:11.484 [Pipeline] retry
00:00:11.485 [Pipeline] {
00:00:11.498 [Pipeline] httpRequest
00:00:11.502 HttpMethod: GET
00:00:11.502 URL: http://10.211.164.101/packages/jbp_7510e71a2b3ec6fca98e4ec196065590f900d444.tar.gz
00:00:11.503 Sending request to url: http://10.211.164.101/packages/jbp_7510e71a2b3ec6fca98e4ec196065590f900d444.tar.gz
00:00:11.527 Response Code: HTTP/1.1 200 OK
00:00:11.527 Success: Status code 200 is in the accepted range: 200,404
00:00:11.528 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_7510e71a2b3ec6fca98e4ec196065590f900d444.tar.gz
00:00:15.732 [Pipeline] }
00:00:15.749 [Pipeline] // retry
00:00:15.756 [Pipeline] sh
00:00:16.043 + tar --no-same-owner -xf jbp_7510e71a2b3ec6fca98e4ec196065590f900d444.tar.gz
00:00:16.060 [Pipeline] httpRequest
00:00:16.666 [Pipeline] echo
00:00:16.668 Sorcerer 10.211.164.101 is alive
00:00:16.678 [Pipeline] retry
00:00:16.681 [Pipeline] {
00:00:16.700 [Pipeline] httpRequest
00:00:16.706 HttpMethod: GET
00:00:16.706 URL: http://10.211.164.101/packages/spdk_09cc66129742c68eb8ce46c42225a27c3c933a14.tar.gz
00:00:16.707 Sending request to url: http://10.211.164.101/packages/spdk_09cc66129742c68eb8ce46c42225a27c3c933a14.tar.gz
00:00:16.728 Response Code: HTTP/1.1 200 OK
00:00:16.729 Success: Status code 200 is in the accepted range: 200,404
00:00:16.729 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_09cc66129742c68eb8ce46c42225a27c3c933a14.tar.gz
00:00:35.669 [Pipeline] }
00:00:35.690 [Pipeline] // retry
00:00:35.698 [Pipeline] sh
00:00:35.984 + tar --no-same-owner -xf spdk_09cc66129742c68eb8ce46c42225a27c3c933a14.tar.gz
00:00:38.537 [Pipeline] sh
00:00:38.822 + git -C spdk log --oneline -n5
00:00:38.822 09cc66129 test/unit: add mixed busy/idle mock poller function in reactor_ut
00:00:38.822 a67b3561a dpdk: update submodule to include alarm_cancel fix
00:00:38.822 43f6d3385 nvmf: remove use of STAILQ for last_wqe events
00:00:38.822 9645421c5 nvmf: rename nvmf_rdma_qpair_process_ibv_event()
00:00:38.822 e6da32ee1 nvmf: rename nvmf_rdma_send_qpair_async_event()
00:00:38.843 [Pipeline] writeFile
00:00:38.859 [Pipeline] sh
00:00:39.149 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:00:39.162 [Pipeline] sh
00:00:39.447 + cat autorun-spdk.conf
00:00:39.447 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:39.447 SPDK_TEST_NVME=1
00:00:39.447 SPDK_TEST_FTL=1
00:00:39.447 SPDK_TEST_ISAL=1
00:00:39.447 SPDK_RUN_ASAN=1
00:00:39.447 SPDK_RUN_UBSAN=1
00:00:39.447 SPDK_TEST_XNVME=1
00:00:39.447 SPDK_TEST_NVME_FDP=1
00:00:39.447 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:00:39.455 RUN_NIGHTLY=1
00:00:39.457 [Pipeline] }
00:00:39.471 [Pipeline] // stage
00:00:39.485 [Pipeline] stage
00:00:39.488 [Pipeline] { (Run VM)
00:00:39.501 [Pipeline] sh
00:00:39.786 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:00:39.786 + echo 'Start stage prepare_nvme.sh'
Start stage prepare_nvme.sh
00:00:39.786 + [[ -n 3 ]]
+ disk_prefix=ex3
+ [[ -n /var/jenkins/workspace/nvme-vg-autotest ]]
+ [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]]
+ source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf
++ SPDK_RUN_FUNCTIONAL_TEST=1
++ SPDK_TEST_NVME=1
++ SPDK_TEST_FTL=1
++ SPDK_TEST_ISAL=1
00:00:39.786 ++ SPDK_RUN_ASAN=1
++ SPDK_RUN_UBSAN=1
++ SPDK_TEST_XNVME=1
++ SPDK_TEST_NVME_FDP=1
++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
++ RUN_NIGHTLY=1
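autorun-spdk.conf is plain KEY=value shell, which is why prepare_nvme.sh can simply source it (the ++ trace lines above) and branch on the flags. A minimal sketch of a consumer, assuming the conf file shown above:

  source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf
  if (( SPDK_TEST_FTL == 1 )); then
      echo "FTL disk image will be provisioned"
  fi
  if (( SPDK_TEST_NVME_FDP == 1 )); then
      echo "FDP disk image will be provisioned"
  fi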
+ cd /var/jenkins/workspace/nvme-vg-autotest
+ nvme_files=()
+ declare -A nvme_files
+ backend_dir=/var/lib/libvirt/images/backends
+ nvme_files['nvme.img']=5G
+ nvme_files['nvme-cmb.img']=5G
+ nvme_files['nvme-multi0.img']=4G
+ nvme_files['nvme-multi1.img']=4G
+ nvme_files['nvme-multi2.img']=4G
+ nvme_files['nvme-openstack.img']=8G
+ nvme_files['nvme-zns.img']=5G
+ (( SPDK_TEST_NVME_PMR == 1 ))
+ (( SPDK_TEST_FTL == 1 ))
+ nvme_files["nvme-ftl.img"]=6G
+ (( SPDK_TEST_NVME_FDP == 1 ))
+ nvme_files["nvme-fdp.img"]=1G
+ [[ ! -d /var/lib/libvirt/images/backends ]]
+ for nvme in "${!nvme_files[@]}"
+ sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi2.img -s 4G
Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
+ for nvme in "${!nvme_files[@]}"
+ sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-ftl.img -s 6G
00:00:40.049 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc
+ for nvme in "${!nvme_files[@]}"
+ sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-cmb.img -s 5G
00:00:40.049 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
+ for nvme in "${!nvme_files[@]}"
+ sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-openstack.img -s 8G
00:00:40.049 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
+ for nvme in "${!nvme_files[@]}"
+ sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-zns.img -s 5G
00:00:40.049 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
+ for nvme in "${!nvme_files[@]}"
+ sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi1.img -s 4G
00:00:40.049 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
+ for nvme in "${!nvme_files[@]}"
+ sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi0.img -s 4G
00:00:40.049 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
+ for nvme in "${!nvme_files[@]}"
+ sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-fdp.img -s 1G
00:00:40.311 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc
+ for nvme in "${!nvme_files[@]}"
+ sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme.img -s 5G
00:00:40.311 Formatting '/var/lib/libvirt/images/backends/ex3-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:00:40.311 ++ sudo grep -rl ex3-nvme.img /etc/libvirt/qemu
00:00:40.311 + echo 'End stage prepare_nvme.sh'
End stage prepare_nvme.sh
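The stage builds its disk list as a bash associative array keyed by image name, with the size as the value; the FTL and FDP entries are appended only when the matching SPDK_TEST_* flag is set, and the loop then walks the keys. A condensed sketch of that pattern (create_img is a stand-in for spdk/scripts/vagrant/create_nvme_img.sh):

  declare -A nvme_files=(
      ['nvme.img']=5G
      ['nvme-multi0.img']=4G
  )
  (( SPDK_TEST_FTL == 1 )) && nvme_files['nvme-ftl.img']=6G
  (( SPDK_TEST_NVME_FDP == 1 )) && nvme_files['nvme-fdp.img']=1G
  for nvme in "${!nvme_files[@]}"; do
      create_img -n "/var/lib/libvirt/images/backends/ex3-$nvme" -s "${nvme_files[$nvme]}"
  done

Note that the iteration order of "${!nvme_files[@]}" is unspecified, which is why multi2 is formatted before ftl in the log above.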
00:00:40.323 [Pipeline] sh
00:00:40.605 + DISTRO=fedora39
+ CPUS=10
+ RAM=12288
+ jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:00:40.606 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex3-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex3-nvme.img -b /var/lib/libvirt/images/backends/ex3-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex3-nvme-multi1.img:/var/lib/libvirt/images/backends/ex3-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex3-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39
00:00:40.606
00:00:40.606 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant
00:00:40.606 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk
00:00:40.606 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest
00:00:40.606 HELP=0
00:00:40.606 DRY_RUN=0
00:00:40.606 NVME_FILE=/var/lib/libvirt/images/backends/ex3-nvme-ftl.img,/var/lib/libvirt/images/backends/ex3-nvme.img,/var/lib/libvirt/images/backends/ex3-nvme-multi0.img,/var/lib/libvirt/images/backends/ex3-nvme-fdp.img,
00:00:40.606 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme,
00:00:40.606 NVME_AUTO_CREATE=0
00:00:40.606 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex3-nvme-multi1.img:/var/lib/libvirt/images/backends/ex3-nvme-multi2.img,,
00:00:40.606 NVME_CMB=,,,,
00:00:40.606 NVME_PMR=,,,,
00:00:40.606 NVME_ZNS=,,,,
00:00:40.606 NVME_MS=true,,,,
00:00:40.606 NVME_FDP=,,,on,
00:00:40.606 SPDK_VAGRANT_DISTRO=fedora39
00:00:40.606 SPDK_VAGRANT_VMCPU=10
00:00:40.606 SPDK_VAGRANT_VMRAM=12288
00:00:40.606 SPDK_VAGRANT_PROVIDER=libvirt
00:00:40.606 SPDK_VAGRANT_HTTP_PROXY=
00:00:40.606 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:00:40.606 SPDK_OPENSTACK_NETWORK=0
00:00:40.606 VAGRANT_PACKAGE_BOX=0
00:00:40.606 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:00:40.606 FORCE_DISTRO=true
00:00:40.606 VAGRANT_BOX_VERSION=
00:00:40.606 EXTRA_VAGRANTFILES=
00:00:40.606 NIC_MODEL=e1000
00:00:40.606
00:00:40.606 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt'
00:00:40.606 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest
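Each -b argument above packs one emulated disk into a comma-separated spec, and the Setup summary re-prints those fields column-wise as NVME_FILE, NVME_DISKS_TYPE, NVME_DISKS_NAMESPACES, NVME_CMB, NVME_PMR, NVME_ZNS, NVME_MS and NVME_FDP (note `true` in the metadata field for the FTL disk and `on` in the trailing field for the FDP disk). A hypothetical sketch of splitting one spec; the field order here is inferred from that summary, and the real parsing lives in the vagrant scripts and may differ:

  spec="/var/lib/libvirt/images/backends/ex3-nvme-fdp.img,nvme,,,,,,on"
  IFS=',' read -r img type namespaces cmb pmr zns ms fdp <<< "$spec"
  echo "image=$img type=$type ms=${ms:-false} fdp=${fdp:-off}"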
00:00:43.151 Bringing machine 'default' up with 'libvirt' provider...
00:00:43.412 ==> default: Creating image (snapshot of base box volume).
00:00:43.673 ==> default: Creating domain with the following settings...
00:00:43.673 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1727486019_19b97c72ea67e431f3f1
00:00:43.673 ==> default: -- Domain type: kvm
00:00:43.673 ==> default: -- Cpus: 10
00:00:43.673 ==> default: -- Feature: acpi
00:00:43.673 ==> default: -- Feature: apic
00:00:43.673 ==> default: -- Feature: pae
00:00:43.673 ==> default: -- Memory: 12288M
00:00:43.673 ==> default: -- Memory Backing: hugepages:
00:00:43.673 ==> default: -- Management MAC:
00:00:43.673 ==> default: -- Loader:
00:00:43.673 ==> default: -- Nvram:
00:00:43.673 ==> default: -- Base box: spdk/fedora39
00:00:43.673 ==> default: -- Storage pool: default
00:00:43.673 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1727486019_19b97c72ea67e431f3f1.img (20G)
00:00:43.673 ==> default: -- Volume Cache: default
00:00:43.673 ==> default: -- Kernel:
00:00:43.673 ==> default: -- Initrd:
00:00:43.673 ==> default: -- Graphics Type: vnc
00:00:43.673 ==> default: -- Graphics Port: -1
00:00:43.673 ==> default: -- Graphics IP: 127.0.0.1
00:00:43.673 ==> default: -- Graphics Password: Not defined
00:00:43.673 ==> default: -- Video Type: cirrus
00:00:43.673 ==> default: -- Video VRAM: 9216
00:00:43.673 ==> default: -- Sound Type:
00:00:43.673 ==> default: -- Keymap: en-us
00:00:43.673 ==> default: -- TPM Path:
00:00:43.673 ==> default: -- INPUT: type=mouse, bus=ps2
00:00:43.673 ==> default: -- Command line args:
00:00:43.673 ==> default: -> value=-device,
00:00:43.673 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:00:43.673 ==> default: -> value=-drive,
00:00:43.673 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-ftl.img,if=none,id=nvme-0-drive0,
00:00:43.673 ==> default: -> value=-device,
00:00:43.673 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64,
00:00:43.673 ==> default: -> value=-device,
00:00:43.673 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:00:43.673 ==> default: -> value=-drive,
00:00:43.673 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme.img,if=none,id=nvme-1-drive0,
00:00:43.673 ==> default: -> value=-device,
00:00:43.673 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:43.673 ==> default: -> value=-device,
00:00:43.673 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12,
00:00:43.673 ==> default: -> value=-drive,
00:00:43.673 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi0.img,if=none,id=nvme-2-drive0,
00:00:43.673 ==> default: -> value=-device,
00:00:43.673 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:43.673 ==> default: -> value=-drive,
00:00:43.673 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi1.img,if=none,id=nvme-2-drive1,
00:00:43.673 ==> default: -> value=-device,
00:00:43.673 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:43.673 ==> default: -> value=-drive,
00:00:43.673 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi2.img,if=none,id=nvme-2-drive2,
00:00:43.673 ==> default: -> value=-device,
00:00:43.673 ==> default: -> value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:43.673 ==> default: -> value=-device,
00:00:43.673 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8,
00:00:43.673 ==> default: -> value=-device,
00:00:43.673 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3,
00:00:43.673 ==> default: -> value=-drive,
00:00:43.673 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-fdp.img,if=none,id=nvme-3-drive0,
00:00:43.673 ==> default: -> value=-device,
00:00:43.673 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
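The last device group above is the one SPDK_TEST_NVME_FDP exists for: a nvme-subsys device enables Flexible Data Placement at the subsystem level (fdp.runs sizes the reclaim units, fdp.nrg the reclaim groups, fdp.nruh the reclaim unit handles), controller nvme-3 joins it via subsys=, and the namespace attaches through nvme-ns as usual. Trimmed to a standalone sketch (emulator path and image file are placeholders; all device parameters are taken from the log):

  qemu-system-x86_64 \
      -device nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8 \
      -device nvme,id=nvme-3,serial=12343,subsys=fdp-subsys3 \
      -drive format=raw,file=nvme-fdp.img,if=none,id=nvme-3-drive0 \
      -device nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,logical_block_size=4096,physical_block_size=4096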
00:00:43.673 ==> default: Creating shared folders metadata...
00:00:43.673 ==> default: Starting domain.
00:00:46.222 ==> default: Waiting for domain to get an IP address...
00:01:04.345 ==> default: Waiting for SSH to become available...
00:01:04.345 ==> default: Configuring and enabling network interfaces...
00:01:08.557 default: SSH address: 192.168.121.29:22
00:01:08.557 default: SSH username: vagrant
00:01:08.557 default: SSH auth method: private key
00:01:10.470 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:01:18.613 ==> default: Mounting SSHFS shared folder...
00:01:20.529 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:01:20.529 ==> default: Checking Mount..
00:01:21.474 ==> default: Folder Successfully Mounted!
00:01:21.474
00:01:21.474 SUCCESS!
00:01:21.474
00:01:21.474 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:01:21.474 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:01:21.474 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:01:21.474
00:01:21.483 [Pipeline] }
00:01:21.496 [Pipeline] // stage
00:01:21.504 [Pipeline] dir
00:01:21.504 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt
00:01:21.505 [Pipeline] {
00:01:21.517 [Pipeline] catchError
00:01:21.518 [Pipeline] {
00:01:21.529 [Pipeline] sh
00:01:21.813 + vagrant ssh-config --host vagrant
00:01:21.813 + sed -ne '/^Host/,$p'
00:01:21.813 + tee ssh_conf
00:01:24.432 Host vagrant
00:01:24.432 HostName 192.168.121.29
00:01:24.432 User vagrant
00:01:24.432 Port 22
00:01:24.432 UserKnownHostsFile /dev/null
00:01:24.432 StrictHostKeyChecking no
00:01:24.432 PasswordAuthentication no
00:01:24.432 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:01:24.432 IdentitiesOnly yes
00:01:24.432 LogLevel FATAL
00:01:24.432 ForwardAgent yes
00:01:24.432 ForwardX11 yes
00:01:24.432
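vagrant ssh-config emits a ready-made OpenSSH client config; the sed -ne '/^Host/,$p' trims any leading noise and tee both prints and saves it, so every later step can reach the VM with plain ssh/scp and no vagrant runtime in the loop. A sketch of the round trip (the host alias vagrant comes from the block above):

  vagrant ssh-config --host vagrant | sed -ne '/^Host/,$p' > ssh_conf
  ssh -F ssh_conf vagrant 'uname -a'
  scp -F ssh_conf somefile vagrant@vagrant:./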
00:01:24.447 [Pipeline] withEnv
00:01:24.450 [Pipeline] {
00:01:24.465 [Pipeline] sh
00:01:24.747 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash
00:01:24.747 source /etc/os-release
00:01:24.747 [[ -e /image.version ]] && img=$(< /image.version)
00:01:24.747 # Minimal, systemd-like check.
00:01:24.747 if [[ -e /.dockerenv ]]; then
00:01:24.747 # Clear garbage from the node'\''s name:
00:01:24.747 # agt-er_autotest_547-896 -> autotest_547-896
00:01:24.747 # $HOSTNAME is the actual container id
00:01:24.748 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:01:24.748 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:01:24.748 # We can assume this is a mount from a host where container is running,
00:01:24.748 # so fetch its hostname to easily identify the target swarm worker.
00:01:24.748 container="$(< /etc/hostname) ($agent)"
00:01:24.748 else
00:01:24.748 # Fallback
00:01:24.748 container=$agent
00:01:24.748 fi
00:01:24.748 fi
00:01:24.748 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:01:24.748 '
00:01:25.021 [Pipeline] }
00:01:25.036 [Pipeline] // withEnv
00:01:25.045 [Pipeline] setCustomBuildProperty
00:01:25.059 [Pipeline] stage
00:01:25.061 [Pipeline] { (Tests)
00:01:25.077 [Pipeline] sh
00:01:25.363 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:01:25.639 [Pipeline] sh
00:01:25.924 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:01:26.201 [Pipeline] timeout
00:01:26.202 Timeout set to expire in 50 min
00:01:26.203 [Pipeline] {
00:01:26.216 [Pipeline] sh
00:01:26.501 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard'
00:01:27.075 HEAD is now at 09cc66129 test/unit: add mixed busy/idle mock poller function in reactor_ut
00:01:27.088 [Pipeline] sh
00:01:27.371 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo'
00:01:27.648 [Pipeline] sh
00:01:27.957 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:01:28.243 [Pipeline] sh
00:01:28.522 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo'
00:01:28.522 ++ readlink -f spdk_repo
00:01:28.522 + DIR_ROOT=/home/vagrant/spdk_repo
00:01:28.522 + [[ -n /home/vagrant/spdk_repo ]]
00:01:28.522 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:01:28.522 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:01:28.522 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:01:28.522 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:01:28.522 + [[ -d /home/vagrant/spdk_repo/output ]]
00:01:28.522 + [[ nvme-vg-autotest == pkgdep-* ]]
00:01:28.522 + cd /home/vagrant/spdk_repo
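The inline node-identification script sent over SSH earlier in this stage keys off two signals: /.dockerenv marks a Docker container, and a bind-mounted /etc/hostname showing up in /proc/self/mountinfo suggests the hostname file was injected by the host running the container. A standalone sketch of just that check:

  if [[ -e /.dockerenv ]]; then
      if grep -q "/etc/hostname" /proc/self/mountinfo; then
          container="$(< /etc/hostname)"   # host-provided name
      else
          container="$HOSTNAME"            # fallback: the container id itself
      fi
  fi
  echo "container=${container:-N/A}"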
00:01:28.522 + source /etc/os-release
00:01:28.522 ++ NAME='Fedora Linux'
00:01:28.522 ++ VERSION='39 (Cloud Edition)'
00:01:28.522 ++ ID=fedora
00:01:28.522 ++ VERSION_ID=39
00:01:28.522 ++ VERSION_CODENAME=
00:01:28.522 ++ PLATFORM_ID=platform:f39
00:01:28.522 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:01:28.522 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:28.522 ++ LOGO=fedora-logo-icon
00:01:28.522 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:01:28.522 ++ HOME_URL=https://fedoraproject.org/
00:01:28.522 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:01:28.522 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:28.522 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:28.522 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:28.522 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:01:28.522 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:28.522 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:01:28.522 ++ SUPPORT_END=2024-11-12
00:01:28.522 ++ VARIANT='Cloud Edition'
00:01:28.522 ++ VARIANT_ID=cloud
00:01:28.522 + uname -a
00:01:28.780 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:01:28.780 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:01:29.041 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:01:29.303 Hugepages
00:01:29.303 node hugesize free / total
00:01:29.303 node0 1048576kB 0 / 0
00:01:29.303 node0 2048kB 0 / 0
00:01:29.303
00:01:29.303 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:29.303 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:01:29.303 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:01:29.303 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme2 nvme2n1
00:01:29.303 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme3 nvme3n1 nvme3n2 nvme3n3
00:01:29.303 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme1 nvme1n1
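setup.sh status prints one row per controller with its PCI address (BDF), vendor/device IDs (1b36 0010 is QEMU's emulated NVMe controller) and block devices; note nvme3 exposes three namespaces, matching the multi0/1/2 images attached to one controller. The same inventory can be pulled straight from sysfs; a sketch, assuming the standard Linux NVMe sysfs layout:

  for ctrl in /sys/class/nvme/nvme*; do
      printf '%s %s: ' "$(basename "$ctrl")" "$(cat "$ctrl/address")"
      ls "$ctrl" | grep -E '^nvme[0-9]+n[0-9]+$' | tr '\n' ' '
      echo
  done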
00:01:29.303 + rm -f /tmp/spdk-ld-path
00:01:29.303 + source autorun-spdk.conf
00:01:29.303 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:29.303 ++ SPDK_TEST_NVME=1
00:01:29.303 ++ SPDK_TEST_FTL=1
00:01:29.303 ++ SPDK_TEST_ISAL=1
00:01:29.303 ++ SPDK_RUN_ASAN=1
00:01:29.303 ++ SPDK_RUN_UBSAN=1
00:01:29.303 ++ SPDK_TEST_XNVME=1
00:01:29.303 ++ SPDK_TEST_NVME_FDP=1
00:01:29.303 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:29.303 ++ RUN_NIGHTLY=1
00:01:29.303 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:29.303 + [[ -n '' ]]
00:01:29.303 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:01:29.303 + for M in /var/spdk/build-*-manifest.txt
00:01:29.303 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:01:29.303 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:01:29.303 + for M in /var/spdk/build-*-manifest.txt
00:01:29.303 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:29.303 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:01:29.303 + for M in /var/spdk/build-*-manifest.txt
00:01:29.303 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:29.303 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:01:29.303 ++ uname
00:01:29.303 + [[ Linux == \L\i\n\u\x ]]
00:01:29.303 + sudo dmesg -T
00:01:29.564 + sudo dmesg --clear
00:01:29.564 + dmesg_pid=5030
00:01:29.564 + [[ Fedora Linux == FreeBSD ]]
00:01:29.564 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:29.564 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:29.564 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:29.564 + [[ -x /usr/src/fio-static/fio ]]
00:01:29.564 + sudo dmesg -Tw
00:01:29.564 + export FIO_BIN=/usr/src/fio-static/fio
00:01:29.564 + FIO_BIN=/usr/src/fio-static/fio
00:01:29.564 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:29.564 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:29.564 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:29.564 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:29.564 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:29.564 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:29.564 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:29.564 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:29.564 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:29.564 Test configuration:
00:01:29.564 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:29.564 SPDK_TEST_NVME=1
00:01:29.564 SPDK_TEST_FTL=1
00:01:29.564 SPDK_TEST_ISAL=1
00:01:29.564 SPDK_RUN_ASAN=1
00:01:29.564 SPDK_RUN_UBSAN=1
00:01:29.564 SPDK_TEST_XNVME=1
00:01:29.564 SPDK_TEST_NVME_FDP=1
00:01:29.564 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:29.564 RUN_NIGHTLY=1
01:14:25 -- common/autotest_common.sh@1680 -- $ [[ n == y ]]
00:01:29.564 01:14:25 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:01:29.564 01:14:25 -- scripts/common.sh@15 -- $ shopt -s extglob
00:01:29.564 01:14:25 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:01:29.564 01:14:25 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:29.564 01:14:25 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:01:29.564 01:14:25 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:29.564 01:14:25 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:29.564 01:14:25 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:29.564 01:14:25 -- paths/export.sh@5 -- $ export PATH
00:01:29.564 01:14:25 -- paths/export.sh@6 -- $ echo
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
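The "01:14:25 -- file.sh@line -- $ cmd" prefixes above are bash xtrace output with a customized PS4, not Jenkins timestamps: each traced command expands the wall-clock time plus the source file and line it came from. A hypothetical approximation; SPDK's real PS4 setup lives in its common scripts and may differ:

  export PS4='+ $(date +%T) -- ${BASH_SOURCE[0]##*/}@${LINENO} -- $ '
  set -x
  echo hello   # traced roughly as: + 01:14:25 -- myscript.sh@3 -- $ echo hello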
00:01:29.564 01:14:25 -- common/autobuild_common.sh@478 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:01:29.564 01:14:25 -- common/autobuild_common.sh@479 -- $ date +%s
00:01:29.564 01:14:25 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1727486065.XXXXXX
00:01:29.564 01:14:25 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1727486065.ZjtIky
00:01:29.564 01:14:25 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]]
00:01:29.564 01:14:25 -- common/autobuild_common.sh@485 -- $ '[' -n '' ']'
00:01:29.564 01:14:25 -- common/autobuild_common.sh@488 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:01:29.564 01:14:25 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:01:29.564 01:14:25 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:01:29.564 01:14:25 -- common/autobuild_common.sh@495 -- $ get_config_params
00:01:29.564 01:14:25 -- common/autotest_common.sh@407 -- $ xtrace_disable
00:01:29.564 01:14:25 -- common/autotest_common.sh@10 -- $ set +x
00:01:29.564 01:14:25 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme'
00:01:29.564 01:14:25 -- common/autobuild_common.sh@497 -- $ start_monitor_resources
00:01:29.564 01:14:25 -- pm/common@17 -- $ local monitor
00:01:29.564 01:14:25 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:29.564 01:14:25 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:29.564 01:14:25 -- pm/common@25 -- $ sleep 1
00:01:29.564 01:14:25 -- pm/common@21 -- $ date +%s
00:01:29.564 01:14:25 -- pm/common@21 -- $ date +%s
00:01:29.564 01:14:25 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1727486065
00:01:29.564 01:14:25 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1727486065
00:01:29.565 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1727486065_collect-cpu-load.pm.log
00:01:29.565 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1727486065_collect-vmstat.pm.log
00:01:30.563 01:14:26 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT
00:01:30.563 01:14:26 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:30.563 01:14:26 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:30.563 01:14:26 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:01:30.563 01:14:26 -- spdk/autobuild.sh@16 -- $ date -u
00:01:30.563 Sat Sep 28 01:14:26 AM UTC 2024
00:01:30.563 01:14:26 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:30.563 v25.01-pre-17-g09cc66129
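The workspace setup above combines an epoch timestamp with mktemp's random suffix, so concurrent runs cannot collide and stale directories stay sortable by start time. A sketch of the same pattern with cleanup attached:

  ts=$(date +%s)
  SPDK_WORKSPACE=$(mktemp -dt "spdk_${ts}.XXXXXX")   # e.g. /tmp/spdk_1727486065.ZjtIky
  trap 'rm -rf "$SPDK_WORKSPACE"' EXIT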
00:01:30.563 01:14:26 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:01:30.563 01:14:26 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:01:30.563 01:14:26 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:01:30.563 01:14:26 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:01:30.563 01:14:26 -- common/autotest_common.sh@10 -- $ set +x
00:01:30.563 ************************************
00:01:30.563 START TEST asan
00:01:30.563 ************************************
00:01:30.563 using asan
00:01:30.563 01:14:26 asan -- common/autotest_common.sh@1125 -- $ echo 'using asan'
00:01:30.563 00:01:30.563 real 0m0.000s
00:01:30.563 user 0m0.000s
00:01:30.563 sys 0m0.000s
00:01:30.563 ************************************
00:01:30.563 END TEST asan
00:01:30.563 ************************************
00:01:30.563 01:14:26 asan -- common/autotest_common.sh@1126 -- $ xtrace_disable
00:01:30.563 01:14:26 asan -- common/autotest_common.sh@10 -- $ set +x
00:01:30.825 01:14:26 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:30.825 01:14:26 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:30.825 01:14:26 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:01:30.825 01:14:26 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:01:30.825 01:14:26 -- common/autotest_common.sh@10 -- $ set +x
00:01:30.825 ************************************
00:01:30.825 START TEST ubsan
00:01:30.825 ************************************
00:01:30.825 using ubsan
00:01:30.825 01:14:26 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan'
00:01:30.825 00:01:30.825 real 0m0.000s
00:01:30.825 user 0m0.000s
00:01:30.825 sys 0m0.000s
00:01:30.825 01:14:26 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable
00:01:30.825 01:14:26 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:30.825 ************************************
00:01:30.825 END TEST ubsan
00:01:30.825 ************************************
00:01:30.825 01:14:26 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:01:30.825 01:14:26 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:01:30.825 01:14:26 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:01:30.825 01:14:26 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:01:30.825 01:14:26 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:30.825 01:14:26 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:01:30.825 01:14:26 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:01:30.825 01:14:26 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:01:30.825 01:14:26 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared
00:01:30.825 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:01:30.825 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:01:31.398 Using 'verbs' RDMA provider
00:01:42.341 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:01:54.613 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:01:54.613 Creating mk/config.mk...done.
00:01:54.613 Creating mk/cc.flags.mk...done.
00:01:54.613 Type 'make' to build.
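The asan/ubsan blocks above come from SPDK's run_test helper, which brackets a command in START/END banners so log scrapers can attribute timing and pass/fail to a named test. A minimal re-implementation sketch; the real helper in autotest_common.sh also manages xtrace state and timing records:

  run_test() {
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      "$@"; local rc=$?
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
      return $rc
  }
  run_test ubsan echo 'using ubsan'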
00:01:54.613 01:14:49 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:01:54.613 01:14:49 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:01:54.613 01:14:49 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:01:54.613 01:14:49 -- common/autotest_common.sh@10 -- $ set +x
00:01:54.613 ************************************
00:01:54.613 START TEST make
00:01:54.613 ************************************
00:01:54.613 01:14:49 make -- common/autotest_common.sh@1125 -- $ make -j10
00:01:54.613 (cd /home/vagrant/spdk_repo/spdk/xnvme && \
00:01:54.613 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \
00:01:54.613 meson setup builddir \
00:01:54.613 -Dwith-libaio=enabled \
00:01:54.613 -Dwith-liburing=enabled \
00:01:54.613 -Dwith-libvfn=disabled \
00:01:54.613 -Dwith-spdk=false && \
00:01:54.613 meson compile -C builddir && \
00:01:54.613 cd -)
00:01:54.613 make[1]: Nothing to be done for 'all'.
00:01:55.988 The Meson build system
00:01:55.988 Version: 1.5.0
00:01:55.988 Source dir: /home/vagrant/spdk_repo/spdk/xnvme
00:01:55.988 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:01:55.988 Build type: native build
00:01:55.988 Project name: xnvme
00:01:55.988 Project version: 0.7.3
00:01:55.988 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:01:55.988 C linker for the host machine: cc ld.bfd 2.40-14
00:01:55.988 Host machine cpu family: x86_64
00:01:55.988 Host machine cpu: x86_64
00:01:55.988 Message: host_machine.system: linux
00:01:55.988 Compiler for C supports arguments -Wno-missing-braces: YES
00:01:55.988 Compiler for C supports arguments -Wno-cast-function-type: YES
00:01:55.988 Compiler for C supports arguments -Wno-strict-aliasing: YES
00:01:55.988 Run-time dependency threads found: YES
00:01:55.988 Has header "setupapi.h" : NO
00:01:55.988 Has header "linux/blkzoned.h" : YES
00:01:55.988 Has header "linux/blkzoned.h" : YES (cached)
00:01:55.988 Has header "libaio.h" : YES
00:01:55.988 Library aio found: YES
00:01:55.988 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:01:55.988 Run-time dependency liburing found: YES 2.2
00:01:55.988 Dependency libvfn skipped: feature with-libvfn disabled
00:01:55.988 Run-time dependency appleframeworks found: NO (tried framework)
00:01:55.988 Run-time dependency appleframeworks found: NO (tried framework)
00:01:55.988 Configuring xnvme_config.h using configuration
00:01:55.988 Configuring xnvme.spec using configuration
00:01:55.988 Run-time dependency bash-completion found: YES 2.11
00:01:55.988 Message: Bash-completions: /usr/share/bash-completion/completions
00:01:55.988 Program cp found: YES (/usr/bin/cp)
00:01:55.988 Has header "winsock2.h" : NO
00:01:55.988 Has header "dbghelp.h" : NO
00:01:55.988 Library rpcrt4 found: NO
00:01:55.988 Library rt found: YES
00:01:55.988 Checking for function "clock_gettime" with dependency -lrt: YES
00:01:55.988 Found CMake: /usr/bin/cmake (3.27.7)
00:01:55.988 Run-time dependency _spdk found: NO (tried pkgconfig and cmake)
00:01:55.988 Run-time dependency wpdk found: NO (tried pkgconfig and cmake)
00:01:55.988 Run-time dependency spdk-win found: NO (tried pkgconfig and cmake)
00:01:55.988 Build targets in project: 32
00:01:55.988
00:01:55.988 xnvme 0.7.3
00:01:55.988
00:01:55.988 User defined options
00:01:55.988 with-libaio : enabled
00:01:55.988 with-liburing: enabled
00:01:55.988 with-libvfn : disabled
00:01:55.988 with-spdk : false
00:01:55.988
00:01:55.988 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
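The "User defined options" summary mirrors the -D flags passed at setup: meson feature options are tri-state (enabled/disabled/auto), so with-libaio and with-liburing become hard requirements while with-libvfn is opted out, and the probe log shows the outcome (liburing 2.2 found, libvfn skipped). A sketch of inspecting and changing them on this build directory; the grep pattern assumes meson's usual indented output and may need adjusting:

  cd /home/vagrant/spdk_repo/spdk/xnvme
  meson configure builddir | grep 'with-'      # current values of the with-* options
  meson configure builddir -Dwith-libvfn=auto  # illustrative: best-effort instead of hard-off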
00:01:56.246 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir'
00:01:56.246 [1/203] Generating toolbox/xnvme-driver-script with a custom command
00:01:56.504 [2/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd.c.o
00:01:56.504 [3/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_async.c.o
00:01:56.504 [4/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_nil.c.o
00:01:56.504 [5/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_mem_posix.c.o
00:01:56.504 [6/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_dev.c.o
00:01:56.504 [7/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_sync_psync.c.o
00:01:56.504 [8/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_admin_shim.c.o
00:01:56.504 [9/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_emu.c.o
00:01:56.504 [10/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_nvme.c.o
00:01:56.504 [11/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_posix.c.o
00:01:56.504 [12/203] Compiling C object lib/libxnvme.so.p/xnvme_adm.c.o
00:01:56.504 [13/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos.c.o
00:01:56.504 [14/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux.c.o
00:01:56.504 [15/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_admin.c.o
00:01:56.504 [16/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_thrpool.c.o
00:01:56.504 [17/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_dev.c.o
00:01:56.504 [18/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_hugepage.c.o
00:01:56.504 [19/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_sync.c.o
00:01:56.504 [20/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_libaio.c.o
00:01:56.504 [21/203] Compiling C object lib/libxnvme.so.p/xnvme_be.c.o
00:01:56.504 [22/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_dev.c.o
00:01:56.504 [23/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_ucmd.c.o
00:01:56.504 [24/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_nvme.c.o
00:01:56.762 [25/203] Compiling C object lib/libxnvme.so.p/xnvme_be_nosys.c.o
00:01:56.762 [26/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_admin.c.o
00:01:56.762 [27/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk.c.o
00:01:56.762 [28/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk.c.o
00:01:56.762 [29/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_admin.c.o
00:01:56.762 [30/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_dev.c.o
00:01:56.762 [31/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_dev.c.o
00:01:56.762 [32/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_sync.c.o
00:01:56.762 [33/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_block.c.o
00:01:56.762 [34/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_async.c.o
00:01:56.762 [35/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_mem.c.o
00:01:56.762 [36/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio.c.o
00:01:56.762 [37/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_dev.c.o
00:01:56.762 [38/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_async.c.o
00:01:56.762 [39/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_mem.c.o
00:01:56.762 [40/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows.c.o
00:01:56.762 [41/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp.c.o
00:01:56.762 [42/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp_th.c.o
00:01:56.762 [43/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_liburing.c.o
00:01:56.762 [44/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_admin.c.o
00:01:56.762 [45/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_sync.c.o
00:01:56.762 [46/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_ioring.c.o
00:01:56.762 [47/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_fs.c.o
00:01:56.762 [48/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_sync.c.o
00:01:56.762 [49/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_nvme.c.o
00:01:56.762 [50/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_dev.c.o
00:01:56.762 [51/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_block.c.o
00:01:56.762 [52/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_mem.c.o
00:01:56.762 [53/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf_entries.c.o
00:01:56.762 [54/203] Compiling C object lib/libxnvme.so.p/xnvme_file.c.o
00:01:56.762 [55/203] Compiling C object lib/libxnvme.so.p/xnvme_ident.c.o
00:01:56.762 [56/203] Compiling C object lib/libxnvme.so.p/xnvme_req.c.o
00:01:56.762 [57/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf.c.o
00:01:56.762 [58/203] Compiling C object lib/libxnvme.so.p/xnvme_cmd.c.o
00:01:56.762 [59/203] Compiling C object lib/libxnvme.so.p/xnvme_lba.c.o
00:01:57.020 [60/203] Compiling C object lib/libxnvme.so.p/xnvme_geo.c.o
00:01:57.020 [61/203] Compiling C object lib/libxnvme.so.p/xnvme_opts.c.o
00:01:57.020 [62/203] Compiling C object lib/libxnvme.so.p/xnvme_nvm.c.o
00:01:57.020 [63/203] Compiling C object lib/libxnvme.so.p/xnvme_buf.c.o
00:01:57.020 [64/203] Compiling C object lib/libxnvme.so.p/xnvme_dev.c.o
00:01:57.020 [65/203] Compiling C object lib/libxnvme.so.p/xnvme_kvs.c.o
00:01:57.020 [66/203] Compiling C object lib/libxnvme.so.p/xnvme_queue.c.o
00:01:57.020 [67/203] Compiling C object lib/libxnvme.so.p/xnvme_ver.c.o
00:01:57.020 [68/203] Compiling C object lib/libxnvme.so.p/xnvme_topology.c.o
00:01:57.020 [69/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_nil.c.o
00:01:57.020 [70/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_admin_shim.c.o
00:01:57.020 [71/203] Compiling C object lib/libxnvme.so.p/xnvme_spec_pp.c.o
00:01:57.020 [72/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_mem_posix.c.o
00:01:57.020 [73/203] Compiling C object lib/libxnvme.a.p/xnvme_adm.c.o
00:01:57.020 [74/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_emu.c.o
00:01:57.020 [75/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_async.c.o
00:01:57.020 [76/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_posix.c.o
00:01:57.020 [77/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd.c.o
00:01:57.020 [78/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_dev.c.o
00:01:57.020 [79/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_sync_psync.c.o
00:01:57.020 [80/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_nvme.c.o
00:01:57.278 [81/203] Compiling C object lib/libxnvme.so.p/xnvme_znd.c.o
00:01:57.278 [82/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux.c.o
00:01:57.278 [83/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_thrpool.c.o
00:01:57.278 [84/203] Compiling C object lib/libxnvme.so.p/xnvme_cli.c.o
00:01:57.278 [85/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_libaio.c.o
00:01:57.278 [86/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos.c.o
00:01:57.278 [87/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_admin.c.o
00:01:57.278 [88/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_sync.c.o
00:01:57.278 [89/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_dev.c.o
00:01:57.278 [90/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_nvme.c.o
00:01:57.278 [91/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_hugepage.c.o
00:01:57.278 [92/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_liburing.c.o
00:01:57.278 [93/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_ucmd.c.o
00:01:57.278 [94/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_dev.c.o
00:01:57.278 [95/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk.c.o
00:01:57.278 [96/203] Compiling C object lib/libxnvme.a.p/xnvme_be.c.o
00:01:57.278 [97/203] Compiling C object lib/libxnvme.a.p/xnvme_be_nosys.c.o
00:01:57.278 [98/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk.c.o
00:01:57.278 [99/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_admin.c.o
00:01:57.278 [100/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_async.c.o
00:01:57.278 [101/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_dev.c.o
00:01:57.278 [102/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_dev.c.o
00:01:57.278 [103/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_block.c.o
00:01:57.278 [104/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_admin.c.o
00:01:57.278 [105/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_mem.c.o
00:01:57.278 [106/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio.c.o
00:01:57.278 [107/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_sync.c.o
00:01:57.278 [108/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_async.c.o
00:01:57.536 [109/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_sync.c.o
00:01:57.536 [110/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_admin.c.o
00:01:57.536 [111/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_dev.c.o
00:01:57.536 [112/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_sync.c.o
00:01:57.536 [113/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_mem.c.o
00:01:57.536 [114/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_block.c.o
00:01:57.536 [115/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp_th.c.o
00:01:57.536 [116/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows.c.o
00:01:57.536 [117/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp.c.o
00:01:57.536 [118/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_ioring.c.o
00:01:57.536 [119/203] Compiling C object lib/libxnvme.so.p/xnvme_spec.c.o
00:01:57.536 [120/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_dev.c.o
00:01:57.536 [121/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_mem.c.o
00:01:57.536 [122/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_nvme.c.o
00:01:57.536 [123/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_fs.c.o
00:01:57.536 [124/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf_entries.c.o
00:01:57.536 [125/203] Compiling C object lib/libxnvme.a.p/xnvme_ident.c.o
00:01:57.536 [126/203] Linking target lib/libxnvme.so
00:01:57.536 [127/203] Compiling C object lib/libxnvme.a.p/xnvme_file.c.o
00:01:57.536 [128/203] Compiling C object lib/libxnvme.a.p/xnvme_cmd.c.o
00:01:57.536 [129/203] Compiling C object lib/libxnvme.a.p/xnvme_geo.c.o
00:01:57.536 [130/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf.c.o
00:01:57.536 [131/203] Compiling C object lib/libxnvme.a.p/xnvme_req.c.o
00:01:57.536 [132/203] Compiling C object lib/libxnvme.a.p/xnvme_dev.c.o
00:01:57.536 [133/203] Compiling C object lib/libxnvme.a.p/xnvme_buf.c.o
00:01:57.536 [134/203] Compiling C object lib/libxnvme.a.p/xnvme_kvs.c.o
00:01:57.536 [135/203] Compiling C object lib/libxnvme.a.p/xnvme_ver.c.o
00:01:57.536 [136/203] Compiling C object lib/libxnvme.a.p/xnvme_lba.c.o
00:01:57.536 [137/203] Compiling C object lib/libxnvme.a.p/xnvme_topology.c.o
00:01:57.536 [138/203] Compiling C object lib/libxnvme.a.p/xnvme_nvm.c.o
00:01:57.536 [139/203] Compiling C object lib/libxnvme.a.p/xnvme_opts.c.o
00:01:57.536 [140/203] Compiling C object lib/libxnvme.a.p/xnvme_queue.c.o
00:01:57.536 [141/203] Compiling C object tests/xnvme_tests_buf.p/buf.c.o
00:01:57.795 [142/203] Compiling C object tests/xnvme_tests_cli.p/cli.c.o
00:01:57.795 [143/203] Compiling C object tests/xnvme_tests_async_intf.p/async_intf.c.o
00:01:57.795 [144/203] Compiling C object lib/libxnvme.a.p/xnvme_spec_pp.c.o
00:01:57.795 [145/203] Compiling C object tests/xnvme_tests_xnvme_cli.p/xnvme_cli.c.o
00:01:57.795 [146/203] Compiling C object tests/xnvme_tests_xnvme_file.p/xnvme_file.c.o
00:01:57.795 [147/203] Compiling C object lib/libxnvme.a.p/xnvme_znd.c.o
00:01:57.795 [148/203] Compiling C object tests/xnvme_tests_enum.p/enum.c.o
00:01:57.795 [149/203] Compiling C object tests/xnvme_tests_znd_state.p/znd_state.c.o
00:01:57.795 [150/203] Compiling C object tests/xnvme_tests_scc.p/scc.c.o
00:01:57.795 [151/203] Compiling C object tests/xnvme_tests_znd_append.p/znd_append.c.o
00:01:57.795 [152/203] Compiling C object tests/xnvme_tests_znd_explicit_open.p/znd_explicit_open.c.o
00:01:57.795 [153/203] Compiling C object tests/xnvme_tests_lblk.p/lblk.c.o
00:01:57.795 [154/203] Compiling C object tests/xnvme_tests_kvs.p/kvs.c.o
00:01:57.795 [155/203] Compiling C object tests/xnvme_tests_map.p/map.c.o
00:01:57.795 [156/203] Compiling C object lib/libxnvme.a.p/xnvme_cli.c.o
00:01:57.795 [157/203] Compiling C object tests/xnvme_tests_znd_zrwa.p/znd_zrwa.c.o
00:01:57.795 [158/203] Compiling C object tools/kvs.p/kvs.c.o
00:01:57.795 [159/203] Compiling C object examples/xnvme_dev.p/xnvme_dev.c.o
00:01:57.795 [160/203] Compiling C object examples/xnvme_enum.p/xnvme_enum.c.o
00:01:57.795 [161/203] Compiling C object examples/xnvme_hello.p/xnvme_hello.c.o
00:01:58.053 [162/203] Compiling C object tools/xdd.p/xdd.c.o
00:01:58.053 [163/203] Compiling C object tools/lblk.p/lblk.c.o
00:01:58.053 [164/203] Compiling C object tests/xnvme_tests_ioworker.p/ioworker.c.o
00:01:58.053 [165/203] Compiling C object examples/xnvme_single_sync.p/xnvme_single_sync.c.o
00:01:58.053 [166/203] Compiling C object examples/xnvme_single_async.p/xnvme_single_async.c.o
00:01:58.053 [167/203] Compiling C object examples/xnvme_io_async.p/xnvme_io_async.c.o
00:01:58.053 [168/203] Compiling C object tools/zoned.p/zoned.c.o
00:01:58.053 [169/203] Compiling C object examples/zoned_io_sync.p/zoned_io_sync.c.o
00:01:58.053 [170/203] Compiling C object examples/zoned_io_async.p/zoned_io_async.c.o
00:01:58.053 [171/203] Compiling C object tools/xnvme.p/xnvme.c.o
00:01:58.053 [172/203] Compiling C object lib/libxnvme.a.p/xnvme_spec.c.o
00:01:58.053 [173/203] Compiling C object tools/xnvme_file.p/xnvme_file.c.o
00:01:58.053 [174/203] Linking static target lib/libxnvme.a
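Every library source shows up twice in the [N/203] list, once under lib/libxnvme.so.p/ and once under lib/libxnvme.a.p/: meson is producing a shared and a static library from the same sources, and both link targets appear above ([126/203] and [174/203]). A quick way to confirm both artifacts carry the same objects, using the build-dir layout from the log:

  cd /home/vagrant/spdk_repo/spdk/xnvme/builddir
  file lib/libxnvme.so      # ELF shared object
  ar t lib/libxnvme.a | head  # member list of the static archive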
00:01:58.053 [175/203] Linking target tests/xnvme_tests_lblk
00:01:58.053 [176/203] Linking target tests/xnvme_tests_buf
00:01:58.053 [177/203] Linking target tests/xnvme_tests_async_intf
00:01:58.053 [178/203] Linking target tests/xnvme_tests_cli
00:01:58.310 [179/203] Linking target tests/xnvme_tests_ioworker
00:01:58.310 [180/203] Linking target tests/xnvme_tests_enum
00:01:58.310 [181/203] Linking target tests/xnvme_tests_xnvme_cli
00:01:58.310 [182/203] Linking target tests/xnvme_tests_scc
00:01:58.310 [183/203] Linking target tests/xnvme_tests_znd_append
00:01:58.310 [184/203] Linking target tests/xnvme_tests_kvs
00:01:58.310 [185/203] Linking target tests/xnvme_tests_xnvme_file
00:01:58.310 [186/203] Linking target tests/xnvme_tests_znd_explicit_open
00:01:58.310 [187/203] Linking target tools/lblk
00:01:58.310 [188/203] Linking target tools/xdd
00:01:58.310 [189/203] Linking target tests/xnvme_tests_map
00:01:58.310 [190/203] Linking target tests/xnvme_tests_znd_state
00:01:58.310 [191/203] Linking target examples/xnvme_enum
00:01:58.310 [192/203] Linking target tools/xnvme_file
00:01:58.310 [193/203] Linking target tools/zoned
00:01:58.310 [194/203] Linking target tests/xnvme_tests_znd_zrwa
00:01:58.310 [195/203] Linking target examples/xnvme_dev
00:01:58.310 [196/203] Linking target tools/xnvme
00:01:58.310 [197/203] Linking target examples/xnvme_hello
00:01:58.310 [198/203] Linking target tools/kvs
00:01:58.310 [199/203] Linking target examples/xnvme_io_async
00:01:58.310 [200/203] Linking target examples/xnvme_single_sync
00:01:58.310 [201/203] Linking target examples/zoned_io_async
00:01:58.310 [202/203] Linking target examples/xnvme_single_async
00:01:58.310 [203/203] Linking target examples/zoned_io_sync
00:01:58.310 INFO: autodetecting backend as ninja
00:01:58.310 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:01:58.310 /home/vagrant/spdk_repo/spdk/xnvmebuild
00:02:03.588 The Meson build system
00:02:03.588 Version: 1.5.0
00:02:03.588 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:02:03.588 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:02:03.588 Build type: native build
00:02:03.588 Program cat found: YES (/usr/bin/cat)
00:02:03.588 Project name: DPDK
00:02:03.588 Project version: 24.03.0
00:02:03.588 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:03.588 C linker for the host machine: cc ld.bfd 2.40-14
00:02:03.588 Host machine cpu family: x86_64
00:02:03.588 Host machine cpu: x86_64
00:02:03.588 Message: ## Building in Developer Mode ##
00:02:03.588 Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:03.588 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:02:03.588 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:02:03.588 Program python3 found: YES (/usr/bin/python3)
00:02:03.588 Program cat found: YES (/usr/bin/cat)
00:02:03.588 Compiler for C supports arguments -march=native: YES
00:02:03.588 Checking for size of "void *" : 8
00:02:03.588 Checking for size of "void *" : 8 (cached)
00:02:03.588 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:02:03.588 Library m found: YES
00:02:03.588 Library numa found: YES
00:02:03.588 Has header "numaif.h" : YES
00:02:03.588 Library fdt found: NO
00:02:03.588 Library execinfo found: NO
00:02:03.588 Has header "execinfo.h" : YES
00:02:03.588 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:03.588 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:03.588 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:03.588 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:03.588 Run-time dependency openssl found: YES 3.1.1
00:02:03.588 Run-time dependency libpcap found: YES 1.10.4
00:02:03.588 Has header "pcap.h" with dependency libpcap: YES
00:02:03.588 Compiler for C supports arguments -Wcast-qual: YES
00:02:03.588 Compiler for C supports arguments -Wdeprecated: YES
00:02:03.588 Compiler for C supports arguments -Wformat: YES
00:02:03.588 Compiler for C supports arguments -Wformat-nonliteral: NO
00:02:03.588 Compiler for C supports arguments -Wformat-security: NO
00:02:03.589 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:03.589 Compiler for C supports arguments -Wmissing-prototypes: YES
00:02:03.589 Compiler for C supports arguments -Wnested-externs: YES
00:02:03.589 Compiler for C supports arguments -Wold-style-definition: YES
00:02:03.589 Compiler for C supports arguments -Wpointer-arith: YES
00:02:03.589 Compiler for C supports arguments -Wsign-compare: YES
00:02:03.589 Compiler for C supports arguments -Wstrict-prototypes: YES
00:02:03.589 Compiler for C supports arguments -Wundef: YES
00:02:03.589 Compiler for C supports arguments -Wwrite-strings: YES
00:02:03.589 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:02:03.589 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:02:03.589 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:03.589 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:02:03.589 Program objdump found: YES (/usr/bin/objdump)
00:02:03.589 Compiler for C supports arguments -mavx512f: YES
00:02:03.589 Checking if "AVX512 checking" compiles: YES
00:02:03.589 Fetching value of define "__SSE4_2__" : 1
00:02:03.589 Fetching value of define "__AES__" : 1
00:02:03.589 Fetching value of define "__AVX__" : 1
00:02:03.589 Fetching value of define "__AVX2__" : 1
00:02:03.589 Fetching value of define "__AVX512BW__" : 1
00:02:03.589 Fetching value of define "__AVX512CD__" : 1
00:02:03.589 Fetching value of define "__AVX512DQ__" : 1
00:02:03.589 Fetching value of define "__AVX512F__" : 1
00:02:03.589 Fetching value of define "__AVX512VL__" : 1
00:02:03.589 Fetching value of define "__PCLMUL__" : 1
00:02:03.589 Fetching value of define "__RDRND__" : 1
00:02:03.589 Fetching value of define "__RDSEED__" : 1
00:02:03.589 Fetching value of define "__VPCLMULQDQ__" : 1
00:02:03.589 Fetching value of define "__znver1__" : (undefined)
00:02:03.589 Fetching value of define "__znver2__" : (undefined)
00:02:03.589 Fetching value of define "__znver3__" : (undefined)
00:02:03.589 Fetching value of define "__znver4__" : (undefined)
00:02:03.589 Library asan found: YES
00:02:03.589 Compiler for C supports arguments -Wno-format-truncation: YES
00:02:03.589 Message: lib/log: Defining dependency "log"
00:02:03.589 Message: lib/kvargs: Defining dependency "kvargs"
00:02:03.589 Message: lib/telemetry: Defining dependency "telemetry"
00:02:03.589 Library rt found: YES
00:02:03.589 Checking for function "getentropy" : NO
00:02:03.589 Message: lib/eal: Defining dependency "eal"
00:02:03.589 Message: lib/ring: Defining dependency "ring"
00:02:03.589 Message: lib/rcu: Defining dependency "rcu"
00:02:03.589 Message: lib/mempool: Defining dependency "mempool"
00:02:03.589 Message: lib/mbuf: Defining dependency "mbuf"
00:02:03.589 Fetching value of define "__PCLMUL__" : 1 (cached)
00:02:03.589 Fetching value of define "__AVX512F__" : 1 (cached)
of define "__AVX512F__" : 1 (cached) 00:02:03.589 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:03.589 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:03.589 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:03.589 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:02:03.589 Compiler for C supports arguments -mpclmul: YES 00:02:03.589 Compiler for C supports arguments -maes: YES 00:02:03.589 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:03.589 Compiler for C supports arguments -mavx512bw: YES 00:02:03.589 Compiler for C supports arguments -mavx512dq: YES 00:02:03.589 Compiler for C supports arguments -mavx512vl: YES 00:02:03.589 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:03.589 Compiler for C supports arguments -mavx2: YES 00:02:03.589 Compiler for C supports arguments -mavx: YES 00:02:03.589 Message: lib/net: Defining dependency "net" 00:02:03.589 Message: lib/meter: Defining dependency "meter" 00:02:03.589 Message: lib/ethdev: Defining dependency "ethdev" 00:02:03.589 Message: lib/pci: Defining dependency "pci" 00:02:03.589 Message: lib/cmdline: Defining dependency "cmdline" 00:02:03.589 Message: lib/hash: Defining dependency "hash" 00:02:03.589 Message: lib/timer: Defining dependency "timer" 00:02:03.589 Message: lib/compressdev: Defining dependency "compressdev" 00:02:03.589 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:03.589 Message: lib/dmadev: Defining dependency "dmadev" 00:02:03.589 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:03.589 Message: lib/power: Defining dependency "power" 00:02:03.589 Message: lib/reorder: Defining dependency "reorder" 00:02:03.589 Message: lib/security: Defining dependency "security" 00:02:03.589 Has header "linux/userfaultfd.h" : YES 00:02:03.589 Has header "linux/vduse.h" : YES 00:02:03.589 Message: lib/vhost: Defining dependency "vhost" 00:02:03.589 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:03.589 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:03.589 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:03.589 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:03.589 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:03.589 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:03.589 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:03.589 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:03.589 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:03.589 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:03.589 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:03.589 Configuring doxy-api-html.conf using configuration 00:02:03.589 Configuring doxy-api-man.conf using configuration 00:02:03.589 Program mandb found: YES (/usr/bin/mandb) 00:02:03.589 Program sphinx-build found: NO 00:02:03.589 Configuring rte_build_config.h using configuration 00:02:03.589 Message: 00:02:03.589 ================= 00:02:03.589 Applications Enabled 00:02:03.589 ================= 00:02:03.589 00:02:03.589 apps: 00:02:03.589 00:02:03.589 00:02:03.589 Message: 00:02:03.589 ================= 00:02:03.589 Libraries Enabled 00:02:03.589 ================= 00:02:03.589 00:02:03.589 libs: 00:02:03.589 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:03.589 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 
00:02:03.589 cryptodev, dmadev, power, reorder, security, vhost,
00:02:03.589
00:02:03.589 Message:
00:02:03.589 ===============
00:02:03.589 Drivers Enabled
00:02:03.589 ===============
00:02:03.589
00:02:03.589 common:
00:02:03.589
00:02:03.589 bus:
00:02:03.589 pci, vdev,
00:02:03.589 mempool:
00:02:03.589 ring,
00:02:03.589 dma:
00:02:03.589
00:02:03.589 net:
00:02:03.589
00:02:03.589 crypto:
00:02:03.589
00:02:03.589 compress:
00:02:03.589
00:02:03.589 vdpa:
00:02:03.589
00:02:03.589
00:02:03.589 Message:
00:02:03.589 =================
00:02:03.589 Content Skipped
00:02:03.589 =================
00:02:03.589
00:02:03.589 apps:
00:02:03.589 dumpcap: explicitly disabled via build config
00:02:03.589 graph: explicitly disabled via build config
00:02:03.589 pdump: explicitly disabled via build config
00:02:03.589 proc-info: explicitly disabled via build config
00:02:03.589 test-acl: explicitly disabled via build config
00:02:03.589 test-bbdev: explicitly disabled via build config
00:02:03.589 test-cmdline: explicitly disabled via build config
00:02:03.589 test-compress-perf: explicitly disabled via build config
00:02:03.589 test-crypto-perf: explicitly disabled via build config
00:02:03.589 test-dma-perf: explicitly disabled via build config
00:02:03.589 test-eventdev: explicitly disabled via build config
00:02:03.589 test-fib: explicitly disabled via build config
00:02:03.589 test-flow-perf: explicitly disabled via build config
00:02:03.589 test-gpudev: explicitly disabled via build config
00:02:03.589 test-mldev: explicitly disabled via build config
00:02:03.589 test-pipeline: explicitly disabled via build config
00:02:03.589 test-pmd: explicitly disabled via build config
00:02:03.589 test-regex: explicitly disabled via build config
00:02:03.589 test-sad: explicitly disabled via build config
00:02:03.589 test-security-perf: explicitly disabled via build config
00:02:03.589
00:02:03.589 libs:
00:02:03.589 argparse: explicitly disabled via build config
00:02:03.590 metrics: explicitly disabled via build config
00:02:03.590 acl: explicitly disabled via build config
00:02:03.590 bbdev: explicitly disabled via build config
00:02:03.590 bitratestats: explicitly disabled via build config
00:02:03.590 bpf: explicitly disabled via build config
00:02:03.590 cfgfile: explicitly disabled via build config
00:02:03.590 distributor: explicitly disabled via build config
00:02:03.590 efd: explicitly disabled via build config
00:02:03.590 eventdev: explicitly disabled via build config
00:02:03.590 dispatcher: explicitly disabled via build config
00:02:03.590 gpudev: explicitly disabled via build config
00:02:03.590 gro: explicitly disabled via build config
00:02:03.590 gso: explicitly disabled via build config
00:02:03.590 ip_frag: explicitly disabled via build config
00:02:03.590 jobstats: explicitly disabled via build config
00:02:03.590 latencystats: explicitly disabled via build config
00:02:03.590 lpm: explicitly disabled via build config
00:02:03.590 member: explicitly disabled via build config
00:02:03.590 pcapng: explicitly disabled via build config
00:02:03.590 rawdev: explicitly disabled via build config
00:02:03.590 regexdev: explicitly disabled via build config
00:02:03.590 mldev: explicitly disabled via build config
00:02:03.590 rib: explicitly disabled via build config
00:02:03.590 sched: explicitly disabled via build config
00:02:03.590 stack: explicitly disabled via build config
00:02:03.590 ipsec: explicitly disabled via build config
00:02:03.590 pdcp: explicitly disabled via build config
00:02:03.590 fib: explicitly disabled via build config
00:02:03.590 port: explicitly disabled via build config
00:02:03.590 pdump: explicitly disabled via build config
00:02:03.590 table: explicitly disabled via build config
00:02:03.590 pipeline: explicitly disabled via build config
00:02:03.590 graph: explicitly disabled via build config
00:02:03.590 node: explicitly disabled via build config
00:02:03.590
00:02:03.590 drivers:
00:02:03.590 common/cpt: not in enabled drivers build config
00:02:03.590 common/dpaax: not in enabled drivers build config
00:02:03.590 common/iavf: not in enabled drivers build config
00:02:03.590 common/idpf: not in enabled drivers build config
00:02:03.590 common/ionic: not in enabled drivers build config
00:02:03.590 common/mvep: not in enabled drivers build config
00:02:03.590 common/octeontx: not in enabled drivers build config
00:02:03.590 bus/auxiliary: not in enabled drivers build config
00:02:03.590 bus/cdx: not in enabled drivers build config
00:02:03.590 bus/dpaa: not in enabled drivers build config
00:02:03.590 bus/fslmc: not in enabled drivers build config
00:02:03.590 bus/ifpga: not in enabled drivers build config
00:02:03.590 bus/platform: not in enabled drivers build config
00:02:03.590 bus/uacce: not in enabled drivers build config
00:02:03.590 bus/vmbus: not in enabled drivers build config
00:02:03.590 common/cnxk: not in enabled drivers build config
00:02:03.590 common/mlx5: not in enabled drivers build config
00:02:03.590 common/nfp: not in enabled drivers build config
00:02:03.590 common/nitrox: not in enabled drivers build config
00:02:03.590 common/qat: not in enabled drivers build config
00:02:03.590 common/sfc_efx: not in enabled drivers build config
00:02:03.590 mempool/bucket: not in enabled drivers build config
00:02:03.590 mempool/cnxk: not in enabled drivers build config
00:02:03.590 mempool/dpaa: not in enabled drivers build config
00:02:03.590 mempool/dpaa2: not in enabled drivers build config
00:02:03.590 mempool/octeontx: not in enabled drivers build config
00:02:03.590 mempool/stack: not in enabled drivers build config
00:02:03.590 dma/cnxk: not in enabled drivers build config
00:02:03.590 dma/dpaa: not in enabled drivers build config
00:02:03.590 dma/dpaa2: not in enabled drivers build config
00:02:03.590 dma/hisilicon: not in enabled drivers build config
00:02:03.590 dma/idxd: not in enabled drivers build config
00:02:03.590 dma/ioat: not in enabled drivers build config
00:02:03.590 dma/skeleton: not in enabled drivers build config
00:02:03.590 net/af_packet: not in enabled drivers build config
00:02:03.590 net/af_xdp: not in enabled drivers build config
00:02:03.590 net/ark: not in enabled drivers build config
00:02:03.590 net/atlantic: not in enabled drivers build config
00:02:03.590 net/avp: not in enabled drivers build config
00:02:03.590 net/axgbe: not in enabled drivers build config
00:02:03.590 net/bnx2x: not in enabled drivers build config
00:02:03.590 net/bnxt: not in enabled drivers build config
00:02:03.590 net/bonding: not in enabled drivers build config
00:02:03.590 net/cnxk: not in enabled drivers build config
00:02:03.590 net/cpfl: not in enabled drivers build config
00:02:03.590 net/cxgbe: not in enabled drivers build config
00:02:03.590 net/dpaa: not in enabled drivers build config
00:02:03.590 net/dpaa2: not in enabled drivers build config
00:02:03.590 net/e1000: not in enabled drivers build config
00:02:03.590 net/ena: not in enabled drivers build config
00:02:03.590 net/enetc: not in enabled drivers build config
00:02:03.590 net/enetfec: not in enabled drivers build config
00:02:03.590 net/enic: not in enabled drivers build config
00:02:03.590 net/failsafe: not in enabled drivers build config
00:02:03.590 net/fm10k: not in enabled drivers build config
00:02:03.590 net/gve: not in enabled drivers build config
00:02:03.590 net/hinic: not in enabled drivers build config
00:02:03.590 net/hns3: not in enabled drivers build config
00:02:03.590 net/i40e: not in enabled drivers build config
00:02:03.590 net/iavf: not in enabled drivers build config
00:02:03.590 net/ice: not in enabled drivers build config
00:02:03.590 net/idpf: not in enabled drivers build config
00:02:03.590 net/igc: not in enabled drivers build config
00:02:03.590 net/ionic: not in enabled drivers build config
00:02:03.590 net/ipn3ke: not in enabled drivers build config
00:02:03.590 net/ixgbe: not in enabled drivers build config
00:02:03.590 net/mana: not in enabled drivers build config
00:02:03.590 net/memif: not in enabled drivers build config
00:02:03.590 net/mlx4: not in enabled drivers build config
00:02:03.590 net/mlx5: not in enabled drivers build config
00:02:03.590 net/mvneta: not in enabled drivers build config
00:02:03.590 net/mvpp2: not in enabled drivers build config
00:02:03.590 net/netvsc: not in enabled drivers build config
00:02:03.590 net/nfb: not in enabled drivers build config
00:02:03.590 net/nfp: not in enabled drivers build config
00:02:03.590 net/ngbe: not in enabled drivers build config
00:02:03.590 net/null: not in enabled drivers build config
00:02:03.590 net/octeontx: not in enabled drivers build config
00:02:03.590 net/octeon_ep: not in enabled drivers build config
00:02:03.590 net/pcap: not in enabled drivers build config
00:02:03.590 net/pfe: not in enabled drivers build config
00:02:03.590 net/qede: not in enabled drivers build config
00:02:03.590 net/ring: not in enabled drivers build config
00:02:03.590 net/sfc: not in enabled drivers build config
00:02:03.590 net/softnic: not in enabled drivers build config
00:02:03.590 net/tap: not in enabled drivers build config
00:02:03.590 net/thunderx: not in enabled drivers build config
00:02:03.590 net/txgbe: not in enabled drivers build config
00:02:03.590 net/vdev_netvsc: not in enabled drivers build config
00:02:03.590 net/vhost: not in enabled drivers build config
00:02:03.590 net/virtio: not in enabled drivers build config
00:02:03.590 net/vmxnet3: not in enabled drivers build config
00:02:03.590 raw/*: missing internal dependency, "rawdev"
00:02:03.590 crypto/armv8: not in enabled drivers build config
00:02:03.590 crypto/bcmfs: not in enabled drivers build config
00:02:03.590 crypto/caam_jr: not in enabled drivers build config
00:02:03.590 crypto/ccp: not in enabled drivers build config
00:02:03.590 crypto/cnxk: not in enabled drivers build config
00:02:03.590 crypto/dpaa_sec: not in enabled drivers build config
00:02:03.590 crypto/dpaa2_sec: not in enabled drivers build config
00:02:03.590 crypto/ipsec_mb: not in enabled drivers build config
00:02:03.590 crypto/mlx5: not in enabled drivers build config
00:02:03.590 crypto/mvsam: not in enabled drivers build config
00:02:03.591 crypto/nitrox: not in enabled drivers build config
00:02:03.591 crypto/null: not in enabled drivers build config
00:02:03.591 crypto/octeontx: not in enabled drivers build config
00:02:03.591 crypto/openssl: not in enabled drivers build config
00:02:03.591 crypto/scheduler: not in enabled drivers build config
00:02:03.591 crypto/uadk: not in enabled drivers build config
00:02:03.591 crypto/virtio: not in enabled drivers build config
00:02:03.591 compress/isal: not in enabled drivers build config
00:02:03.591 compress/mlx5: not in enabled drivers build config
00:02:03.591 compress/nitrox: not in enabled drivers build config
00:02:03.591 compress/octeontx: not in enabled drivers build config
00:02:03.591 compress/zlib: not in enabled drivers build config
00:02:03.591 regex/*: missing internal dependency, "regexdev"
00:02:03.591 ml/*: missing internal dependency, "mldev"
00:02:03.591 vdpa/ifc: not in enabled drivers build config
00:02:03.591 vdpa/mlx5: not in enabled drivers build config
00:02:03.591 vdpa/nfp: not in enabled drivers build config
00:02:03.591 vdpa/sfc: not in enabled drivers build config
00:02:03.591 event/*: missing internal dependency, "eventdev"
00:02:03.591 baseband/*: missing internal dependency, "bbdev"
00:02:03.591 gpu/*: missing internal dependency, "gpudev"
00:02:03.591
00:02:03.591
00:02:03.850 Build targets in project: 84
00:02:03.850
00:02:03.850 DPDK 24.03.0
00:02:03.850
00:02:03.850 User defined options
00:02:03.850 buildtype : debug
00:02:03.850 default_library : shared
00:02:03.850 libdir : lib
00:02:03.850 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build
00:02:03.850 b_sanitize : address
00:02:03.850 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:02:03.850 c_link_args :
00:02:03.850 cpu_instruction_set: native
00:02:03.850 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test
00:02:03.850 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table
00:02:03.850 enable_docs : false
00:02:03.850 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring
00:02:03.850 enable_kmods : false
00:02:03.850 max_lcores : 128
00:02:03.850 tests : false
00:02:03.850
00:02:03.850 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:04.108 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp'
00:02:04.366 [1/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:02:04.366 [2/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:02:04.366 [3/267] Linking static target lib/librte_kvargs.a
00:02:04.367 [4/267] Compiling C object lib/librte_log.a.p/log_log.c.o
00:02:04.367 [5/267] Linking static target lib/librte_log.a
00:02:04.367 [6/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:02:04.624 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:02:04.624 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:02:04.624 [9/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:02:04.624 [10/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:02:04.624 [11/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:02:04.624 [12/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:02:04.625 [13/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:02:04.625 [14/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:02:04.625 [15/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:02:04.625 [16/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:02:04.625 [17/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:02:04.625 [18/267] Linking static target lib/librte_telemetry.a
00:02:04.883 [19/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:02:04.883 [20/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:02:04.883 [21/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:02:04.883 [22/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:02:05.142 [23/267] Linking target lib/librte_log.so.24.1
00:02:05.142 [24/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:02:05.142 [25/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:02:05.142 [26/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:02:05.142 [27/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:02:05.142 [28/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols
00:02:05.142 [29/267] Linking target lib/librte_kvargs.so.24.1
00:02:05.142 [30/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:02:05.400 [31/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:02:05.400 [32/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:02:05.400 [33/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:02:05.400 [34/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols
00:02:05.400 [35/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:02:05.400 [36/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:02:05.400 [37/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:02:05.401 [38/267] Linking target lib/librte_telemetry.so.24.1
00:02:05.401 [39/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:02:05.401 [40/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:02:05.401 [41/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:02:05.659 [42/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:02:05.659 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:02:05.659 [44/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols
00:02:05.659 [45/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:02:05.659 [46/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:02:05.659 [47/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:02:05.917 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:02:05.917 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:02:05.917 [50/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:02:05.917 [51/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:02:05.917 [52/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:02:05.917 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:02:05.917 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:02:06.176 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:02:06.176 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:02:06.176 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:02:06.176 [58/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:02:06.176 [59/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:02:06.176 [60/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:02:06.176 [61/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:02:06.435 [62/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:02:06.435 [63/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:02:06.435 [64/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:02:06.435 [65/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:02:06.435 [66/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:02:06.435 [67/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:02:06.693 [68/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:02:06.693 [69/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:02:06.693 [70/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:02:06.693 [71/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:02:06.693 [72/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:02:06.693 [73/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:02:06.693 [74/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:02:06.693 [75/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:02:06.950 [76/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:02:06.950 [77/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:02:06.950 [78/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:02:06.950 [79/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:02:06.950 [80/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:02:06.950 [81/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:02:06.950 [82/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:02:07.208 [83/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:02:07.208 [84/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:02:07.208 [85/267] Linking static target lib/librte_ring.a
00:02:07.208 [86/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:02:07.208 [87/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:02:07.208 [88/267] Linking static target lib/librte_eal.a
00:02:07.208 [89/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:02:07.208 [90/267] Linking static target lib/librte_rcu.a
00:02:07.467 [91/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:02:07.467 [92/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:02:07.467 [93/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:02:07.467 [94/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:02:07.467 [95/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:02:07.467 [96/267] Linking static target lib/librte_mempool.a
00:02:07.467 [97/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:02:07.725 [98/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:02:07.725 [99/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:02:07.725 [100/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:02:07.725 [101/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:02:07.725 [102/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:02:07.983 [103/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:02:07.983 [104/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o
00:02:07.983 [105/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:02:07.983 [106/267] Linking static target lib/librte_meter.a
00:02:07.983 [107/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:02:07.983 [108/267] Linking static target lib/librte_net.a
00:02:07.983 [109/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:02:07.983 [110/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:02:08.243 [111/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:02:08.243 [112/267] Linking static target lib/librte_mbuf.a
00:02:08.243 [113/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:02:08.243 [114/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:02:08.243 [115/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:02:08.243 [116/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:02:08.501 [117/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:02:08.501 [118/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:02:08.501 [119/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:02:08.759 [120/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:02:08.759 [121/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:02:08.759 [122/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:02:09.018 [123/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:02:09.018 [124/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output)
00:02:09.018 [125/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:02:09.018 [126/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:02:09.018 [127/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o
00:02:09.277 [128/267] Linking static target lib/librte_pci.a
00:02:09.277 [129/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:02:09.277 [130/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:02:09.277 [131/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:02:09.277 [132/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:02:09.277 [133/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:02:09.277 [134/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:02:09.277 [135/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:02:09.277 [136/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:02:09.277 [137/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:02:09.277 [138/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:02:09.277 [139/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:02:09.277 [140/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o
00:02:09.277 [141/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:02:09.578 [142/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:02:09.578 [143/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
00:02:09.578 [144/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:02:09.578 [145/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:02:09.578 [146/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:02:09.578 [147/267] Linking static target lib/librte_cmdline.a
00:02:09.578 [148/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o
00:02:09.578 [149/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:02:09.837 [150/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:02:09.837 [151/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:02:09.837 [152/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:02:09.837 [153/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:02:09.837 [154/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:02:09.837 [155/267] Linking static target lib/librte_timer.a
00:02:10.096 [156/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o
00:02:10.096 [157/267] Linking static target lib/librte_ethdev.a
00:02:10.096 [158/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:02:10.096 [159/267] Linking static target lib/librte_compressdev.a
00:02:10.096 [160/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:02:10.096 [161/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o
00:02:10.096 [162/267] Linking static target lib/librte_hash.a
00:02:10.096 [163/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:02:10.355 [164/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:02:10.355 [165/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:02:10.355 [166/267] Linking static target lib/librte_dmadev.a
00:02:10.355 [167/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output)
00:02:10.355 [168/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:02:10.613 [169/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:02:10.613 [170/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
00:02:10.614 [171/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:02:10.614 [172/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output)
00:02:10.614 [173/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:02:10.871 [174/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:10.871 [175/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o
00:02:10.871 [176/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o
00:02:10.871 [177/267] Linking static target lib/librte_cryptodev.a
00:02:10.871 [178/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:10.871 [179/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
00:02:10.871 [180/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:02:10.871 [181/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o
00:02:11.129 [182/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:02:11.129 [183/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output)
00:02:11.129 [184/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o
00:02:11.129 [185/267] Linking static target lib/librte_power.a
00:02:11.400 [186/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o
00:02:11.400 [187/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o
00:02:11.400 [188/267] Linking static target lib/librte_reorder.a
00:02:11.400 [189/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o
00:02:11.400 [190/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o
00:02:11.400 [191/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o
00:02:11.400 [192/267] Linking static target lib/librte_security.a
00:02:11.684 [193/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output)
00:02:11.684 [194/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o
00:02:11.942 [195/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output)
00:02:11.942 [196/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o
00:02:11.942 [197/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o
00:02:11.942 [198/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output)
00:02:12.201 [199/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:02:12.201 [200/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o
00:02:12.201 [201/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o
00:02:12.201 [202/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
00:02:12.459 [203/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o
00:02:12.459 [204/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:02:12.459 [205/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o
00:02:12.717 [206/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:12.717 [207/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o
00:02:12.717 [208/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o
00:02:12.717 [209/267] Linking static target drivers/libtmp_rte_bus_vdev.a
00:02:12.717 [210/267] Linking static target drivers/libtmp_rte_bus_pci.a
00:02:12.717 [211/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o
00:02:12.717 [212/267] Linking static target drivers/libtmp_rte_mempool_ring.a
00:02:12.717 [213/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command
00:02:12.717 [214/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:02:12.717 [215/267] Generating drivers/rte_bus_pci.pmd.c with a custom command
00:02:12.717 [216/267] Linking static target drivers/librte_bus_vdev.a
00:02:12.975 [217/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:02:12.975 [218/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:02:12.975 [219/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:02:12.975 [220/267] Linking static target drivers/librte_bus_pci.a
00:02:12.975 [221/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command
00:02:12.975 [222/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:02:12.975 [223/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:02:12.975 [224/267] Linking static target drivers/librte_mempool_ring.a
00:02:13.232 [225/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:13.490 [226/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output)
00:02:13.490 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o
00:02:14.425 [228/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output)
00:02:14.425 [229/267] Linking target lib/librte_eal.so.24.1
00:02:14.425 [230/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols
00:02:14.425 [231/267] Linking target lib/librte_pci.so.24.1
00:02:14.425 [232/267] Linking target lib/librte_ring.so.24.1
00:02:14.425 [233/267] Linking target lib/librte_dmadev.so.24.1
00:02:14.425 [234/267] Linking target lib/librte_meter.so.24.1
00:02:14.425 [235/267] Linking target lib/librte_timer.so.24.1
00:02:14.425 [236/267] Linking target drivers/librte_bus_vdev.so.24.1
00:02:14.683 [237/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols
00:02:14.683 [238/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols
00:02:14.683 [239/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols
00:02:14.683 [240/267] Linking target lib/librte_rcu.so.24.1
00:02:14.683 [241/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols
00:02:14.683 [242/267] Linking target lib/librte_mempool.so.24.1
00:02:14.683 [243/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols
00:02:14.683 [244/267] Linking target drivers/librte_bus_pci.so.24.1
00:02:14.683 [245/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols
00:02:14.683 [246/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols
00:02:14.683 [247/267] Linking target lib/librte_mbuf.so.24.1
00:02:14.683 [248/267] Linking target drivers/librte_mempool_ring.so.24.1
00:02:14.941 [249/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols
00:02:14.941 [250/267] Linking target lib/librte_net.so.24.1
00:02:14.941 [251/267] Linking target lib/librte_cryptodev.so.24.1
00:02:14.941 [252/267] Linking target lib/librte_compressdev.so.24.1
00:02:14.941 [253/267] Linking target lib/librte_reorder.so.24.1
00:02:14.941 [254/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols
00:02:14.941 [255/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols
00:02:14.941 [256/267] Linking target lib/librte_hash.so.24.1
00:02:14.941 [257/267] Linking target lib/librte_security.so.24.1
00:02:14.941 [258/267] Linking target lib/librte_cmdline.so.24.1
00:02:15.199 [259/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols
00:02:15.457 [260/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:15.457 [261/267] Linking target lib/librte_ethdev.so.24.1
00:02:15.457 [262/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols
00:02:15.720 [263/267] Linking target lib/librte_power.so.24.1
00:02:16.009 [264/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o
00:02:16.009 [265/267] Linking static target lib/librte_vhost.a
00:02:17.381 [266/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output)
00:02:17.381 [267/267] Linking target lib/librte_vhost.so.24.1
00:02:17.381 INFO: autodetecting backend as ninja
00:02:17.381 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10
00:02:32.257 CC lib/ut_mock/mock.o
00:02:32.257 CC lib/ut/ut.o
00:02:32.257 CC lib/log/log_flags.o
00:02:32.257 CC lib/log/log.o
00:02:32.257 CC lib/log/log_deprecated.o
00:02:32.257 LIB libspdk_ut.a
00:02:32.257 LIB libspdk_ut_mock.a
00:02:32.257 LIB libspdk_log.a
00:02:32.257 SO libspdk_ut.so.2.0
00:02:32.257 SO libspdk_ut_mock.so.6.0
00:02:32.257 SO libspdk_log.so.7.0
00:02:32.257 SYMLINK libspdk_ut.so
00:02:32.257 SYMLINK libspdk_ut_mock.so
00:02:32.257 SYMLINK libspdk_log.so
00:02:32.257 CC lib/dma/dma.o
00:02:32.257 CC lib/util/base64.o
00:02:32.257 CC lib/util/bit_array.o
00:02:32.257 CC lib/util/cpuset.o
00:02:32.257 CC lib/ioat/ioat.o
00:02:32.257 CC lib/util/crc16.o
00:02:32.257 CXX lib/trace_parser/trace.o
00:02:32.257 CC lib/util/crc32.o
00:02:32.257 CC lib/util/crc32c.o
00:02:32.257 CC lib/vfio_user/host/vfio_user_pci.o
00:02:32.257 CC lib/util/crc32_ieee.o
00:02:32.257 CC lib/util/crc64.o
00:02:32.257 CC lib/util/dif.o
00:02:32.257 CC lib/util/fd.o
00:02:32.257 CC lib/util/fd_group.o
00:02:32.257 LIB libspdk_dma.a
00:02:32.257 SO libspdk_dma.so.5.0
00:02:32.257 CC lib/vfio_user/host/vfio_user.o
00:02:32.257 CC lib/util/file.o
00:02:32.257 CC lib/util/hexlify.o
00:02:32.257 SYMLINK libspdk_dma.so
00:02:32.257 CC lib/util/iov.o
00:02:32.257 CC lib/util/math.o
00:02:32.257 LIB libspdk_ioat.a
00:02:32.257 SO libspdk_ioat.so.7.0
00:02:32.257 CC lib/util/net.o
00:02:32.257 CC lib/util/pipe.o
00:02:32.257 SYMLINK libspdk_ioat.so
00:02:32.257 CC lib/util/strerror_tls.o
00:02:32.257 CC lib/util/string.o
00:02:32.257 CC lib/util/uuid.o
00:02:32.257 CC lib/util/xor.o
00:02:32.257 LIB libspdk_vfio_user.a
00:02:32.257 CC lib/util/zipf.o
00:02:32.257 SO libspdk_vfio_user.so.5.0
00:02:32.257 CC lib/util/md5.o
00:02:32.257 SYMLINK libspdk_vfio_user.so
00:02:32.257 LIB libspdk_util.a
00:02:32.257 SO libspdk_util.so.10.0
00:02:32.257 LIB libspdk_trace_parser.a
00:02:32.257 SYMLINK libspdk_util.so
00:02:32.257 SO libspdk_trace_parser.so.6.0
00:02:32.257 SYMLINK libspdk_trace_parser.so
00:02:32.257 CC lib/rdma_provider/common.o
00:02:32.257 CC lib/rdma_provider/rdma_provider_verbs.o
00:02:32.257 CC lib/vmd/vmd.o
00:02:32.257 CC lib/vmd/led.o
00:02:32.257 CC lib/conf/conf.o
00:02:32.257 CC lib/rdma_utils/rdma_utils.o
00:02:32.257 CC lib/env_dpdk/env.o
00:02:32.257 CC lib/json/json_parse.o
00:02:32.257 CC lib/json/json_util.o
00:02:32.257 CC lib/idxd/idxd.o
00:02:32.257 CC lib/idxd/idxd_user.o
00:02:32.257 CC lib/idxd/idxd_kernel.o
00:02:32.257 LIB libspdk_rdma_provider.a
00:02:32.257 SO libspdk_rdma_provider.so.6.0
00:02:32.257 LIB libspdk_conf.a
00:02:32.257 CC lib/json/json_write.o
00:02:32.257 SO libspdk_conf.so.6.0
00:02:32.257 CC lib/env_dpdk/memory.o
00:02:32.257 SYMLINK libspdk_rdma_provider.so
00:02:32.257 CC lib/env_dpdk/pci.o
00:02:32.257 LIB libspdk_rdma_utils.a
00:02:32.257 SO libspdk_rdma_utils.so.1.0
00:02:32.257 SYMLINK libspdk_conf.so
00:02:32.257 CC lib/env_dpdk/init.o
00:02:32.257 CC lib/env_dpdk/threads.o
00:02:32.257 SYMLINK libspdk_rdma_utils.so
00:02:32.257 CC lib/env_dpdk/pci_ioat.o
00:02:32.258 CC lib/env_dpdk/pci_virtio.o
00:02:32.258 CC lib/env_dpdk/pci_vmd.o
00:02:32.258 CC lib/env_dpdk/pci_idxd.o
00:02:32.258 CC lib/env_dpdk/pci_event.o
00:02:32.258 LIB libspdk_idxd.a
00:02:32.258 LIB libspdk_json.a
00:02:32.258 SO libspdk_idxd.so.12.1
00:02:32.258 SO libspdk_json.so.6.0
00:02:32.258 CC lib/env_dpdk/sigbus_handler.o
00:02:32.258 CC lib/env_dpdk/pci_dpdk.o
00:02:32.258 SYMLINK libspdk_idxd.so
00:02:32.258 CC lib/env_dpdk/pci_dpdk_2207.o
00:02:32.258 SYMLINK libspdk_json.so
00:02:32.258 CC lib/env_dpdk/pci_dpdk_2211.o
00:02:32.258 LIB libspdk_vmd.a
00:02:32.258 SO libspdk_vmd.so.6.0
00:02:32.258 SYMLINK libspdk_vmd.so
00:02:32.258 CC lib/jsonrpc/jsonrpc_server_tcp.o
00:02:32.258 CC lib/jsonrpc/jsonrpc_server.o
00:02:32.258 CC lib/jsonrpc/jsonrpc_client_tcp.o
00:02:32.258 CC lib/jsonrpc/jsonrpc_client.o
00:02:32.518 LIB libspdk_jsonrpc.a
00:02:32.518 SO libspdk_jsonrpc.so.6.0
00:02:32.518 SYMLINK libspdk_jsonrpc.so
00:02:32.780 CC lib/rpc/rpc.o
00:02:32.780 LIB libspdk_env_dpdk.a
00:02:33.041 LIB libspdk_rpc.a
00:02:33.041 SO libspdk_env_dpdk.so.15.0
00:02:33.041 SO libspdk_rpc.so.6.0
00:02:33.041 SYMLINK libspdk_rpc.so
00:02:33.041 SYMLINK libspdk_env_dpdk.so
00:02:33.301 CC lib/keyring/keyring.o
00:02:33.301 CC lib/keyring/keyring_rpc.o
00:02:33.301 CC lib/trace/trace_flags.o
00:02:33.301 CC lib/trace/trace.o
00:02:33.301 CC lib/trace/trace_rpc.o
00:02:33.301 CC lib/notify/notify.o
00:02:33.301 CC lib/notify/notify_rpc.o
00:02:33.301 LIB libspdk_notify.a
00:02:33.564 SO libspdk_notify.so.6.0
00:02:33.564 LIB libspdk_keyring.a
00:02:33.564 SYMLINK libspdk_notify.so
00:02:33.564 LIB libspdk_trace.a
00:02:33.564 SO libspdk_keyring.so.2.0
00:02:33.564 SO libspdk_trace.so.11.0
00:02:33.564 SYMLINK libspdk_keyring.so
00:02:33.564 SYMLINK libspdk_trace.so
00:02:33.825 CC lib/sock/sock.o
00:02:33.825 CC lib/sock/sock_rpc.o
00:02:33.825 CC lib/thread/iobuf.o
00:02:33.825 CC lib/thread/thread.o
00:02:34.396 LIB libspdk_sock.a
00:02:34.396 SO libspdk_sock.so.10.0
00:02:34.396 SYMLINK libspdk_sock.so
00:02:34.657 CC lib/nvme/nvme_ctrlr_cmd.o
00:02:34.657 CC lib/nvme/nvme_ctrlr.o
00:02:34.657 CC lib/nvme/nvme_fabric.o
00:02:34.657 CC lib/nvme/nvme_ns.o
00:02:34.657 CC lib/nvme/nvme_ns_cmd.o
00:02:34.657 CC lib/nvme/nvme_pcie_common.o
00:02:34.657 CC lib/nvme/nvme.o
00:02:34.657 CC lib/nvme/nvme_qpair.o
00:02:34.657 CC lib/nvme/nvme_pcie.o
00:02:35.232 CC lib/nvme/nvme_quirks.o
00:02:35.232 CC lib/nvme/nvme_transport.o
00:02:35.494 CC lib/nvme/nvme_discovery.o
00:02:35.494 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o
00:02:35.494 LIB libspdk_thread.a
00:02:35.494 CC lib/nvme/nvme_ns_ocssd_cmd.o
00:02:35.494 CC lib/nvme/nvme_tcp.o
00:02:35.494 SO libspdk_thread.so.10.1
00:02:35.494 CC lib/nvme/nvme_opal.o
00:02:35.494 SYMLINK libspdk_thread.so
00:02:35.494 CC lib/nvme/nvme_io_msg.o
00:02:35.755 CC lib/nvme/nvme_poll_group.o
00:02:35.755 CC lib/nvme/nvme_zns.o
00:02:36.016 CC lib/nvme/nvme_stubs.o
00:02:36.016 CC lib/nvme/nvme_auth.o
00:02:36.016 CC lib/nvme/nvme_cuse.o
00:02:36.016 CC lib/nvme/nvme_rdma.o
00:02:36.277 CC lib/accel/accel.o
00:02:36.277 CC lib/accel/accel_rpc.o
00:02:36.277 CC lib/blob/blobstore.o
00:02:36.535 CC lib/blob/request.o
00:02:36.535 CC lib/init/json_config.o
00:02:36.535 CC lib/init/subsystem.o
00:02:36.796 CC lib/blob/zeroes.o
00:02:36.796 CC lib/accel/accel_sw.o
00:02:36.796 CC lib/init/subsystem_rpc.o
00:02:36.796 CC lib/init/rpc.o
00:02:36.796 CC lib/blob/blob_bs_dev.o
00:02:37.057 LIB libspdk_init.a
00:02:37.057 SO libspdk_init.so.6.0
00:02:37.057 CC lib/virtio/virtio.o
00:02:37.057 CC lib/virtio/virtio_vhost_user.o
00:02:37.057 CC lib/fsdev/fsdev.o
00:02:37.057 SYMLINK libspdk_init.so
00:02:37.057 CC lib/fsdev/fsdev_io.o
00:02:37.057 CC lib/virtio/virtio_vfio_user.o
00:02:37.057 CC lib/virtio/virtio_pci.o
00:02:37.317 CC lib/event/app.o
00:02:37.317 CC lib/fsdev/fsdev_rpc.o
00:02:37.317 CC lib/event/reactor.o
00:02:37.317 CC lib/event/log_rpc.o
00:02:37.317 CC lib/event/app_rpc.o
00:02:37.317 CC lib/event/scheduler_static.o
00:02:37.574 LIB libspdk_virtio.a
00:02:37.574 LIB libspdk_nvme.a
00:02:37.574 SO libspdk_virtio.so.7.0
00:02:37.574 LIB libspdk_accel.a
00:02:37.574 SO libspdk_accel.so.16.0
00:02:37.574 SO libspdk_nvme.so.14.0
00:02:37.574 SYMLINK libspdk_virtio.so
00:02:37.574 SYMLINK libspdk_accel.so
00:02:37.574 LIB libspdk_fsdev.a
00:02:37.832 SO libspdk_fsdev.so.1.0
00:02:37.832 LIB libspdk_event.a
00:02:37.832 SYMLINK libspdk_fsdev.so
00:02:37.832 SYMLINK libspdk_nvme.so
00:02:37.832 SO libspdk_event.so.14.0
00:02:37.832 CC lib/bdev/bdev.o
00:02:37.832 CC lib/bdev/bdev_rpc.o
00:02:37.832 CC lib/bdev/part.o
00:02:37.832 CC lib/bdev/bdev_zone.o
00:02:37.832 CC lib/bdev/scsi_nvme.o
00:02:37.832 SYMLINK libspdk_event.so
00:02:37.832 CC lib/fuse_dispatcher/fuse_dispatcher.o
00:02:38.400 LIB libspdk_fuse_dispatcher.a
00:02:38.400 SO libspdk_fuse_dispatcher.so.1.0
00:02:38.660 SYMLINK libspdk_fuse_dispatcher.so
00:02:39.638 LIB libspdk_blob.a
00:02:39.638 SO libspdk_blob.so.11.0
00:02:39.898 SYMLINK libspdk_blob.so
00:02:39.898 CC lib/lvol/lvol.o
00:02:39.898 CC lib/blobfs/blobfs.o
00:02:39.898 CC lib/blobfs/tree.o
00:02:40.467 LIB libspdk_bdev.a
00:02:40.467 SO libspdk_bdev.so.16.0
00:02:40.467 SYMLINK libspdk_bdev.so
00:02:40.725 CC lib/ftl/ftl_core.o
00:02:40.725 CC lib/ftl/ftl_init.o
00:02:40.725 CC lib/ftl/ftl_layout.o
00:02:40.725 CC lib/ftl/ftl_debug.o
00:02:40.725 CC lib/scsi/dev.o
00:02:40.725 CC lib/nvmf/ctrlr.o
00:02:40.725 CC lib/nbd/nbd.o
00:02:40.725 CC lib/ublk/ublk.o
00:02:40.726 CC lib/ftl/ftl_io.o
00:02:40.726 CC lib/scsi/lun.o
00:02:40.726 CC lib/scsi/port.o
00:02:40.984 LIB libspdk_blobfs.a
00:02:40.984 SO libspdk_blobfs.so.10.0
00:02:40.984 LIB libspdk_lvol.a
00:02:40.984 SO libspdk_lvol.so.10.0
00:02:40.984 SYMLINK libspdk_blobfs.so
00:02:40.984 CC lib/scsi/scsi.o
00:02:40.984 CC lib/ublk/ublk_rpc.o
00:02:40.984 CC lib/nvmf/ctrlr_discovery.o
00:02:40.984 CC lib/ftl/ftl_sb.o
00:02:40.984 SYMLINK libspdk_lvol.so
00:02:40.984 CC lib/nvmf/ctrlr_bdev.o
00:02:40.984 CC lib/nvmf/subsystem.o
00:02:40.984 CC lib/scsi/scsi_bdev.o
00:02:40.984 CC lib/ftl/ftl_l2p.o
00:02:40.984 LIB libspdk_nbd.a
00:02:41.242 SO libspdk_nbd.so.7.0
00:02:41.242 CC lib/nvmf/nvmf.o
00:02:41.242 CC lib/nvmf/nvmf_rpc.o
00:02:41.242 SYMLINK libspdk_nbd.so
00:02:41.242 CC lib/nvmf/transport.o
00:02:41.242 CC lib/ftl/ftl_l2p_flat.o
00:02:41.242 LIB libspdk_ublk.a
00:02:41.242 SO libspdk_ublk.so.3.0
00:02:41.242 CC lib/ftl/ftl_nv_cache.o
00:02:41.242 SYMLINK libspdk_ublk.so
00:02:41.501 CC lib/ftl/ftl_band.o
00:02:41.501 CC lib/scsi/scsi_pr.o
00:02:41.501 CC lib/nvmf/tcp.o
00:02:41.762 CC lib/scsi/scsi_rpc.o
00:02:41.762 CC lib/nvmf/stubs.o
00:02:41.762 CC lib/ftl/ftl_band_ops.o
00:02:41.762 CC lib/scsi/task.o
00:02:41.762 CC lib/nvmf/mdns_server.o
00:02:42.021 LIB libspdk_scsi.a
00:02:42.021 SO libspdk_scsi.so.9.0
00:02:42.021 CC lib/ftl/ftl_writer.o
00:02:42.021 CC lib/nvmf/rdma.o
00:02:42.021 CC lib/nvmf/auth.o
00:02:42.021 SYMLINK libspdk_scsi.so
00:02:42.021 CC lib/ftl/ftl_rq.o
00:02:42.021 CC lib/ftl/ftl_reloc.o
00:02:42.021 CC lib/ftl/ftl_l2p_cache.o
00:02:42.021 CC lib/ftl/ftl_p2l.o
00:02:42.279 CC lib/ftl/ftl_p2l_log.o
00:02:42.279 CC lib/ftl/mngt/ftl_mngt.o
00:02:42.279 CC lib/ftl/mngt/ftl_mngt_bdev.o
00:02:42.279 CC lib/ftl/mngt/ftl_mngt_shutdown.o
00:02:42.279 CC lib/ftl/mngt/ftl_mngt_startup.o
00:02:42.537 CC lib/ftl/mngt/ftl_mngt_md.o
00:02:42.537 CC lib/ftl/mngt/ftl_mngt_misc.o
00:02:42.537 CC lib/ftl/mngt/ftl_mngt_ioch.o
00:02:42.537 CC lib/ftl/mngt/ftl_mngt_l2p.o
00:02:42.537 CC lib/ftl/mngt/ftl_mngt_band.o
00:02:42.537 CC lib/ftl/mngt/ftl_mngt_self_test.o
00:02:42.537 CC lib/ftl/mngt/ftl_mngt_p2l.o
00:02:42.537 CC lib/ftl/mngt/ftl_mngt_recovery.o
00:02:42.537 CC lib/ftl/mngt/ftl_mngt_upgrade.o
00:02:42.537 CC lib/ftl/utils/ftl_conf.o
00:02:42.794 CC lib/ftl/utils/ftl_md.o
00:02:42.794 CC lib/ftl/utils/ftl_mempool.o
00:02:42.794 CC lib/ftl/utils/ftl_bitmap.o
00:02:42.794 CC lib/ftl/utils/ftl_property.o
00:02:42.794 CC lib/ftl/utils/ftl_layout_tracker_bdev.o
00:02:42.794 CC lib/ftl/upgrade/ftl_layout_upgrade.o
00:02:42.794 CC lib/iscsi/conn.o
00:02:42.794 CC lib/iscsi/init_grp.o
00:02:42.794 CC lib/iscsi/iscsi.o
00:02:43.053 CC lib/iscsi/param.o
00:02:43.053 CC lib/ftl/upgrade/ftl_sb_upgrade.o
00:02:43.053 CC lib/iscsi/portal_grp.o
00:02:43.053 CC lib/ftl/upgrade/ftl_p2l_upgrade.o
00:02:43.053 CC lib/iscsi/tgt_node.o
00:02:43.053 CC lib/ftl/upgrade/ftl_band_upgrade.o
00:02:43.053 CC lib/ftl/upgrade/ftl_chunk_upgrade.o
00:02:43.311 CC lib/ftl/upgrade/ftl_trim_upgrade.o
00:02:43.311 CC lib/ftl/upgrade/ftl_sb_v3.o
00:02:43.311 CC lib/ftl/upgrade/ftl_sb_v5.o
00:02:43.311 CC lib/iscsi/iscsi_subsystem.o
00:02:43.311 CC lib/ftl/nvc/ftl_nvc_dev.o
00:02:43.311 CC lib/vhost/vhost.o
00:02:43.311 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o
00:02:43.311 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o
00:02:43.311 CC lib/iscsi/iscsi_rpc.o
00:02:43.311 CC lib/ftl/nvc/ftl_nvc_bdev_common.o
00:02:43.569 CC lib/iscsi/task.o
00:02:43.569 CC lib/vhost/vhost_rpc.o
00:02:43.569 CC lib/vhost/vhost_scsi.o
00:02:43.569 CC lib/vhost/vhost_blk.o
00:02:43.569 CC lib/vhost/rte_vhost_user.o
00:02:43.569 CC lib/ftl/base/ftl_base_dev.o
00:02:43.569 CC lib/ftl/base/ftl_base_bdev.o
00:02:43.569 CC lib/ftl/ftl_trace.o
00:02:43.827 LIB libspdk_ftl.a
00:02:44.086 SO libspdk_ftl.so.9.0
00:02:44.086 LIB libspdk_nvmf.a
00:02:44.344 SO libspdk_nvmf.so.19.0
00:02:44.344 LIB libspdk_iscsi.a
00:02:44.344 SYMLINK libspdk_ftl.so
00:02:44.344 SO libspdk_iscsi.so.8.0
00:02:44.344 SYMLINK libspdk_nvmf.so
00:02:44.344 LIB libspdk_vhost.a
00:02:44.344 SO libspdk_vhost.so.8.0
00:02:44.603 SYMLINK libspdk_iscsi.so
00:02:44.603 SYMLINK libspdk_vhost.so
00:02:44.862 CC module/env_dpdk/env_dpdk_rpc.o
00:02:44.862 CC module/accel/error/accel_error.o
00:02:44.862 CC module/accel/dsa/accel_dsa.o
00:02:44.862 CC module/scheduler/dynamic/scheduler_dynamic.o
00:02:44.862 CC module/accel/iaa/accel_iaa.o
00:02:44.862 CC module/accel/ioat/accel_ioat.o
00:02:44.862 CC module/sock/posix/posix.o
00:02:44.862 CC module/blob/bdev/blob_bdev.o
00:02:44.862 CC module/fsdev/aio/fsdev_aio.o
00:02:44.862 CC module/keyring/file/keyring.o
00:02:44.862 LIB libspdk_env_dpdk_rpc.a
00:02:44.862 SO libspdk_env_dpdk_rpc.so.6.0
00:02:45.121 SYMLINK libspdk_env_dpdk_rpc.so
00:02:45.121 CC module/keyring/file/keyring_rpc.o
00:02:45.121 LIB libspdk_scheduler_dynamic.a
00:02:45.121 CC module/accel/error/accel_error_rpc.o
00:02:45.121 SO libspdk_scheduler_dynamic.so.4.0
00:02:45.121 CC module/accel/iaa/accel_iaa_rpc.o
00:02:45.121 CC module/accel/ioat/accel_ioat_rpc.o
00:02:45.121 SYMLINK libspdk_scheduler_dynamic.so
00:02:45.121 CC module/accel/dsa/accel_dsa_rpc.o
00:02:45.121 LIB libspdk_keyring_file.a
00:02:45.121 LIB libspdk_accel_error.a
00:02:45.121 LIB libspdk_accel_iaa.a
00:02:45.121 SO libspdk_keyring_file.so.2.0
00:02:45.121 LIB libspdk_blob_bdev.a
00:02:45.121 SO libspdk_accel_error.so.2.0
00:02:45.121 LIB libspdk_accel_dsa.a
00:02:45.121 SO libspdk_accel_iaa.so.3.0
00:02:45.121 LIB libspdk_accel_ioat.a
00:02:45.121 SO libspdk_blob_bdev.so.11.0
00:02:45.121 SO libspdk_accel_ioat.so.6.0
00:02:45.121 CC module/scheduler/dpdk_governor/dpdk_governor.o
00:02:45.121 SO libspdk_accel_dsa.so.5.0
00:02:45.121 SYMLINK libspdk_keyring_file.so
00:02:45.121 SYMLINK libspdk_accel_error.so
00:02:45.121 SYMLINK libspdk_accel_iaa.so
00:02:45.121 SYMLINK libspdk_blob_bdev.so
00:02:45.121 SYMLINK libspdk_accel_ioat.so
00:02:45.121 CC module/fsdev/aio/fsdev_aio_rpc.o
00:02:45.380 SYMLINK libspdk_accel_dsa.so
00:02:45.380 CC module/fsdev/aio/linux_aio_mgr.o
00:02:45.380 CC module/scheduler/gscheduler/gscheduler.o
00:02:45.380 LIB libspdk_scheduler_dpdk_governor.a
00:02:45.380 SO libspdk_scheduler_dpdk_governor.so.4.0
00:02:45.380 CC module/keyring/linux/keyring.o
00:02:45.380 CC module/keyring/linux/keyring_rpc.o
00:02:45.380 SYMLINK libspdk_scheduler_dpdk_governor.so
00:02:45.380 LIB libspdk_scheduler_gscheduler.a
00:02:45.380 SO libspdk_scheduler_gscheduler.so.4.0
00:02:45.380 LIB libspdk_fsdev_aio.a
00:02:45.380 LIB libspdk_sock_posix.a
00:02:45.380 CC module/blobfs/bdev/blobfs_bdev.o
00:02:45.380 SO libspdk_fsdev_aio.so.1.0
00:02:45.380 CC module/bdev/delay/vbdev_delay.o
00:02:45.380 CC module/bdev/error/vbdev_error.o
00:02:45.380 SO libspdk_sock_posix.so.6.0
00:02:45.380 SYMLINK libspdk_scheduler_gscheduler.so
00:02:45.380 LIB libspdk_keyring_linux.a
00:02:45.380 CC module/bdev/delay/vbdev_delay_rpc.o
00:02:45.638 SO libspdk_keyring_linux.so.1.0
00:02:45.638 SYMLINK libspdk_fsdev_aio.so
00:02:45.638 CC module/bdev/error/vbdev_error_rpc.o
00:02:45.638 CC module/bdev/gpt/gpt.o
00:02:45.638 CC module/bdev/lvol/vbdev_lvol.o
00:02:45.638 SYMLINK libspdk_sock_posix.so
00:02:45.638 CC module/bdev/gpt/vbdev_gpt.o
00:02:45.638 SYMLINK libspdk_keyring_linux.so
00:02:45.638 CC module/blobfs/bdev/blobfs_bdev_rpc.o
00:02:45.638 CC module/bdev/lvol/vbdev_lvol_rpc.o
00:02:45.638 CC module/bdev/malloc/bdev_malloc.o
00:02:45.638 CC module/bdev/malloc/bdev_malloc_rpc.o
00:02:45.638 LIB libspdk_bdev_error.a
00:02:45.638 LIB libspdk_blobfs_bdev.a
00:02:45.638 SO libspdk_bdev_error.so.6.0
00:02:45.638 SO libspdk_blobfs_bdev.so.6.0
00:02:45.638 LIB libspdk_bdev_delay.a
00:02:45.638 SYMLINK libspdk_bdev_error.so
00:02:45.897 SO libspdk_bdev_delay.so.6.0
00:02:45.897 SYMLINK libspdk_blobfs_bdev.so
00:02:45.897 CC module/bdev/null/bdev_null.o
00:02:45.897 SYMLINK libspdk_bdev_delay.so
00:02:45.897 LIB libspdk_bdev_gpt.a
00:02:45.897 CC module/bdev/null/bdev_null_rpc.o
00:02:45.897 CC module/bdev/nvme/bdev_nvme.o
00:02:45.897 SO libspdk_bdev_gpt.so.6.0
00:02:45.897 CC module/bdev/passthru/vbdev_passthru.o
00:02:45.897 CC module/bdev/raid/bdev_raid.o
00:02:45.897 CC module/bdev/split/vbdev_split.o
00:02:45.897 SYMLINK libspdk_bdev_gpt.so
00:02:45.897 CC module/bdev/passthru/vbdev_passthru_rpc.o
00:02:45.897 LIB libspdk_bdev_malloc.a
00:02:45.897 LIB libspdk_bdev_lvol.a
00:02:45.897 SO libspdk_bdev_lvol.so.6.0
00:02:45.897 SO libspdk_bdev_malloc.so.6.0
00:02:45.897 CC module/bdev/raid/bdev_raid_rpc.o
00:02:45.897 CC module/bdev/zone_block/vbdev_zone_block.o
00:02:46.173 SYMLINK libspdk_bdev_lvol.so
00:02:46.173 CC module/bdev/raid/bdev_raid_sb.o
00:02:46.173 SYMLINK libspdk_bdev_malloc.so
00:02:46.173 CC module/bdev/raid/raid0.o
00:02:46.173 LIB libspdk_bdev_null.a
00:02:46.173 CC module/bdev/nvme/bdev_nvme_rpc.o
00:02:46.173 CC module/bdev/split/vbdev_split_rpc.o
00:02:46.173 SO libspdk_bdev_null.so.6.0
00:02:46.173 SYMLINK libspdk_bdev_null.so
00:02:46.173 CC module/bdev/nvme/nvme_rpc.o
00:02:46.173 LIB libspdk_bdev_passthru.a
00:02:46.173 SO libspdk_bdev_passthru.so.6.0
00:02:46.173 CC module/bdev/nvme/bdev_mdns_client.o
00:02:46.173 CC module/bdev/raid/raid1.o
00:02:46.173 LIB libspdk_bdev_split.a
00:02:46.173 SYMLINK libspdk_bdev_passthru.so
00:02:46.173 CC module/bdev/zone_block/vbdev_zone_block_rpc.o
00:02:46.173 SO libspdk_bdev_split.so.6.0
00:02:46.431 CC module/bdev/nvme/vbdev_opal.o
00:02:46.431 CC module/bdev/nvme/vbdev_opal_rpc.o
00:02:46.431 SYMLINK libspdk_bdev_split.so
00:02:46.431 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o
00:02:46.431 LIB libspdk_bdev_zone_block.a
00:02:46.431 CC module/bdev/xnvme/bdev_xnvme.o
00:02:46.431 SO libspdk_bdev_zone_block.so.6.0
00:02:46.431 CC module/bdev/xnvme/bdev_xnvme_rpc.o
00:02:46.431 SYMLINK libspdk_bdev_zone_block.so
00:02:46.431 CC module/bdev/raid/concat.o
00:02:46.431 CC module/bdev/aio/bdev_aio.o
00:02:46.431 CC module/bdev/aio/bdev_aio_rpc.o
00:02:46.689 LIB libspdk_bdev_xnvme.a
00:02:46.690 CC module/bdev/ftl/bdev_ftl.o
00:02:46.690 SO libspdk_bdev_xnvme.so.3.0
00:02:46.690 CC module/bdev/iscsi/bdev_iscsi.o
00:02:46.690 CC module/bdev/iscsi/bdev_iscsi_rpc.o
00:02:46.690 CC module/bdev/ftl/bdev_ftl_rpc.o
00:02:46.690 SYMLINK libspdk_bdev_xnvme.so
00:02:46.690 CC module/bdev/virtio/bdev_virtio_scsi.o
00:02:46.690 CC module/bdev/virtio/bdev_virtio_blk.o
00:02:46.690 CC module/bdev/virtio/bdev_virtio_rpc.o
00:02:46.948 LIB libspdk_bdev_aio.a
00:02:46.948 SO libspdk_bdev_aio.so.6.0
00:02:46.948 LIB libspdk_bdev_ftl.a
00:02:46.948 SO libspdk_bdev_ftl.so.6.0
00:02:46.948 SYMLINK libspdk_bdev_aio.so
00:02:46.948 SYMLINK libspdk_bdev_ftl.so
00:02:46.948 LIB libspdk_bdev_iscsi.a
00:02:46.948 LIB libspdk_bdev_raid.a
00:02:46.948 SO libspdk_bdev_iscsi.so.6.0
00:02:46.948 SO libspdk_bdev_raid.so.6.0
00:02:46.948 SYMLINK libspdk_bdev_iscsi.so
00:02:47.206 SYMLINK libspdk_bdev_raid.so
00:02:47.206 LIB libspdk_bdev_virtio.a
00:02:47.464 SO libspdk_bdev_virtio.so.6.0
00:02:47.464 SYMLINK libspdk_bdev_virtio.so
00:02:47.723 LIB libspdk_bdev_nvme.a
00:02:47.723 SO libspdk_bdev_nvme.so.7.0
00:02:47.982 SYMLINK libspdk_bdev_nvme.so
00:02:48.240 CC module/event/subsystems/iobuf/iobuf.o
00:02:48.240 CC module/event/subsystems/iobuf/iobuf_rpc.o
00:02:48.240 CC module/event/subsystems/fsdev/fsdev.o
00:02:48.240 CC module/event/subsystems/vhost_blk/vhost_blk.o
00:02:48.240 CC module/event/subsystems/vmd/vmd.o
00:02:48.240 CC module/event/subsystems/vmd/vmd_rpc.o
00:02:48.240 CC module/event/subsystems/keyring/keyring.o
00:02:48.240 CC module/event/subsystems/scheduler/scheduler.o
00:02:48.240 CC module/event/subsystems/sock/sock.o
00:02:48.498 LIB libspdk_event_keyring.a
00:02:48.498 LIB libspdk_event_fsdev.a
00:02:48.498 LIB libspdk_event_vhost_blk.a
00:02:48.498 LIB libspdk_event_vmd.a
00:02:48.498 LIB libspdk_event_scheduler.a
00:02:48.498 LIB libspdk_event_iobuf.a
00:02:48.498 SO libspdk_event_keyring.so.1.0
00:02:48.498 SO libspdk_event_vhost_blk.so.3.0
00:02:48.498 SO libspdk_event_fsdev.so.1.0
00:02:48.498 SO libspdk_event_scheduler.so.4.0
00:02:48.498 SO libspdk_event_vmd.so.6.0
00:02:48.498 LIB libspdk_event_sock.a
00:02:48.498 SO libspdk_event_iobuf.so.3.0
00:02:48.498 SO libspdk_event_sock.so.5.0
00:02:48.498 SYMLINK libspdk_event_fsdev.so
00:02:48.498 SYMLINK libspdk_event_vhost_blk.so
00:02:48.498 SYMLINK libspdk_event_keyring.so
00:02:48.498 SYMLINK libspdk_event_scheduler.so
00:02:48.498 SYMLINK libspdk_event_vmd.so
00:02:48.498 SYMLINK libspdk_event_iobuf.so
00:02:48.498 SYMLINK libspdk_event_sock.so
00:02:48.756 CC module/event/subsystems/accel/accel.o
00:02:48.756 LIB libspdk_event_accel.a
00:02:48.756 SO libspdk_event_accel.so.6.0
00:02:49.015 SYMLINK libspdk_event_accel.so
00:02:49.015 CC module/event/subsystems/bdev/bdev.o
00:02:49.274 LIB libspdk_event_bdev.a
00:02:49.274 SO libspdk_event_bdev.so.6.0
00:02:49.274 SYMLINK libspdk_event_bdev.so
00:02:49.532 CC module/event/subsystems/nvmf/nvmf_tgt.o
00:02:49.532 CC module/event/subsystems/nvmf/nvmf_rpc.o
00:02:49.532 CC module/event/subsystems/ublk/ublk.o
00:02:49.532 CC module/event/subsystems/nbd/nbd.o
00:02:49.532 CC module/event/subsystems/scsi/scsi.o
00:02:49.532 LIB libspdk_event_nbd.a
00:02:49.532 SO libspdk_event_nbd.so.6.0
00:02:49.532 LIB libspdk_event_ublk.a
00:02:49.532 LIB libspdk_event_scsi.a
00:02:49.532 SO libspdk_event_ublk.so.3.0
00:02:49.532 LIB libspdk_event_nvmf.a
00:02:49.532 SO libspdk_event_scsi.so.6.0
00:02:49.532 SO libspdk_event_nvmf.so.6.0
00:02:49.532 SYMLINK libspdk_event_nbd.so
00:02:49.789 SYMLINK libspdk_event_ublk.so
00:02:49.789 SYMLINK libspdk_event_scsi.so
00:02:49.789 SYMLINK libspdk_event_nvmf.so
00:02:49.789 CC module/event/subsystems/iscsi/iscsi.o
00:02:49.789 CC module/event/subsystems/vhost_scsi/vhost_scsi.o
00:02:50.048 LIB libspdk_event_vhost_scsi.a
00:02:50.048 LIB libspdk_event_iscsi.a
00:02:50.048 SO libspdk_event_iscsi.so.6.0
00:02:50.048 SO libspdk_event_vhost_scsi.so.3.0
00:02:50.048 SYMLINK libspdk_event_vhost_scsi.so
00:02:50.048 SYMLINK libspdk_event_iscsi.so
00:02:50.308 SO libspdk.so.6.0
00:02:50.308 SYMLINK libspdk.so
00:02:50.308 CC test/rpc_client/rpc_client_test.o
00:02:50.308 TEST_HEADER include/spdk/accel.h
00:02:50.308 TEST_HEADER include/spdk/accel_module.h
00:02:50.308 CC app/trace_record/trace_record.o
00:02:50.308 CXX app/trace/trace.o
00:02:50.308 TEST_HEADER include/spdk/assert.h
00:02:50.308 TEST_HEADER include/spdk/barrier.h
00:02:50.308 TEST_HEADER include/spdk/base64.h
00:02:50.308 TEST_HEADER include/spdk/bdev.h
00:02:50.308 TEST_HEADER include/spdk/bdev_module.h
00:02:50.308 TEST_HEADER include/spdk/bdev_zone.h
00:02:50.308 TEST_HEADER include/spdk/bit_array.h
00:02:50.308 TEST_HEADER include/spdk/bit_pool.h
00:02:50.308 TEST_HEADER include/spdk/blob_bdev.h
00:02:50.308 TEST_HEADER include/spdk/blobfs_bdev.h
00:02:50.308 TEST_HEADER include/spdk/blobfs.h
00:02:50.308 TEST_HEADER include/spdk/blob.h
00:02:50.308 TEST_HEADER include/spdk/conf.h
00:02:50.308 TEST_HEADER
include/spdk/config.h 00:02:50.308 TEST_HEADER include/spdk/cpuset.h 00:02:50.308 TEST_HEADER include/spdk/crc16.h 00:02:50.308 TEST_HEADER include/spdk/crc32.h 00:02:50.308 TEST_HEADER include/spdk/crc64.h 00:02:50.308 TEST_HEADER include/spdk/dif.h 00:02:50.308 TEST_HEADER include/spdk/dma.h 00:02:50.308 TEST_HEADER include/spdk/endian.h 00:02:50.308 TEST_HEADER include/spdk/env_dpdk.h 00:02:50.308 TEST_HEADER include/spdk/env.h 00:02:50.308 TEST_HEADER include/spdk/event.h 00:02:50.308 TEST_HEADER include/spdk/fd_group.h 00:02:50.566 TEST_HEADER include/spdk/fd.h 00:02:50.566 TEST_HEADER include/spdk/file.h 00:02:50.566 TEST_HEADER include/spdk/fsdev.h 00:02:50.566 TEST_HEADER include/spdk/fsdev_module.h 00:02:50.566 TEST_HEADER include/spdk/ftl.h 00:02:50.566 TEST_HEADER include/spdk/fuse_dispatcher.h 00:02:50.566 TEST_HEADER include/spdk/gpt_spec.h 00:02:50.566 TEST_HEADER include/spdk/hexlify.h 00:02:50.566 CC examples/ioat/perf/perf.o 00:02:50.566 CC examples/util/zipf/zipf.o 00:02:50.566 TEST_HEADER include/spdk/histogram_data.h 00:02:50.566 CC test/thread/poller_perf/poller_perf.o 00:02:50.566 TEST_HEADER include/spdk/idxd.h 00:02:50.566 TEST_HEADER include/spdk/idxd_spec.h 00:02:50.566 TEST_HEADER include/spdk/init.h 00:02:50.566 TEST_HEADER include/spdk/ioat.h 00:02:50.566 TEST_HEADER include/spdk/ioat_spec.h 00:02:50.566 TEST_HEADER include/spdk/iscsi_spec.h 00:02:50.566 TEST_HEADER include/spdk/json.h 00:02:50.566 TEST_HEADER include/spdk/jsonrpc.h 00:02:50.566 TEST_HEADER include/spdk/keyring.h 00:02:50.566 TEST_HEADER include/spdk/keyring_module.h 00:02:50.566 TEST_HEADER include/spdk/likely.h 00:02:50.566 TEST_HEADER include/spdk/log.h 00:02:50.566 TEST_HEADER include/spdk/lvol.h 00:02:50.566 TEST_HEADER include/spdk/md5.h 00:02:50.566 TEST_HEADER include/spdk/memory.h 00:02:50.566 TEST_HEADER include/spdk/mmio.h 00:02:50.566 CC test/dma/test_dma/test_dma.o 00:02:50.566 TEST_HEADER include/spdk/nbd.h 00:02:50.566 CC test/app/bdev_svc/bdev_svc.o 00:02:50.566 TEST_HEADER include/spdk/net.h 00:02:50.566 TEST_HEADER include/spdk/notify.h 00:02:50.566 TEST_HEADER include/spdk/nvme.h 00:02:50.566 TEST_HEADER include/spdk/nvme_intel.h 00:02:50.566 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:50.566 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:50.566 TEST_HEADER include/spdk/nvme_spec.h 00:02:50.566 TEST_HEADER include/spdk/nvme_zns.h 00:02:50.566 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:50.566 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:50.566 TEST_HEADER include/spdk/nvmf.h 00:02:50.566 TEST_HEADER include/spdk/nvmf_spec.h 00:02:50.566 TEST_HEADER include/spdk/nvmf_transport.h 00:02:50.566 TEST_HEADER include/spdk/opal.h 00:02:50.566 TEST_HEADER include/spdk/opal_spec.h 00:02:50.566 TEST_HEADER include/spdk/pci_ids.h 00:02:50.566 TEST_HEADER include/spdk/pipe.h 00:02:50.566 TEST_HEADER include/spdk/queue.h 00:02:50.566 TEST_HEADER include/spdk/reduce.h 00:02:50.566 TEST_HEADER include/spdk/rpc.h 00:02:50.566 CC test/env/mem_callbacks/mem_callbacks.o 00:02:50.566 TEST_HEADER include/spdk/scheduler.h 00:02:50.566 TEST_HEADER include/spdk/scsi.h 00:02:50.566 TEST_HEADER include/spdk/scsi_spec.h 00:02:50.566 TEST_HEADER include/spdk/sock.h 00:02:50.566 TEST_HEADER include/spdk/stdinc.h 00:02:50.566 TEST_HEADER include/spdk/string.h 00:02:50.566 TEST_HEADER include/spdk/thread.h 00:02:50.566 TEST_HEADER include/spdk/trace.h 00:02:50.566 TEST_HEADER include/spdk/trace_parser.h 00:02:50.566 TEST_HEADER include/spdk/tree.h 00:02:50.566 TEST_HEADER include/spdk/ublk.h 00:02:50.566 
TEST_HEADER include/spdk/util.h 00:02:50.566 TEST_HEADER include/spdk/uuid.h 00:02:50.566 TEST_HEADER include/spdk/version.h 00:02:50.566 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:50.566 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:50.566 TEST_HEADER include/spdk/vhost.h 00:02:50.566 TEST_HEADER include/spdk/vmd.h 00:02:50.566 TEST_HEADER include/spdk/xor.h 00:02:50.566 TEST_HEADER include/spdk/zipf.h 00:02:50.566 CXX test/cpp_headers/accel.o 00:02:50.566 LINK rpc_client_test 00:02:50.566 LINK poller_perf 00:02:50.566 LINK zipf 00:02:50.566 LINK spdk_trace_record 00:02:50.566 LINK bdev_svc 00:02:50.566 LINK spdk_trace 00:02:50.566 LINK ioat_perf 00:02:50.566 CXX test/cpp_headers/accel_module.o 00:02:50.824 CC examples/ioat/verify/verify.o 00:02:50.824 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:50.824 CXX test/cpp_headers/assert.o 00:02:50.824 CC test/event/event_perf/event_perf.o 00:02:50.824 CC app/nvmf_tgt/nvmf_main.o 00:02:50.824 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:50.824 CC examples/sock/hello_world/hello_sock.o 00:02:50.824 CC examples/thread/thread/thread_ex.o 00:02:50.824 CXX test/cpp_headers/barrier.o 00:02:50.824 LINK interrupt_tgt 00:02:50.824 LINK test_dma 00:02:50.824 LINK verify 00:02:51.084 LINK mem_callbacks 00:02:51.084 LINK event_perf 00:02:51.084 LINK nvmf_tgt 00:02:51.084 CXX test/cpp_headers/base64.o 00:02:51.084 CC test/event/reactor/reactor.o 00:02:51.084 LINK hello_sock 00:02:51.084 LINK thread 00:02:51.084 CC test/env/vtophys/vtophys.o 00:02:51.084 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:51.084 CXX test/cpp_headers/bdev.o 00:02:51.341 LINK reactor 00:02:51.341 CC examples/vmd/lsvmd/lsvmd.o 00:02:51.341 CC app/iscsi_tgt/iscsi_tgt.o 00:02:51.341 CC examples/idxd/perf/perf.o 00:02:51.341 LINK nvme_fuzz 00:02:51.341 LINK vtophys 00:02:51.341 LINK env_dpdk_post_init 00:02:51.341 LINK lsvmd 00:02:51.341 CC test/event/reactor_perf/reactor_perf.o 00:02:51.341 CXX test/cpp_headers/bdev_module.o 00:02:51.341 CC test/env/memory/memory_ut.o 00:02:51.341 CC test/env/pci/pci_ut.o 00:02:51.341 LINK iscsi_tgt 00:02:51.600 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:51.600 LINK reactor_perf 00:02:51.600 CXX test/cpp_headers/bdev_zone.o 00:02:51.600 CC examples/vmd/led/led.o 00:02:51.600 CC test/accel/dif/dif.o 00:02:51.600 LINK idxd_perf 00:02:51.600 CC examples/nvme/hello_world/hello_world.o 00:02:51.600 LINK led 00:02:51.600 CC test/event/app_repeat/app_repeat.o 00:02:51.600 CXX test/cpp_headers/bit_array.o 00:02:51.600 LINK pci_ut 00:02:51.600 CC app/spdk_tgt/spdk_tgt.o 00:02:51.859 CC examples/nvme/reconnect/reconnect.o 00:02:51.859 LINK hello_world 00:02:51.859 LINK app_repeat 00:02:51.859 CXX test/cpp_headers/bit_pool.o 00:02:51.859 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:51.859 LINK spdk_tgt 00:02:51.859 CXX test/cpp_headers/blob_bdev.o 00:02:51.859 CC examples/nvme/arbitration/arbitration.o 00:02:52.118 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:52.118 CC test/event/scheduler/scheduler.o 00:02:52.118 LINK reconnect 00:02:52.118 CXX test/cpp_headers/blobfs_bdev.o 00:02:52.118 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:52.118 CC app/spdk_lspci/spdk_lspci.o 00:02:52.397 CXX test/cpp_headers/blobfs.o 00:02:52.397 LINK scheduler 00:02:52.397 LINK dif 00:02:52.397 LINK spdk_lspci 00:02:52.397 LINK arbitration 00:02:52.397 CC examples/nvme/hotplug/hotplug.o 00:02:52.397 LINK nvme_manage 00:02:52.397 CXX test/cpp_headers/blob.o 00:02:52.397 LINK memory_ut 00:02:52.397 CXX test/cpp_headers/conf.o 00:02:52.397 CC 
examples/nvme/cmb_copy/cmb_copy.o 00:02:52.397 CC app/spdk_nvme_perf/perf.o 00:02:52.397 CC app/spdk_nvme_identify/identify.o 00:02:52.655 LINK vhost_fuzz 00:02:52.655 CC test/app/histogram_perf/histogram_perf.o 00:02:52.655 CXX test/cpp_headers/config.o 00:02:52.655 LINK hotplug 00:02:52.655 CC test/app/jsoncat/jsoncat.o 00:02:52.655 CXX test/cpp_headers/cpuset.o 00:02:52.655 LINK cmb_copy 00:02:52.655 CXX test/cpp_headers/crc16.o 00:02:52.655 CC test/blobfs/mkfs/mkfs.o 00:02:52.655 LINK histogram_perf 00:02:52.655 CXX test/cpp_headers/crc32.o 00:02:52.655 LINK jsoncat 00:02:52.913 LINK iscsi_fuzz 00:02:52.913 CC test/app/stub/stub.o 00:02:52.913 CC app/spdk_nvme_discover/discovery_aer.o 00:02:52.914 LINK mkfs 00:02:52.914 CXX test/cpp_headers/crc64.o 00:02:52.914 CC examples/nvme/abort/abort.o 00:02:52.914 CC app/spdk_top/spdk_top.o 00:02:52.914 LINK stub 00:02:52.914 CC app/vhost/vhost.o 00:02:52.914 CXX test/cpp_headers/dif.o 00:02:52.914 CXX test/cpp_headers/dma.o 00:02:52.914 LINK spdk_nvme_discover 00:02:53.172 LINK vhost 00:02:53.172 CXX test/cpp_headers/endian.o 00:02:53.172 CXX test/cpp_headers/env_dpdk.o 00:02:53.172 LINK abort 00:02:53.172 LINK spdk_nvme_perf 00:02:53.172 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:53.172 CXX test/cpp_headers/env.o 00:02:53.172 CC test/lvol/esnap/esnap.o 00:02:53.172 CC app/spdk_dd/spdk_dd.o 00:02:53.172 CXX test/cpp_headers/event.o 00:02:53.430 LINK spdk_nvme_identify 00:02:53.430 LINK pmr_persistence 00:02:53.430 CC examples/accel/perf/accel_perf.o 00:02:53.430 CC examples/fsdev/hello_world/hello_fsdev.o 00:02:53.430 CC examples/blob/hello_world/hello_blob.o 00:02:53.430 CXX test/cpp_headers/fd_group.o 00:02:53.430 CC test/nvme/aer/aer.o 00:02:53.430 CC test/nvme/reset/reset.o 00:02:53.691 CXX test/cpp_headers/fd.o 00:02:53.691 CC test/nvme/sgl/sgl.o 00:02:53.691 LINK spdk_dd 00:02:53.691 LINK hello_blob 00:02:53.691 LINK hello_fsdev 00:02:53.691 CXX test/cpp_headers/file.o 00:02:53.691 LINK reset 00:02:53.691 LINK aer 00:02:53.691 LINK spdk_top 00:02:53.949 LINK sgl 00:02:53.949 CC test/nvme/e2edp/nvme_dp.o 00:02:53.949 CXX test/cpp_headers/fsdev.o 00:02:53.949 LINK accel_perf 00:02:53.949 CC examples/blob/cli/blobcli.o 00:02:53.949 CC app/fio/nvme/fio_plugin.o 00:02:53.949 CC test/nvme/overhead/overhead.o 00:02:53.949 CXX test/cpp_headers/fsdev_module.o 00:02:53.949 CC test/nvme/err_injection/err_injection.o 00:02:53.949 CC test/nvme/startup/startup.o 00:02:53.949 CC test/bdev/bdevio/bdevio.o 00:02:54.206 CC test/nvme/reserve/reserve.o 00:02:54.206 CXX test/cpp_headers/ftl.o 00:02:54.206 LINK nvme_dp 00:02:54.206 LINK err_injection 00:02:54.206 LINK startup 00:02:54.206 LINK overhead 00:02:54.206 CXX test/cpp_headers/fuse_dispatcher.o 00:02:54.206 CC test/nvme/simple_copy/simple_copy.o 00:02:54.206 LINK blobcli 00:02:54.206 LINK reserve 00:02:54.206 CXX test/cpp_headers/gpt_spec.o 00:02:54.464 LINK spdk_nvme 00:02:54.464 CC test/nvme/connect_stress/connect_stress.o 00:02:54.464 CC app/fio/bdev/fio_plugin.o 00:02:54.464 CXX test/cpp_headers/hexlify.o 00:02:54.464 CXX test/cpp_headers/histogram_data.o 00:02:54.464 LINK bdevio 00:02:54.464 CXX test/cpp_headers/idxd.o 00:02:54.464 CC test/nvme/boot_partition/boot_partition.o 00:02:54.464 LINK simple_copy 00:02:54.464 CC examples/bdev/hello_world/hello_bdev.o 00:02:54.464 LINK connect_stress 00:02:54.721 CXX test/cpp_headers/idxd_spec.o 00:02:54.721 LINK boot_partition 00:02:54.721 CC test/nvme/compliance/nvme_compliance.o 00:02:54.721 CC test/nvme/doorbell_aers/doorbell_aers.o 
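A note for readers skimming the build portion of this log: the quiet-make records follow a fixed vocabulary. CC/CXX lines are individual object compiles, LIB lines create static archives, SO checks followed by SYMLINK records install the unversioned shared-object links, and LINK lines produce the test executables. A rough shell illustration of one compile-and-link pair (file names, flags, and paths here are hypothetical, not taken from this log):

    # Illustrative sketch only -- approximately what a "CC" record and a
    # "LINK" record in the output above correspond to.
    cc -Iinclude -O2 -c test/nvme/fdp/fdp.c -o fdp.o   # printed as "CC .../fdp.o"
    cc fdp.o -Lbuild/lib -lspdk_nvme -o fdp            # printed as "LINK fdp"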
00:02:54.721 CC test/nvme/fused_ordering/fused_ordering.o 00:02:54.721 CC test/nvme/fdp/fdp.o 00:02:54.721 LINK hello_bdev 00:02:54.721 CC examples/bdev/bdevperf/bdevperf.o 00:02:54.721 CXX test/cpp_headers/init.o 00:02:54.721 CXX test/cpp_headers/ioat.o 00:02:54.721 LINK doorbell_aers 00:02:54.721 LINK fused_ordering 00:02:54.721 CXX test/cpp_headers/ioat_spec.o 00:02:54.979 LINK nvme_compliance 00:02:54.979 LINK spdk_bdev 00:02:54.979 CXX test/cpp_headers/iscsi_spec.o 00:02:54.979 CXX test/cpp_headers/json.o 00:02:54.979 CXX test/cpp_headers/jsonrpc.o 00:02:54.979 CXX test/cpp_headers/keyring.o 00:02:54.979 LINK fdp 00:02:54.979 CC test/nvme/cuse/cuse.o 00:02:54.979 CXX test/cpp_headers/keyring_module.o 00:02:54.979 CXX test/cpp_headers/likely.o 00:02:54.979 CXX test/cpp_headers/log.o 00:02:54.979 CXX test/cpp_headers/lvol.o 00:02:54.979 CXX test/cpp_headers/md5.o 00:02:54.979 CXX test/cpp_headers/memory.o 00:02:54.979 CXX test/cpp_headers/mmio.o 00:02:55.236 CXX test/cpp_headers/nbd.o 00:02:55.237 CXX test/cpp_headers/net.o 00:02:55.237 CXX test/cpp_headers/notify.o 00:02:55.237 CXX test/cpp_headers/nvme.o 00:02:55.237 CXX test/cpp_headers/nvme_intel.o 00:02:55.237 CXX test/cpp_headers/nvme_ocssd.o 00:02:55.237 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:55.237 CXX test/cpp_headers/nvme_spec.o 00:02:55.237 CXX test/cpp_headers/nvme_zns.o 00:02:55.237 CXX test/cpp_headers/nvmf_cmd.o 00:02:55.237 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:55.495 CXX test/cpp_headers/nvmf.o 00:02:55.495 CXX test/cpp_headers/nvmf_spec.o 00:02:55.495 CXX test/cpp_headers/nvmf_transport.o 00:02:55.495 CXX test/cpp_headers/opal.o 00:02:55.495 LINK bdevperf 00:02:55.495 CXX test/cpp_headers/opal_spec.o 00:02:55.495 CXX test/cpp_headers/pci_ids.o 00:02:55.495 CXX test/cpp_headers/pipe.o 00:02:55.495 CXX test/cpp_headers/queue.o 00:02:55.495 CXX test/cpp_headers/reduce.o 00:02:55.495 CXX test/cpp_headers/rpc.o 00:02:55.495 CXX test/cpp_headers/scheduler.o 00:02:55.495 CXX test/cpp_headers/scsi.o 00:02:55.495 CXX test/cpp_headers/scsi_spec.o 00:02:55.495 CXX test/cpp_headers/sock.o 00:02:55.495 CXX test/cpp_headers/stdinc.o 00:02:55.753 CXX test/cpp_headers/string.o 00:02:55.753 CXX test/cpp_headers/thread.o 00:02:55.753 CXX test/cpp_headers/trace_parser.o 00:02:55.753 CXX test/cpp_headers/trace.o 00:02:55.753 CXX test/cpp_headers/tree.o 00:02:55.753 CXX test/cpp_headers/ublk.o 00:02:55.754 CXX test/cpp_headers/util.o 00:02:55.754 CXX test/cpp_headers/uuid.o 00:02:55.754 CXX test/cpp_headers/version.o 00:02:55.754 CC examples/nvmf/nvmf/nvmf.o 00:02:55.754 CXX test/cpp_headers/vfio_user_pci.o 00:02:55.754 CXX test/cpp_headers/vfio_user_spec.o 00:02:55.754 CXX test/cpp_headers/vhost.o 00:02:55.754 CXX test/cpp_headers/vmd.o 00:02:55.754 CXX test/cpp_headers/xor.o 00:02:55.754 CXX test/cpp_headers/zipf.o 00:02:56.012 LINK cuse 00:02:56.012 LINK nvmf 00:02:57.913 LINK esnap 00:02:58.171 00:02:58.171 real 1m4.789s 00:02:58.171 user 6m5.486s 00:02:58.171 sys 1m5.864s 00:02:58.171 01:15:54 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:58.171 01:15:54 make -- common/autotest_common.sh@10 -- $ set +x 00:02:58.171 ************************************ 00:02:58.171 END TEST make 00:02:58.171 ************************************ 00:02:58.171 01:15:54 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:58.171 01:15:54 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:58.171 01:15:54 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:58.171 01:15:54 -- pm/common@42 -- $ for monitor in 
"${MONITOR_RESOURCES[@]}" 00:02:58.171 01:15:54 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:02:58.171 01:15:54 -- pm/common@44 -- $ pid=5062 00:02:58.171 01:15:54 -- pm/common@50 -- $ kill -TERM 5062 00:02:58.171 01:15:54 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:58.171 01:15:54 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:02:58.171 01:15:54 -- pm/common@44 -- $ pid=5063 00:02:58.171 01:15:54 -- pm/common@50 -- $ kill -TERM 5063 00:02:58.431 01:15:54 -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:02:58.431 01:15:54 -- common/autotest_common.sh@1681 -- # lcov --version 00:02:58.431 01:15:54 -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:02:58.431 01:15:54 -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:02:58.431 01:15:54 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:02:58.431 01:15:54 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:02:58.431 01:15:54 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:02:58.431 01:15:54 -- scripts/common.sh@336 -- # IFS=.-: 00:02:58.431 01:15:54 -- scripts/common.sh@336 -- # read -ra ver1 00:02:58.431 01:15:54 -- scripts/common.sh@337 -- # IFS=.-: 00:02:58.431 01:15:54 -- scripts/common.sh@337 -- # read -ra ver2 00:02:58.431 01:15:54 -- scripts/common.sh@338 -- # local 'op=<' 00:02:58.431 01:15:54 -- scripts/common.sh@340 -- # ver1_l=2 00:02:58.431 01:15:54 -- scripts/common.sh@341 -- # ver2_l=1 00:02:58.431 01:15:54 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:02:58.431 01:15:54 -- scripts/common.sh@344 -- # case "$op" in 00:02:58.431 01:15:54 -- scripts/common.sh@345 -- # : 1 00:02:58.431 01:15:54 -- scripts/common.sh@364 -- # (( v = 0 )) 00:02:58.431 01:15:54 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:58.431 01:15:54 -- scripts/common.sh@365 -- # decimal 1 00:02:58.431 01:15:54 -- scripts/common.sh@353 -- # local d=1 00:02:58.431 01:15:54 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:58.431 01:15:54 -- scripts/common.sh@355 -- # echo 1 00:02:58.431 01:15:54 -- scripts/common.sh@365 -- # ver1[v]=1 00:02:58.431 01:15:54 -- scripts/common.sh@366 -- # decimal 2 00:02:58.431 01:15:54 -- scripts/common.sh@353 -- # local d=2 00:02:58.431 01:15:54 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:58.431 01:15:54 -- scripts/common.sh@355 -- # echo 2 00:02:58.431 01:15:54 -- scripts/common.sh@366 -- # ver2[v]=2 00:02:58.431 01:15:54 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:02:58.431 01:15:54 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:02:58.431 01:15:54 -- scripts/common.sh@368 -- # return 0 00:02:58.431 01:15:54 -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:58.431 01:15:54 -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:02:58.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:58.431 --rc genhtml_branch_coverage=1 00:02:58.431 --rc genhtml_function_coverage=1 00:02:58.431 --rc genhtml_legend=1 00:02:58.431 --rc geninfo_all_blocks=1 00:02:58.431 --rc geninfo_unexecuted_blocks=1 00:02:58.431 00:02:58.431 ' 00:02:58.431 01:15:54 -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:02:58.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:58.431 --rc genhtml_branch_coverage=1 00:02:58.431 --rc genhtml_function_coverage=1 00:02:58.431 --rc genhtml_legend=1 00:02:58.431 --rc geninfo_all_blocks=1 00:02:58.431 --rc geninfo_unexecuted_blocks=1 00:02:58.431 00:02:58.431 ' 00:02:58.431 01:15:54 -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:02:58.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:58.431 --rc genhtml_branch_coverage=1 00:02:58.431 --rc genhtml_function_coverage=1 00:02:58.431 --rc genhtml_legend=1 00:02:58.431 --rc geninfo_all_blocks=1 00:02:58.431 --rc geninfo_unexecuted_blocks=1 00:02:58.431 00:02:58.431 ' 00:02:58.431 01:15:54 -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:02:58.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:58.431 --rc genhtml_branch_coverage=1 00:02:58.431 --rc genhtml_function_coverage=1 00:02:58.431 --rc genhtml_legend=1 00:02:58.431 --rc geninfo_all_blocks=1 00:02:58.431 --rc geninfo_unexecuted_blocks=1 00:02:58.431 00:02:58.431 ' 00:02:58.431 01:15:54 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:02:58.431 01:15:54 -- nvmf/common.sh@7 -- # uname -s 00:02:58.431 01:15:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:58.431 01:15:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:58.431 01:15:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:58.431 01:15:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:58.431 01:15:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:58.431 01:15:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:58.431 01:15:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:58.431 01:15:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:58.431 01:15:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:58.431 01:15:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:58.431 01:15:54 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9a58713b-7f88-4222-8855-63517e9111a3 00:02:58.431 
01:15:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=9a58713b-7f88-4222-8855-63517e9111a3 00:02:58.431 01:15:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:58.431 01:15:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:58.431 01:15:54 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:02:58.431 01:15:54 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:58.431 01:15:54 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:58.431 01:15:54 -- scripts/common.sh@15 -- # shopt -s extglob 00:02:58.431 01:15:54 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:58.431 01:15:54 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:58.431 01:15:54 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:58.431 01:15:54 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:58.432 01:15:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:58.432 01:15:54 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:58.432 01:15:54 -- paths/export.sh@5 -- # export PATH 00:02:58.432 01:15:54 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:58.432 01:15:54 -- nvmf/common.sh@51 -- # : 0 00:02:58.432 01:15:54 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:02:58.432 01:15:54 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:02:58.432 01:15:54 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:58.432 01:15:54 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:58.432 01:15:54 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:58.432 01:15:54 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:02:58.432 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:02:58.432 01:15:54 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:02:58.432 01:15:54 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:02:58.432 01:15:54 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:02:58.432 01:15:54 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:58.432 01:15:54 -- spdk/autotest.sh@32 -- # uname -s 00:02:58.432 01:15:54 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:58.432 01:15:54 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:58.432 01:15:54 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:02:58.432 01:15:54 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:02:58.432 01:15:54 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:02:58.432 01:15:54 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:58.432 01:15:54 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:58.432 01:15:54 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:58.432 01:15:54 -- spdk/autotest.sh@48 -- # udevadm_pid=54580 00:02:58.432 01:15:54 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:58.432 01:15:54 -- pm/common@17 -- # local monitor 00:02:58.432 01:15:54 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:58.432 01:15:54 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:58.432 01:15:54 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:58.432 01:15:54 -- pm/common@25 -- # sleep 1 00:02:58.432 01:15:54 -- pm/common@21 -- # date +%s 00:02:58.432 01:15:54 -- pm/common@21 -- # date +%s 00:02:58.432 01:15:54 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1727486154 00:02:58.432 01:15:54 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1727486154 00:02:58.432 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1727486154_collect-vmstat.pm.log 00:02:58.432 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1727486154_collect-cpu-load.pm.log 00:02:59.367 01:15:55 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:59.367 01:15:55 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:59.367 01:15:55 -- common/autotest_common.sh@724 -- # xtrace_disable 00:02:59.367 01:15:55 -- common/autotest_common.sh@10 -- # set +x 00:02:59.367 01:15:55 -- spdk/autotest.sh@59 -- # create_test_list 00:02:59.367 01:15:55 -- common/autotest_common.sh@748 -- # xtrace_disable 00:02:59.367 01:15:55 -- common/autotest_common.sh@10 -- # set +x 00:02:59.626 01:15:55 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:02:59.626 01:15:55 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:02:59.626 01:15:55 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:02:59.626 01:15:55 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:02:59.626 01:15:55 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:02:59.626 01:15:55 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:59.626 01:15:55 -- common/autotest_common.sh@1455 -- # uname 00:02:59.626 01:15:55 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:02:59.626 01:15:55 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:59.626 01:15:55 -- common/autotest_common.sh@1475 -- # uname 00:02:59.626 01:15:55 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:02:59.626 01:15:55 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:02:59.626 01:15:55 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:02:59.626 lcov: LCOV version 1.15 00:02:59.626 01:15:55 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:14.523 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:14.523 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:03:29.426 01:16:22 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:29.426 01:16:22 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:29.426 01:16:22 -- common/autotest_common.sh@10 -- # set +x 00:03:29.426 01:16:22 -- spdk/autotest.sh@78 -- # rm -f 00:03:29.426 01:16:22 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:29.427 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:29.427 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:03:29.427 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:03:29.427 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:03:29.427 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:03:29.427 01:16:23 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:03:29.427 01:16:23 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:03:29.427 01:16:23 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:03:29.427 01:16:23 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:03:29.427 01:16:23 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:29.427 01:16:23 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:03:29.427 01:16:23 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:03:29.427 01:16:23 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:29.427 01:16:23 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:29.427 01:16:23 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:29.427 01:16:23 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1c1n1 00:03:29.427 01:16:23 -- common/autotest_common.sh@1648 -- # local device=nvme1c1n1 00:03:29.427 01:16:23 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1c1n1/queue/zoned ]] 00:03:29.427 01:16:23 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:29.427 01:16:23 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:29.427 01:16:23 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:03:29.427 01:16:23 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:03:29.427 01:16:23 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:29.427 01:16:23 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:29.427 01:16:23 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:29.427 01:16:23 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:03:29.427 01:16:23 -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:03:29.427 01:16:23 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:03:29.427 01:16:23 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:29.427 01:16:23 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:29.427 01:16:23 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:03:29.427 01:16:23 -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:03:29.427 01:16:23 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:03:29.427 
01:16:23 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:29.427 01:16:23 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:29.427 01:16:23 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n2 00:03:29.427 01:16:23 -- common/autotest_common.sh@1648 -- # local device=nvme3n2 00:03:29.427 01:16:23 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n2/queue/zoned ]] 00:03:29.427 01:16:23 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:29.427 01:16:23 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:29.427 01:16:23 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n3 00:03:29.427 01:16:23 -- common/autotest_common.sh@1648 -- # local device=nvme3n3 00:03:29.427 01:16:23 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n3/queue/zoned ]] 00:03:29.427 01:16:23 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:29.427 01:16:23 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:29.427 01:16:23 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:29.427 01:16:23 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:29.427 01:16:23 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:29.427 01:16:23 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:29.427 01:16:23 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:29.427 No valid GPT data, bailing 00:03:29.427 01:16:23 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:29.427 01:16:23 -- scripts/common.sh@394 -- # pt= 00:03:29.427 01:16:23 -- scripts/common.sh@395 -- # return 1 00:03:29.427 01:16:23 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:29.427 1+0 records in 00:03:29.427 1+0 records out 00:03:29.427 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0105833 s, 99.1 MB/s 00:03:29.427 01:16:23 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:29.427 01:16:23 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:29.427 01:16:23 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:03:29.427 01:16:23 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:03:29.427 01:16:23 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:03:29.427 No valid GPT data, bailing 00:03:29.427 01:16:23 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:29.427 01:16:23 -- scripts/common.sh@394 -- # pt= 00:03:29.427 01:16:23 -- scripts/common.sh@395 -- # return 1 00:03:29.427 01:16:23 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:03:29.427 1+0 records in 00:03:29.427 1+0 records out 00:03:29.427 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00420461 s, 249 MB/s 00:03:29.427 01:16:23 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:29.427 01:16:23 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:29.427 01:16:23 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:03:29.427 01:16:23 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:03:29.427 01:16:23 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:03:29.427 No valid GPT data, bailing 00:03:29.427 01:16:23 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:03:29.427 01:16:23 -- scripts/common.sh@394 -- # pt= 00:03:29.427 01:16:23 -- scripts/common.sh@395 -- # return 1 00:03:29.427 01:16:23 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:03:29.427 1+0 
records in 00:03:29.427 1+0 records out 00:03:29.427 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00350665 s, 299 MB/s 00:03:29.427 01:16:23 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:29.427 01:16:23 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:29.427 01:16:23 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:03:29.427 01:16:23 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:03:29.427 01:16:23 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:03:29.427 No valid GPT data, bailing 00:03:29.427 01:16:23 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:03:29.427 01:16:23 -- scripts/common.sh@394 -- # pt= 00:03:29.427 01:16:23 -- scripts/common.sh@395 -- # return 1 00:03:29.427 01:16:23 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:03:29.427 1+0 records in 00:03:29.427 1+0 records out 00:03:29.427 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00409572 s, 256 MB/s 00:03:29.427 01:16:23 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:29.427 01:16:23 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:29.427 01:16:23 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n2 00:03:29.427 01:16:23 -- scripts/common.sh@381 -- # local block=/dev/nvme3n2 pt 00:03:29.427 01:16:23 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n2 00:03:29.427 No valid GPT data, bailing 00:03:29.427 01:16:23 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n2 00:03:29.427 01:16:23 -- scripts/common.sh@394 -- # pt= 00:03:29.427 01:16:23 -- scripts/common.sh@395 -- # return 1 00:03:29.427 01:16:23 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n2 bs=1M count=1 00:03:29.427 1+0 records in 00:03:29.427 1+0 records out 00:03:29.427 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00501765 s, 209 MB/s 00:03:29.427 01:16:23 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:29.427 01:16:23 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:29.427 01:16:23 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n3 00:03:29.427 01:16:23 -- scripts/common.sh@381 -- # local block=/dev/nvme3n3 pt 00:03:29.427 01:16:23 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n3 00:03:29.427 No valid GPT data, bailing 00:03:29.427 01:16:23 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n3 00:03:29.427 01:16:23 -- scripts/common.sh@394 -- # pt= 00:03:29.427 01:16:23 -- scripts/common.sh@395 -- # return 1 00:03:29.427 01:16:23 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n3 bs=1M count=1 00:03:29.427 1+0 records in 00:03:29.427 1+0 records out 00:03:29.427 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00433184 s, 242 MB/s 00:03:29.427 01:16:23 -- spdk/autotest.sh@105 -- # sync 00:03:29.427 01:16:24 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:29.427 01:16:24 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:29.427 01:16:24 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:29.685 01:16:25 -- spdk/autotest.sh@111 -- # uname -s 00:03:29.685 01:16:25 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:29.685 01:16:25 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:03:29.685 01:16:25 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:30.252 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:30.510 
Hugepages 00:03:30.510 node hugesize free / total 00:03:30.510 node0 1048576kB 0 / 0 00:03:30.510 node0 2048kB 0 / 0 00:03:30.510 00:03:30.510 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:30.510 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:03:30.510 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:03:30.769 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme2 nvme2n1 00:03:30.769 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme3 nvme3n1 nvme3n2 nvme3n3 00:03:30.769 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:03:30.769 01:16:26 -- spdk/autotest.sh@117 -- # uname -s 00:03:30.769 01:16:26 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:03:30.769 01:16:26 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:03:30.769 01:16:26 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:31.337 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:31.596 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:03:31.596 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:03:31.596 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:03:31.854 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:03:31.855 01:16:27 -- common/autotest_common.sh@1515 -- # sleep 1 00:03:32.791 01:16:28 -- common/autotest_common.sh@1516 -- # bdfs=() 00:03:32.791 01:16:28 -- common/autotest_common.sh@1516 -- # local bdfs 00:03:32.791 01:16:28 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:03:32.791 01:16:28 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:03:32.791 01:16:28 -- common/autotest_common.sh@1496 -- # bdfs=() 00:03:32.791 01:16:28 -- common/autotest_common.sh@1496 -- # local bdfs 00:03:32.791 01:16:28 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:32.791 01:16:28 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:03:32.791 01:16:28 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:03:32.791 01:16:28 -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:03:32.791 01:16:28 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:03:32.791 01:16:28 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:33.050 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:33.308 Waiting for block devices as requested 00:03:33.309 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:03:33.309 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:03:33.309 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:03:33.567 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:03:38.923 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:03:38.923 01:16:34 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:03:38.923 01:16:34 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:03:38.924 01:16:34 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:03:38.924 01:16:34 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:03:38.924 01:16:34 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:03:38.924 01:16:34 -- common/autotest_common.sh@1486 -- # 
[[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:03:38.924 01:16:34 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:03:38.924 01:16:34 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme1 00:03:38.924 01:16:34 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 00:03:38.924 01:16:34 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:03:38.924 01:16:34 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:03:38.924 01:16:34 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:03:38.924 01:16:34 -- common/autotest_common.sh@1529 -- # grep oacs 00:03:38.924 01:16:34 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:03:38.924 01:16:34 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:03:38.924 01:16:34 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:03:38.924 01:16:34 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:03:38.924 01:16:34 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:03:38.924 01:16:34 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:03:38.924 01:16:34 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:03:38.924 01:16:34 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:03:38.924 01:16:34 -- common/autotest_common.sh@1541 -- # continue 00:03:38.924 01:16:34 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:03:38.924 01:16:34 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:03:38.924 01:16:34 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:03:38.924 01:16:34 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:03:38.924 01:16:34 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:03:38.924 01:16:34 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:03:38.924 01:16:34 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:03:38.924 01:16:34 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:03:38.924 01:16:34 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:03:38.924 01:16:34 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:03:38.924 01:16:34 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:03:38.924 01:16:34 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:03:38.924 01:16:34 -- common/autotest_common.sh@1529 -- # grep oacs 00:03:38.924 01:16:34 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:03:38.924 01:16:34 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:03:38.924 01:16:34 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:03:38.924 01:16:34 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:03:38.924 01:16:34 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:03:38.924 01:16:34 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:03:38.924 01:16:34 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:03:38.924 01:16:34 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:03:38.924 01:16:34 -- common/autotest_common.sh@1541 -- # continue 00:03:38.924 01:16:34 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:03:38.924 01:16:34 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:03:38.924 01:16:34 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 
/sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:03:38.924 01:16:34 -- common/autotest_common.sh@1485 -- # grep 0000:00:12.0/nvme/nvme 00:03:38.924 01:16:34 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:03:38.924 01:16:34 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:03:38.924 01:16:34 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:03:38.924 01:16:34 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme2 00:03:38.924 01:16:34 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme2 00:03:38.924 01:16:34 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme2 ]] 00:03:38.924 01:16:34 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme2 00:03:38.924 01:16:34 -- common/autotest_common.sh@1529 -- # grep oacs 00:03:38.924 01:16:34 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:03:38.924 01:16:34 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:03:38.924 01:16:34 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:03:38.924 01:16:34 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:03:38.924 01:16:34 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme2 00:03:38.924 01:16:34 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:03:38.924 01:16:34 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:03:38.924 01:16:34 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:03:38.924 01:16:34 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:03:38.924 01:16:34 -- common/autotest_common.sh@1541 -- # continue 00:03:38.924 01:16:34 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:03:38.924 01:16:34 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:03:38.924 01:16:34 -- common/autotest_common.sh@1485 -- # grep 0000:00:13.0/nvme/nvme 00:03:38.924 01:16:34 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:03:38.924 01:16:34 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:03:38.924 01:16:34 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:03:38.924 01:16:34 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:03:38.924 01:16:34 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme3 00:03:38.924 01:16:34 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme3 00:03:38.924 01:16:34 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme3 ]] 00:03:38.924 01:16:34 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:03:38.924 01:16:34 -- common/autotest_common.sh@1529 -- # grep oacs 00:03:38.924 01:16:34 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme3 00:03:38.924 01:16:34 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:03:38.924 01:16:34 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:03:38.924 01:16:34 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:03:38.924 01:16:34 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme3 00:03:38.924 01:16:34 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:03:38.924 01:16:34 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:03:38.924 01:16:34 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:03:38.924 01:16:34 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 
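The xtrace above is the pre-cleanup pass deciding, controller by controller, whether a namespace revert is needed: it pulls the OACS word out of `nvme id-ctrl`, masks bit 3 (namespace management, 0x8; note 0x12a & 0x8 = 8, matching the oacs_ns_manage=8 seen above), and then skips any controller whose unallocated capacity (unvmcap) is already zero. A condensed sketch of that logic, assuming the same `nvme id-ctrl` text output; the device list and loop framing are illustrative:

    # Condensed sketch of the per-controller capability check traced above.
    for ctrl in /dev/nvme0 /dev/nvme1; do                        # illustrative list
        oacs=$(nvme id-ctrl "$ctrl" | grep oacs | cut -d: -f2)   # e.g. " 0x12a"
        (( oacs_ns_manage = oacs & 0x8 ))    # bit 3: namespace management support
        [[ $oacs_ns_manage -eq 0 ]] && continue    # cannot manage namespaces
        unvmcap=$(nvme id-ctrl "$ctrl" | grep unvmcap | cut -d: -f2)
        [[ $unvmcap -eq 0 ]] && continue           # nothing unallocated: skip revert
    done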
00:03:38.924 01:16:34 -- common/autotest_common.sh@1541 -- # continue 00:03:38.924 01:16:34 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:03:38.924 01:16:34 -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:38.924 01:16:34 -- common/autotest_common.sh@10 -- # set +x 00:03:38.924 01:16:34 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:03:38.924 01:16:34 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:38.924 01:16:34 -- common/autotest_common.sh@10 -- # set +x 00:03:38.924 01:16:34 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:39.182 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:39.748 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:03:39.749 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:03:39.749 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:03:39.749 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:03:39.749 01:16:35 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:03:39.749 01:16:35 -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:39.749 01:16:35 -- common/autotest_common.sh@10 -- # set +x 00:03:39.749 01:16:35 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:03:39.749 01:16:35 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:03:39.749 01:16:35 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:03:39.749 01:16:35 -- common/autotest_common.sh@1561 -- # bdfs=() 00:03:39.749 01:16:35 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:03:39.749 01:16:35 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:03:39.749 01:16:35 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:03:39.749 01:16:35 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:03:39.749 01:16:35 -- common/autotest_common.sh@1496 -- # bdfs=() 00:03:39.749 01:16:35 -- common/autotest_common.sh@1496 -- # local bdfs 00:03:39.749 01:16:35 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:39.749 01:16:35 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:03:39.749 01:16:35 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:03:39.749 01:16:35 -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:03:39.749 01:16:35 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:03:39.749 01:16:35 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:03:39.749 01:16:35 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:03:39.749 01:16:35 -- common/autotest_common.sh@1564 -- # device=0x0010 00:03:39.749 01:16:35 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:03:39.749 01:16:35 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:03:39.749 01:16:35 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:03:39.749 01:16:35 -- common/autotest_common.sh@1564 -- # device=0x0010 00:03:39.749 01:16:35 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:03:39.749 01:16:35 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:03:40.010 01:16:35 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:03:40.010 01:16:35 -- common/autotest_common.sh@1564 -- # device=0x0010 00:03:40.010 01:16:35 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 
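What this loop is doing: each enumerated bdf's PCI device id (read from /sys/bus/pci/devices/<bdf>/device) is compared against 0x0a54, the id of the Intel data-center NVMe parts this opal cleanup was written for, so the revert only targets that controller model. On this QEMU rig every emulated controller reports 0x0010, the match never fires, and the trace just below ends with an empty list, (( 0 > 0 )), and an immediate return. A minimal sketch of the filter (the bdf list is illustrative):

    # Sketch of the device-id filter traced above.
    bdfs=()
    for bdf in 0000:00:10.0 0000:00:11.0; do                 # illustrative bdfs
        device=$(cat "/sys/bus/pci/devices/$bdf/device")     # e.g. "0x0010" on QEMU
        [[ $device == 0x0a54 ]] && bdfs+=("$bdf")            # keep matching parts only
    done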
00:03:40.010 01:16:35 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:03:40.010 01:16:35 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:03:40.010 01:16:35 -- common/autotest_common.sh@1564 -- # device=0x0010 00:03:40.010 01:16:35 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:03:40.010 01:16:35 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:03:40.010 01:16:35 -- common/autotest_common.sh@1570 -- # return 0 00:03:40.010 01:16:35 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:03:40.010 01:16:35 -- common/autotest_common.sh@1578 -- # return 0 00:03:40.010 01:16:35 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:03:40.010 01:16:35 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:03:40.010 01:16:35 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:40.010 01:16:35 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:40.010 01:16:35 -- spdk/autotest.sh@149 -- # timing_enter lib 00:03:40.010 01:16:35 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:40.010 01:16:35 -- common/autotest_common.sh@10 -- # set +x 00:03:40.010 01:16:35 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:03:40.010 01:16:35 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:03:40.010 01:16:35 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:40.010 01:16:35 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:40.010 01:16:35 -- common/autotest_common.sh@10 -- # set +x 00:03:40.010 ************************************ 00:03:40.010 START TEST env 00:03:40.010 ************************************ 00:03:40.010 01:16:35 env -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:03:40.010 * Looking for test storage... 00:03:40.010 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:03:40.010 01:16:35 env -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:03:40.010 01:16:35 env -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:03:40.010 01:16:35 env -- common/autotest_common.sh@1681 -- # lcov --version 00:03:40.010 01:16:35 env -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:03:40.010 01:16:35 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:40.010 01:16:35 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:40.010 01:16:35 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:40.010 01:16:35 env -- scripts/common.sh@336 -- # IFS=.-: 00:03:40.010 01:16:35 env -- scripts/common.sh@336 -- # read -ra ver1 00:03:40.010 01:16:35 env -- scripts/common.sh@337 -- # IFS=.-: 00:03:40.010 01:16:35 env -- scripts/common.sh@337 -- # read -ra ver2 00:03:40.010 01:16:35 env -- scripts/common.sh@338 -- # local 'op=<' 00:03:40.010 01:16:35 env -- scripts/common.sh@340 -- # ver1_l=2 00:03:40.010 01:16:35 env -- scripts/common.sh@341 -- # ver2_l=1 00:03:40.010 01:16:35 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:40.010 01:16:35 env -- scripts/common.sh@344 -- # case "$op" in 00:03:40.010 01:16:35 env -- scripts/common.sh@345 -- # : 1 00:03:40.010 01:16:35 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:40.010 01:16:35 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:40.010 01:16:35 env -- scripts/common.sh@365 -- # decimal 1 00:03:40.010 01:16:35 env -- scripts/common.sh@353 -- # local d=1 00:03:40.010 01:16:35 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:40.010 01:16:35 env -- scripts/common.sh@355 -- # echo 1 00:03:40.010 01:16:35 env -- scripts/common.sh@365 -- # ver1[v]=1 00:03:40.010 01:16:35 env -- scripts/common.sh@366 -- # decimal 2 00:03:40.010 01:16:35 env -- scripts/common.sh@353 -- # local d=2 00:03:40.010 01:16:35 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:40.010 01:16:35 env -- scripts/common.sh@355 -- # echo 2 00:03:40.010 01:16:35 env -- scripts/common.sh@366 -- # ver2[v]=2 00:03:40.010 01:16:35 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:40.010 01:16:35 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:40.010 01:16:35 env -- scripts/common.sh@368 -- # return 0 00:03:40.010 01:16:35 env -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:40.010 01:16:35 env -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:03:40.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:40.010 --rc genhtml_branch_coverage=1 00:03:40.010 --rc genhtml_function_coverage=1 00:03:40.010 --rc genhtml_legend=1 00:03:40.010 --rc geninfo_all_blocks=1 00:03:40.010 --rc geninfo_unexecuted_blocks=1 00:03:40.010 00:03:40.010 ' 00:03:40.010 01:16:35 env -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:03:40.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:40.010 --rc genhtml_branch_coverage=1 00:03:40.010 --rc genhtml_function_coverage=1 00:03:40.010 --rc genhtml_legend=1 00:03:40.010 --rc geninfo_all_blocks=1 00:03:40.010 --rc geninfo_unexecuted_blocks=1 00:03:40.010 00:03:40.010 ' 00:03:40.010 01:16:35 env -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:03:40.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:40.010 --rc genhtml_branch_coverage=1 00:03:40.010 --rc genhtml_function_coverage=1 00:03:40.010 --rc genhtml_legend=1 00:03:40.010 --rc geninfo_all_blocks=1 00:03:40.010 --rc geninfo_unexecuted_blocks=1 00:03:40.010 00:03:40.010 ' 00:03:40.010 01:16:35 env -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:03:40.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:40.010 --rc genhtml_branch_coverage=1 00:03:40.010 --rc genhtml_function_coverage=1 00:03:40.010 --rc genhtml_legend=1 00:03:40.010 --rc geninfo_all_blocks=1 00:03:40.010 --rc geninfo_unexecuted_blocks=1 00:03:40.010 00:03:40.010 ' 00:03:40.010 01:16:35 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:03:40.010 01:16:35 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:40.010 01:16:35 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:40.010 01:16:35 env -- common/autotest_common.sh@10 -- # set +x 00:03:40.010 ************************************ 00:03:40.010 START TEST env_memory 00:03:40.010 ************************************ 00:03:40.010 01:16:35 env.env_memory -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:03:40.010 00:03:40.010 00:03:40.010 CUnit - A unit testing framework for C - Version 2.1-3 00:03:40.010 http://cunit.sourceforge.net/ 00:03:40.010 00:03:40.010 00:03:40.010 Suite: memory 00:03:40.010 Test: alloc and free memory map ...[2024-09-28 01:16:35.907323] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:40.010 passed 00:03:40.273 Test: mem map translation ...[2024-09-28 01:16:35.946505] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:40.273 [2024-09-28 01:16:35.946678] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:40.273 [2024-09-28 01:16:35.946777] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:40.273 [2024-09-28 01:16:35.946827] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:40.273 passed 00:03:40.273 Test: mem map registration ...[2024-09-28 01:16:36.014819] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:03:40.273 [2024-09-28 01:16:36.014983] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:03:40.273 passed 00:03:40.273 Test: mem map adjacent registrations ...passed 00:03:40.273 00:03:40.273 Run Summary: Type Total Ran Passed Failed Inactive 00:03:40.273 suites 1 1 n/a 0 0 00:03:40.273 tests 4 4 4 0 0 00:03:40.273 asserts 152 152 152 0 n/a 00:03:40.273 00:03:40.273 Elapsed time = 0.233 seconds 00:03:40.273 00:03:40.273 real 0m0.268s 00:03:40.273 user 0m0.234s 00:03:40.273 sys 0m0.027s 00:03:40.273 01:16:36 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:40.273 ************************************ 00:03:40.273 END TEST env_memory 00:03:40.273 ************************************ 00:03:40.273 01:16:36 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:03:40.273 01:16:36 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:03:40.273 01:16:36 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:40.273 01:16:36 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:40.273 01:16:36 env -- common/autotest_common.sh@10 -- # set +x 00:03:40.273 ************************************ 00:03:40.273 START TEST env_vtophys 00:03:40.273 ************************************ 00:03:40.273 01:16:36 env.env_vtophys -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:03:40.533 EAL: lib.eal log level changed from notice to debug 00:03:40.533 EAL: Detected lcore 0 as core 0 on socket 0 00:03:40.533 EAL: Detected lcore 1 as core 0 on socket 0 00:03:40.533 EAL: Detected lcore 2 as core 0 on socket 0 00:03:40.533 EAL: Detected lcore 3 as core 0 on socket 0 00:03:40.533 EAL: Detected lcore 4 as core 0 on socket 0 00:03:40.533 EAL: Detected lcore 5 as core 0 on socket 0 00:03:40.533 EAL: Detected lcore 6 as core 0 on socket 0 00:03:40.533 EAL: Detected lcore 7 as core 0 on socket 0 00:03:40.533 EAL: Detected lcore 8 as core 0 on socket 0 00:03:40.533 EAL: Detected lcore 9 as core 0 on socket 0 00:03:40.533 EAL: Maximum logical cores by configuration: 128 00:03:40.533 EAL: Detected CPU lcores: 10 00:03:40.533 EAL: Detected NUMA nodes: 1 00:03:40.533 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:03:40.533 EAL: Detected shared linkage of DPDK 00:03:40.533 EAL: No 
shared files mode enabled, IPC will be disabled 00:03:40.533 EAL: Selected IOVA mode 'PA' 00:03:40.533 EAL: Probing VFIO support... 00:03:40.533 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:03:40.533 EAL: VFIO modules not loaded, skipping VFIO support... 00:03:40.533 EAL: Ask a virtual area of 0x2e000 bytes 00:03:40.533 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:40.533 EAL: Setting up physically contiguous memory... 00:03:40.533 EAL: Setting maximum number of open files to 524288 00:03:40.533 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:40.533 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:40.533 EAL: Ask a virtual area of 0x61000 bytes 00:03:40.533 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:40.533 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:40.533 EAL: Ask a virtual area of 0x400000000 bytes 00:03:40.533 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:40.533 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:40.533 EAL: Ask a virtual area of 0x61000 bytes 00:03:40.533 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:40.533 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:40.533 EAL: Ask a virtual area of 0x400000000 bytes 00:03:40.533 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:40.533 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:40.533 EAL: Ask a virtual area of 0x61000 bytes 00:03:40.533 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:40.533 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:40.533 EAL: Ask a virtual area of 0x400000000 bytes 00:03:40.533 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:40.533 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:40.533 EAL: Ask a virtual area of 0x61000 bytes 00:03:40.533 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:40.533 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:40.533 EAL: Ask a virtual area of 0x400000000 bytes 00:03:40.533 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:40.533 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:40.533 EAL: Hugepages will be freed exactly as allocated. 00:03:40.533 EAL: No shared files mode enabled, IPC is disabled 00:03:40.533 EAL: No shared files mode enabled, IPC is disabled 00:03:40.533 EAL: TSC frequency is ~2600000 KHz 00:03:40.533 EAL: Main lcore 0 is ready (tid=7f399e6dea40;cpuset=[0]) 00:03:40.533 EAL: Trying to obtain current memory policy. 00:03:40.533 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:40.533 EAL: Restoring previous memory policy: 0 00:03:40.533 EAL: request: mp_malloc_sync 00:03:40.533 EAL: No shared files mode enabled, IPC is disabled 00:03:40.533 EAL: Heap on socket 0 was expanded by 2MB 00:03:40.533 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:03:40.533 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:40.533 EAL: Mem event callback 'spdk:(nil)' registered 00:03:40.533 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:03:40.533 00:03:40.533 00:03:40.533 CUnit - A unit testing framework for C - Version 2.1-3 00:03:40.533 http://cunit.sourceforge.net/ 00:03:40.533 00:03:40.533 00:03:40.533 Suite: components_suite 00:03:40.792 Test: vtophys_malloc_test ...passed 00:03:40.792 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:03:40.792 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:40.792 EAL: Restoring previous memory policy: 4 00:03:40.792 EAL: Calling mem event callback 'spdk:(nil)' 00:03:40.792 EAL: request: mp_malloc_sync 00:03:40.792 EAL: No shared files mode enabled, IPC is disabled 00:03:40.792 EAL: Heap on socket 0 was expanded by 4MB 00:03:40.792 EAL: Calling mem event callback 'spdk:(nil)' 00:03:40.792 EAL: request: mp_malloc_sync 00:03:40.792 EAL: No shared files mode enabled, IPC is disabled 00:03:40.792 EAL: Heap on socket 0 was shrunk by 4MB 00:03:40.792 EAL: Trying to obtain current memory policy. 00:03:40.792 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:40.792 EAL: Restoring previous memory policy: 4 00:03:40.792 EAL: Calling mem event callback 'spdk:(nil)' 00:03:40.792 EAL: request: mp_malloc_sync 00:03:40.792 EAL: No shared files mode enabled, IPC is disabled 00:03:40.792 EAL: Heap on socket 0 was expanded by 6MB 00:03:40.792 EAL: Calling mem event callback 'spdk:(nil)' 00:03:40.792 EAL: request: mp_malloc_sync 00:03:40.792 EAL: No shared files mode enabled, IPC is disabled 00:03:40.792 EAL: Heap on socket 0 was shrunk by 6MB 00:03:40.792 EAL: Trying to obtain current memory policy. 00:03:40.792 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:40.792 EAL: Restoring previous memory policy: 4 00:03:40.792 EAL: Calling mem event callback 'spdk:(nil)' 00:03:40.792 EAL: request: mp_malloc_sync 00:03:40.792 EAL: No shared files mode enabled, IPC is disabled 00:03:40.792 EAL: Heap on socket 0 was expanded by 10MB 00:03:40.792 EAL: Calling mem event callback 'spdk:(nil)' 00:03:40.792 EAL: request: mp_malloc_sync 00:03:40.792 EAL: No shared files mode enabled, IPC is disabled 00:03:40.792 EAL: Heap on socket 0 was shrunk by 10MB 00:03:40.792 EAL: Trying to obtain current memory policy. 00:03:40.792 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:40.792 EAL: Restoring previous memory policy: 4 00:03:40.792 EAL: Calling mem event callback 'spdk:(nil)' 00:03:40.793 EAL: request: mp_malloc_sync 00:03:40.793 EAL: No shared files mode enabled, IPC is disabled 00:03:40.793 EAL: Heap on socket 0 was expanded by 18MB 00:03:40.793 EAL: Calling mem event callback 'spdk:(nil)' 00:03:40.793 EAL: request: mp_malloc_sync 00:03:40.793 EAL: No shared files mode enabled, IPC is disabled 00:03:40.793 EAL: Heap on socket 0 was shrunk by 18MB 00:03:40.793 EAL: Trying to obtain current memory policy. 00:03:40.793 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:40.793 EAL: Restoring previous memory policy: 4 00:03:40.793 EAL: Calling mem event callback 'spdk:(nil)' 00:03:40.793 EAL: request: mp_malloc_sync 00:03:40.793 EAL: No shared files mode enabled, IPC is disabled 00:03:40.793 EAL: Heap on socket 0 was expanded by 34MB 00:03:40.793 EAL: Calling mem event callback 'spdk:(nil)' 00:03:40.793 EAL: request: mp_malloc_sync 00:03:40.793 EAL: No shared files mode enabled, IPC is disabled 00:03:40.793 EAL: Heap on socket 0 was shrunk by 34MB 00:03:41.050 EAL: Trying to obtain current memory policy. 
00:03:41.050 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:41.050 EAL: Restoring previous memory policy: 4 00:03:41.050 EAL: Calling mem event callback 'spdk:(nil)' 00:03:41.050 EAL: request: mp_malloc_sync 00:03:41.050 EAL: No shared files mode enabled, IPC is disabled 00:03:41.050 EAL: Heap on socket 0 was expanded by 66MB 00:03:41.050 EAL: Calling mem event callback 'spdk:(nil)' 00:03:41.050 EAL: request: mp_malloc_sync 00:03:41.050 EAL: No shared files mode enabled, IPC is disabled 00:03:41.050 EAL: Heap on socket 0 was shrunk by 66MB 00:03:41.050 EAL: Trying to obtain current memory policy. 00:03:41.050 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:41.050 EAL: Restoring previous memory policy: 4 00:03:41.050 EAL: Calling mem event callback 'spdk:(nil)' 00:03:41.050 EAL: request: mp_malloc_sync 00:03:41.050 EAL: No shared files mode enabled, IPC is disabled 00:03:41.050 EAL: Heap on socket 0 was expanded by 130MB 00:03:41.050 EAL: Calling mem event callback 'spdk:(nil)' 00:03:41.308 EAL: request: mp_malloc_sync 00:03:41.308 EAL: No shared files mode enabled, IPC is disabled 00:03:41.308 EAL: Heap on socket 0 was shrunk by 130MB 00:03:41.308 EAL: Trying to obtain current memory policy. 00:03:41.308 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:41.308 EAL: Restoring previous memory policy: 4 00:03:41.308 EAL: Calling mem event callback 'spdk:(nil)' 00:03:41.308 EAL: request: mp_malloc_sync 00:03:41.308 EAL: No shared files mode enabled, IPC is disabled 00:03:41.308 EAL: Heap on socket 0 was expanded by 258MB 00:03:41.581 EAL: Calling mem event callback 'spdk:(nil)' 00:03:41.581 EAL: request: mp_malloc_sync 00:03:41.581 EAL: No shared files mode enabled, IPC is disabled 00:03:41.581 EAL: Heap on socket 0 was shrunk by 258MB 00:03:41.894 EAL: Trying to obtain current memory policy. 00:03:41.894 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:41.894 EAL: Restoring previous memory policy: 4 00:03:41.894 EAL: Calling mem event callback 'spdk:(nil)' 00:03:41.894 EAL: request: mp_malloc_sync 00:03:41.894 EAL: No shared files mode enabled, IPC is disabled 00:03:41.894 EAL: Heap on socket 0 was expanded by 514MB 00:03:42.461 EAL: Calling mem event callback 'spdk:(nil)' 00:03:42.461 EAL: request: mp_malloc_sync 00:03:42.461 EAL: No shared files mode enabled, IPC is disabled 00:03:42.461 EAL: Heap on socket 0 was shrunk by 514MB 00:03:42.720 EAL: Trying to obtain current memory policy. 
00:03:42.720 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:42.978 EAL: Restoring previous memory policy: 4 00:03:42.978 EAL: Calling mem event callback 'spdk:(nil)' 00:03:42.978 EAL: request: mp_malloc_sync 00:03:42.978 EAL: No shared files mode enabled, IPC is disabled 00:03:42.978 EAL: Heap on socket 0 was expanded by 1026MB 00:03:43.910 EAL: Calling mem event callback 'spdk:(nil)' 00:03:43.910 EAL: request: mp_malloc_sync 00:03:43.910 EAL: No shared files mode enabled, IPC is disabled 00:03:43.910 EAL: Heap on socket 0 was shrunk by 1026MB 00:03:44.472 passed 00:03:44.472 00:03:44.472 Run Summary: Type Total Ran Passed Failed Inactive 00:03:44.472 suites 1 1 n/a 0 0 00:03:44.472 tests 2 2 2 0 0 00:03:44.472 asserts 5824 5824 5824 0 n/a 00:03:44.472 00:03:44.472 Elapsed time = 3.975 seconds 00:03:44.472 EAL: Calling mem event callback 'spdk:(nil)' 00:03:44.472 EAL: request: mp_malloc_sync 00:03:44.472 EAL: No shared files mode enabled, IPC is disabled 00:03:44.472 EAL: Heap on socket 0 was shrunk by 2MB 00:03:44.472 EAL: No shared files mode enabled, IPC is disabled 00:03:44.472 EAL: No shared files mode enabled, IPC is disabled 00:03:44.472 EAL: No shared files mode enabled, IPC is disabled 00:03:44.730 00:03:44.730 real 0m4.223s 00:03:44.730 user 0m3.508s 00:03:44.730 sys 0m0.575s 00:03:44.730 01:16:40 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:44.730 01:16:40 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:03:44.730 ************************************ 00:03:44.730 END TEST env_vtophys 00:03:44.730 ************************************ 00:03:44.730 01:16:40 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:03:44.730 01:16:40 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:44.730 01:16:40 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:44.730 01:16:40 env -- common/autotest_common.sh@10 -- # set +x 00:03:44.730 ************************************ 00:03:44.730 START TEST env_pci 00:03:44.730 ************************************ 00:03:44.730 01:16:40 env.env_pci -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:03:44.730 00:03:44.730 00:03:44.730 CUnit - A unit testing framework for C - Version 2.1-3 00:03:44.730 http://cunit.sourceforge.net/ 00:03:44.730 00:03:44.730 00:03:44.730 Suite: pci 00:03:44.730 Test: pci_hook ...[2024-09-28 01:16:40.468098] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 57335 has claimed it 00:03:44.730 passed 00:03:44.730 00:03:44.730 Run Summary: Type Total Ran Passed Failed Inactive 00:03:44.730 suites 1 1 n/a 0 0 00:03:44.730 tests 1 1 1 0 0 00:03:44.730 asserts 25 25 25 0 n/a 00:03:44.730 00:03:44.730 Elapsed time = 0.007 seconds 00:03:44.730 EAL: Cannot find device (10000:00:01.0) 00:03:44.730 EAL: Failed to attach device on primary process 00:03:44.730 00:03:44.730 real 0m0.059s 00:03:44.730 user 0m0.029s 00:03:44.730 sys 0m0.028s 00:03:44.730 01:16:40 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:44.730 01:16:40 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:03:44.730 ************************************ 00:03:44.730 END TEST env_pci 00:03:44.730 ************************************ 00:03:44.730 01:16:40 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:44.730 01:16:40 env -- env/env.sh@15 -- # uname 00:03:44.730 01:16:40 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:03:44.730 01:16:40 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:03:44.730 01:16:40 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:44.730 01:16:40 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:03:44.730 01:16:40 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:44.730 01:16:40 env -- common/autotest_common.sh@10 -- # set +x 00:03:44.730 ************************************ 00:03:44.730 START TEST env_dpdk_post_init 00:03:44.730 ************************************ 00:03:44.730 01:16:40 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:44.730 EAL: Detected CPU lcores: 10 00:03:44.730 EAL: Detected NUMA nodes: 1 00:03:44.730 EAL: Detected shared linkage of DPDK 00:03:44.730 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:44.730 EAL: Selected IOVA mode 'PA' 00:03:44.986 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:44.986 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:03:44.986 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:03:44.986 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:03:44.986 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:03:44.986 Starting DPDK initialization... 00:03:44.986 Starting SPDK post initialization... 00:03:44.986 SPDK NVMe probe 00:03:44.986 Attaching to 0000:00:10.0 00:03:44.986 Attaching to 0000:00:11.0 00:03:44.986 Attaching to 0000:00:12.0 00:03:44.986 Attaching to 0000:00:13.0 00:03:44.986 Attached to 0000:00:10.0 00:03:44.986 Attached to 0000:00:11.0 00:03:44.986 Attached to 0000:00:13.0 00:03:44.986 Attached to 0000:00:12.0 00:03:44.986 Cleaning up... 
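The probe sequence above is EAL attaching the four emulated controllers (1b36:0010) through the spdk_nvme driver; this only succeeds because setup.sh, run earlier in the job, rebound them from the kernel nvme driver to uio_pci_generic. Reproducing the run by hand reduces to roughly the following sketch, with the same arguments as the traced invocation:

  sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh   # bind NVMe controllers to uio_pci_generic
  /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init \
      -c 0x1 --base-virtaddr=0x200000000000
  # -c 0x1 pins the app to core 0; the fixed base virtual address keeps DPDK
  # mappings at predictable addresses across runs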
00:03:44.986 00:03:44.986 real 0m0.228s 00:03:44.986 user 0m0.069s 00:03:44.986 sys 0m0.062s 00:03:44.986 ************************************ 00:03:44.986 END TEST env_dpdk_post_init 00:03:44.986 ************************************ 00:03:44.986 01:16:40 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:44.986 01:16:40 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:03:44.986 01:16:40 env -- env/env.sh@26 -- # uname 00:03:44.986 01:16:40 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:03:44.986 01:16:40 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:03:44.986 01:16:40 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:44.986 01:16:40 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:44.986 01:16:40 env -- common/autotest_common.sh@10 -- # set +x 00:03:44.986 ************************************ 00:03:44.986 START TEST env_mem_callbacks 00:03:44.986 ************************************ 00:03:44.986 01:16:40 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:03:44.986 EAL: Detected CPU lcores: 10 00:03:44.986 EAL: Detected NUMA nodes: 1 00:03:44.986 EAL: Detected shared linkage of DPDK 00:03:44.986 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:44.986 EAL: Selected IOVA mode 'PA' 00:03:45.243 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:45.243 00:03:45.243 00:03:45.243 CUnit - A unit testing framework for C - Version 2.1-3 00:03:45.243 http://cunit.sourceforge.net/ 00:03:45.243 00:03:45.243 00:03:45.243 Suite: memory 00:03:45.243 Test: test ... 00:03:45.243 register 0x200000200000 2097152 00:03:45.243 malloc 3145728 00:03:45.243 register 0x200000400000 4194304 00:03:45.243 buf 0x2000004fffc0 len 3145728 PASSED 00:03:45.243 malloc 64 00:03:45.243 buf 0x2000004ffec0 len 64 PASSED 00:03:45.243 malloc 4194304 00:03:45.243 register 0x200000800000 6291456 00:03:45.243 buf 0x2000009fffc0 len 4194304 PASSED 00:03:45.243 free 0x2000004fffc0 3145728 00:03:45.243 free 0x2000004ffec0 64 00:03:45.243 unregister 0x200000400000 4194304 PASSED 00:03:45.243 free 0x2000009fffc0 4194304 00:03:45.243 unregister 0x200000800000 6291456 PASSED 00:03:45.243 malloc 8388608 00:03:45.243 register 0x200000400000 10485760 00:03:45.243 buf 0x2000005fffc0 len 8388608 PASSED 00:03:45.243 free 0x2000005fffc0 8388608 00:03:45.243 unregister 0x200000400000 10485760 PASSED 00:03:45.243 passed 00:03:45.243 00:03:45.243 Run Summary: Type Total Ran Passed Failed Inactive 00:03:45.243 suites 1 1 n/a 0 0 00:03:45.243 tests 1 1 1 0 0 00:03:45.243 asserts 15 15 15 0 n/a 00:03:45.243 00:03:45.243 Elapsed time = 0.047 seconds 00:03:45.243 00:03:45.243 real 0m0.214s 00:03:45.243 user 0m0.063s 00:03:45.243 sys 0m0.048s 00:03:45.243 01:16:41 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:45.243 ************************************ 00:03:45.243 END TEST env_mem_callbacks 00:03:45.243 ************************************ 00:03:45.243 01:16:41 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:03:45.243 00:03:45.243 real 0m5.360s 00:03:45.243 user 0m4.060s 00:03:45.243 sys 0m0.931s 00:03:45.243 01:16:41 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:45.243 01:16:41 env -- common/autotest_common.sh@10 -- # set +x 00:03:45.243 ************************************ 00:03:45.243 END TEST env 00:03:45.243 
************************************ 00:03:45.243 01:16:41 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:03:45.243 01:16:41 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:45.243 01:16:41 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:45.243 01:16:41 -- common/autotest_common.sh@10 -- # set +x 00:03:45.243 ************************************ 00:03:45.243 START TEST rpc 00:03:45.243 ************************************ 00:03:45.243 01:16:41 rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:03:45.243 * Looking for test storage... 00:03:45.243 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:03:45.243 01:16:41 rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:03:45.243 01:16:41 rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:03:45.243 01:16:41 rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:03:45.500 01:16:41 rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:03:45.500 01:16:41 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:45.500 01:16:41 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:45.500 01:16:41 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:45.500 01:16:41 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:45.500 01:16:41 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:45.500 01:16:41 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:45.500 01:16:41 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:45.500 01:16:41 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:45.500 01:16:41 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:45.500 01:16:41 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:45.500 01:16:41 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:45.500 01:16:41 rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:45.500 01:16:41 rpc -- scripts/common.sh@345 -- # : 1 00:03:45.500 01:16:41 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:45.500 01:16:41 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:45.500 01:16:41 rpc -- scripts/common.sh@365 -- # decimal 1 00:03:45.500 01:16:41 rpc -- scripts/common.sh@353 -- # local d=1 00:03:45.500 01:16:41 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:45.500 01:16:41 rpc -- scripts/common.sh@355 -- # echo 1 00:03:45.500 01:16:41 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:45.500 01:16:41 rpc -- scripts/common.sh@366 -- # decimal 2 00:03:45.500 01:16:41 rpc -- scripts/common.sh@353 -- # local d=2 00:03:45.500 01:16:41 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:45.500 01:16:41 rpc -- scripts/common.sh@355 -- # echo 2 00:03:45.500 01:16:41 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:45.500 01:16:41 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:45.500 01:16:41 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:45.500 01:16:41 rpc -- scripts/common.sh@368 -- # return 0 00:03:45.500 01:16:41 rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:45.500 01:16:41 rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:03:45.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:45.500 --rc genhtml_branch_coverage=1 00:03:45.500 --rc genhtml_function_coverage=1 00:03:45.500 --rc genhtml_legend=1 00:03:45.500 --rc geninfo_all_blocks=1 00:03:45.500 --rc geninfo_unexecuted_blocks=1 00:03:45.500 00:03:45.500 ' 00:03:45.500 01:16:41 rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:03:45.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:45.500 --rc genhtml_branch_coverage=1 00:03:45.500 --rc genhtml_function_coverage=1 00:03:45.500 --rc genhtml_legend=1 00:03:45.500 --rc geninfo_all_blocks=1 00:03:45.500 --rc geninfo_unexecuted_blocks=1 00:03:45.500 00:03:45.500 ' 00:03:45.500 01:16:41 rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:03:45.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:45.500 --rc genhtml_branch_coverage=1 00:03:45.500 --rc genhtml_function_coverage=1 00:03:45.500 --rc genhtml_legend=1 00:03:45.500 --rc geninfo_all_blocks=1 00:03:45.500 --rc geninfo_unexecuted_blocks=1 00:03:45.500 00:03:45.500 ' 00:03:45.500 01:16:41 rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:03:45.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:45.500 --rc genhtml_branch_coverage=1 00:03:45.500 --rc genhtml_function_coverage=1 00:03:45.500 --rc genhtml_legend=1 00:03:45.500 --rc geninfo_all_blocks=1 00:03:45.500 --rc geninfo_unexecuted_blocks=1 00:03:45.500 00:03:45.500 ' 00:03:45.500 01:16:41 rpc -- rpc/rpc.sh@65 -- # spdk_pid=57462 00:03:45.500 01:16:41 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:45.500 01:16:41 rpc -- rpc/rpc.sh@67 -- # waitforlisten 57462 00:03:45.500 01:16:41 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:03:45.500 01:16:41 rpc -- common/autotest_common.sh@831 -- # '[' -z 57462 ']' 00:03:45.500 01:16:41 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:45.500 01:16:41 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:03:45.500 01:16:41 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:45.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
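waitforlisten above blocks until the freshly started spdk_tgt (pid 57462, launched with -e bdev to enable the bdev tracepoint group) answers on /var/tmp/spdk.sock. Stripped of the harness's retry bookkeeping, the startup amounts to roughly this sketch; rpc_get_methods here stands in for whatever readiness probe waitforlisten actually uses:

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev &
  spdk_pid=$!
  # Poll the RPC socket until the target is ready to serve requests
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
        rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done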
00:03:45.500 01:16:41 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:03:45.500 01:16:41 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:45.500 [2024-09-28 01:16:41.299634] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:03:45.500 [2024-09-28 01:16:41.299761] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57462 ] 00:03:45.758 [2024-09-28 01:16:41.447001] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:45.758 [2024-09-28 01:16:41.625877] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:03:45.758 [2024-09-28 01:16:41.625934] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 57462' to capture a snapshot of events at runtime. 00:03:45.758 [2024-09-28 01:16:41.625944] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:03:45.758 [2024-09-28 01:16:41.625953] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:03:45.758 [2024-09-28 01:16:41.625961] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid57462 for offline analysis/debug. 00:03:45.758 [2024-09-28 01:16:41.625994] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:03:46.324 01:16:42 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:03:46.324 01:16:42 rpc -- common/autotest_common.sh@864 -- # return 0 00:03:46.324 01:16:42 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:03:46.324 01:16:42 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:03:46.324 01:16:42 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:03:46.324 01:16:42 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:03:46.324 01:16:42 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:46.324 01:16:42 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:46.324 01:16:42 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:46.324 ************************************ 00:03:46.324 START TEST rpc_integrity 00:03:46.324 ************************************ 00:03:46.324 01:16:42 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:03:46.324 01:16:42 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:46.324 01:16:42 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:46.324 01:16:42 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:46.324 01:16:42 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:46.324 01:16:42 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:46.324 01:16:42 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:46.582 01:16:42 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:46.582 01:16:42 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:46.582 01:16:42 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:46.582 01:16:42 
rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:46.582 01:16:42 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:46.582 01:16:42 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:03:46.582 01:16:42 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:46.582 01:16:42 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:46.582 01:16:42 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:46.582 01:16:42 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:46.582 01:16:42 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:46.582 { 00:03:46.583 "name": "Malloc0", 00:03:46.583 "aliases": [ 00:03:46.583 "c8dc16c8-7f98-45c3-a03e-e7b0d251f6d5" 00:03:46.583 ], 00:03:46.583 "product_name": "Malloc disk", 00:03:46.583 "block_size": 512, 00:03:46.583 "num_blocks": 16384, 00:03:46.583 "uuid": "c8dc16c8-7f98-45c3-a03e-e7b0d251f6d5", 00:03:46.583 "assigned_rate_limits": { 00:03:46.583 "rw_ios_per_sec": 0, 00:03:46.583 "rw_mbytes_per_sec": 0, 00:03:46.583 "r_mbytes_per_sec": 0, 00:03:46.583 "w_mbytes_per_sec": 0 00:03:46.583 }, 00:03:46.583 "claimed": false, 00:03:46.583 "zoned": false, 00:03:46.583 "supported_io_types": { 00:03:46.583 "read": true, 00:03:46.583 "write": true, 00:03:46.583 "unmap": true, 00:03:46.583 "flush": true, 00:03:46.583 "reset": true, 00:03:46.583 "nvme_admin": false, 00:03:46.583 "nvme_io": false, 00:03:46.583 "nvme_io_md": false, 00:03:46.583 "write_zeroes": true, 00:03:46.583 "zcopy": true, 00:03:46.583 "get_zone_info": false, 00:03:46.583 "zone_management": false, 00:03:46.583 "zone_append": false, 00:03:46.583 "compare": false, 00:03:46.583 "compare_and_write": false, 00:03:46.583 "abort": true, 00:03:46.583 "seek_hole": false, 00:03:46.583 "seek_data": false, 00:03:46.583 "copy": true, 00:03:46.583 "nvme_iov_md": false 00:03:46.583 }, 00:03:46.583 "memory_domains": [ 00:03:46.583 { 00:03:46.583 "dma_device_id": "system", 00:03:46.583 "dma_device_type": 1 00:03:46.583 }, 00:03:46.583 { 00:03:46.583 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:46.583 "dma_device_type": 2 00:03:46.583 } 00:03:46.583 ], 00:03:46.583 "driver_specific": {} 00:03:46.583 } 00:03:46.583 ]' 00:03:46.583 01:16:42 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:46.583 01:16:42 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:46.583 01:16:42 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:03:46.583 01:16:42 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:46.583 01:16:42 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:46.583 [2024-09-28 01:16:42.328268] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:03:46.583 [2024-09-28 01:16:42.328329] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:46.583 [2024-09-28 01:16:42.328355] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:03:46.583 [2024-09-28 01:16:42.328368] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:46.583 [2024-09-28 01:16:42.330543] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:46.583 [2024-09-28 01:16:42.330586] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:46.583 Passthru0 00:03:46.583 01:16:42 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:46.583 
01:16:42 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:46.583 01:16:42 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:46.583 01:16:42 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:46.583 01:16:42 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:46.583 01:16:42 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:46.583 { 00:03:46.583 "name": "Malloc0", 00:03:46.583 "aliases": [ 00:03:46.583 "c8dc16c8-7f98-45c3-a03e-e7b0d251f6d5" 00:03:46.583 ], 00:03:46.583 "product_name": "Malloc disk", 00:03:46.583 "block_size": 512, 00:03:46.583 "num_blocks": 16384, 00:03:46.583 "uuid": "c8dc16c8-7f98-45c3-a03e-e7b0d251f6d5", 00:03:46.583 "assigned_rate_limits": { 00:03:46.583 "rw_ios_per_sec": 0, 00:03:46.583 "rw_mbytes_per_sec": 0, 00:03:46.583 "r_mbytes_per_sec": 0, 00:03:46.583 "w_mbytes_per_sec": 0 00:03:46.583 }, 00:03:46.583 "claimed": true, 00:03:46.583 "claim_type": "exclusive_write", 00:03:46.583 "zoned": false, 00:03:46.583 "supported_io_types": { 00:03:46.583 "read": true, 00:03:46.583 "write": true, 00:03:46.583 "unmap": true, 00:03:46.583 "flush": true, 00:03:46.583 "reset": true, 00:03:46.583 "nvme_admin": false, 00:03:46.583 "nvme_io": false, 00:03:46.583 "nvme_io_md": false, 00:03:46.583 "write_zeroes": true, 00:03:46.583 "zcopy": true, 00:03:46.583 "get_zone_info": false, 00:03:46.583 "zone_management": false, 00:03:46.583 "zone_append": false, 00:03:46.583 "compare": false, 00:03:46.583 "compare_and_write": false, 00:03:46.583 "abort": true, 00:03:46.583 "seek_hole": false, 00:03:46.583 "seek_data": false, 00:03:46.583 "copy": true, 00:03:46.583 "nvme_iov_md": false 00:03:46.583 }, 00:03:46.583 "memory_domains": [ 00:03:46.583 { 00:03:46.583 "dma_device_id": "system", 00:03:46.583 "dma_device_type": 1 00:03:46.583 }, 00:03:46.583 { 00:03:46.583 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:46.583 "dma_device_type": 2 00:03:46.583 } 00:03:46.583 ], 00:03:46.583 "driver_specific": {} 00:03:46.583 }, 00:03:46.583 { 00:03:46.583 "name": "Passthru0", 00:03:46.583 "aliases": [ 00:03:46.583 "a99376ab-ccac-5602-87c3-ee1f8f00ef46" 00:03:46.583 ], 00:03:46.583 "product_name": "passthru", 00:03:46.583 "block_size": 512, 00:03:46.583 "num_blocks": 16384, 00:03:46.583 "uuid": "a99376ab-ccac-5602-87c3-ee1f8f00ef46", 00:03:46.583 "assigned_rate_limits": { 00:03:46.583 "rw_ios_per_sec": 0, 00:03:46.583 "rw_mbytes_per_sec": 0, 00:03:46.583 "r_mbytes_per_sec": 0, 00:03:46.583 "w_mbytes_per_sec": 0 00:03:46.583 }, 00:03:46.583 "claimed": false, 00:03:46.583 "zoned": false, 00:03:46.583 "supported_io_types": { 00:03:46.583 "read": true, 00:03:46.583 "write": true, 00:03:46.583 "unmap": true, 00:03:46.583 "flush": true, 00:03:46.583 "reset": true, 00:03:46.583 "nvme_admin": false, 00:03:46.583 "nvme_io": false, 00:03:46.583 "nvme_io_md": false, 00:03:46.583 "write_zeroes": true, 00:03:46.583 "zcopy": true, 00:03:46.583 "get_zone_info": false, 00:03:46.583 "zone_management": false, 00:03:46.583 "zone_append": false, 00:03:46.583 "compare": false, 00:03:46.583 "compare_and_write": false, 00:03:46.583 "abort": true, 00:03:46.583 "seek_hole": false, 00:03:46.583 "seek_data": false, 00:03:46.583 "copy": true, 00:03:46.583 "nvme_iov_md": false 00:03:46.583 }, 00:03:46.583 "memory_domains": [ 00:03:46.583 { 00:03:46.583 "dma_device_id": "system", 00:03:46.583 "dma_device_type": 1 00:03:46.583 }, 00:03:46.583 { 00:03:46.583 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:46.583 "dma_device_type": 2 
00:03:46.583 } 00:03:46.583 ], 00:03:46.583 "driver_specific": { 00:03:46.583 "passthru": { 00:03:46.583 "name": "Passthru0", 00:03:46.583 "base_bdev_name": "Malloc0" 00:03:46.583 } 00:03:46.583 } 00:03:46.583 } 00:03:46.583 ]' 00:03:46.583 01:16:42 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:46.583 01:16:42 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:46.583 01:16:42 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:46.583 01:16:42 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:46.583 01:16:42 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:46.583 01:16:42 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:46.583 01:16:42 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:03:46.583 01:16:42 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:46.583 01:16:42 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:46.583 01:16:42 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:46.583 01:16:42 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:46.583 01:16:42 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:46.583 01:16:42 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:46.583 01:16:42 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:46.583 01:16:42 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:46.583 01:16:42 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:46.583 01:16:42 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:46.583 00:03:46.583 real 0m0.245s 00:03:46.583 user 0m0.115s 00:03:46.583 sys 0m0.042s 00:03:46.583 01:16:42 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:46.583 01:16:42 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:46.583 ************************************ 00:03:46.583 END TEST rpc_integrity 00:03:46.583 ************************************ 00:03:46.583 01:16:42 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:03:46.583 01:16:42 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:46.583 01:16:42 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:46.583 01:16:42 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:46.583 ************************************ 00:03:46.583 START TEST rpc_plugins 00:03:46.583 ************************************ 00:03:46.583 01:16:42 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:03:46.583 01:16:42 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:03:46.583 01:16:42 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:46.583 01:16:42 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:46.583 01:16:42 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:46.583 01:16:42 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:03:46.583 01:16:42 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:03:46.844 01:16:42 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:46.844 01:16:42 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:46.844 01:16:42 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:46.844 01:16:42 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:03:46.844 { 00:03:46.844 "name": "Malloc1", 00:03:46.844 "aliases": 
[ 00:03:46.844 "9d41f866-39e7-4ada-aeba-67a1e596175b" 00:03:46.844 ], 00:03:46.844 "product_name": "Malloc disk", 00:03:46.844 "block_size": 4096, 00:03:46.844 "num_blocks": 256, 00:03:46.844 "uuid": "9d41f866-39e7-4ada-aeba-67a1e596175b", 00:03:46.844 "assigned_rate_limits": { 00:03:46.844 "rw_ios_per_sec": 0, 00:03:46.844 "rw_mbytes_per_sec": 0, 00:03:46.844 "r_mbytes_per_sec": 0, 00:03:46.844 "w_mbytes_per_sec": 0 00:03:46.844 }, 00:03:46.844 "claimed": false, 00:03:46.844 "zoned": false, 00:03:46.844 "supported_io_types": { 00:03:46.844 "read": true, 00:03:46.844 "write": true, 00:03:46.844 "unmap": true, 00:03:46.844 "flush": true, 00:03:46.844 "reset": true, 00:03:46.844 "nvme_admin": false, 00:03:46.844 "nvme_io": false, 00:03:46.844 "nvme_io_md": false, 00:03:46.844 "write_zeroes": true, 00:03:46.844 "zcopy": true, 00:03:46.844 "get_zone_info": false, 00:03:46.844 "zone_management": false, 00:03:46.844 "zone_append": false, 00:03:46.844 "compare": false, 00:03:46.844 "compare_and_write": false, 00:03:46.844 "abort": true, 00:03:46.844 "seek_hole": false, 00:03:46.844 "seek_data": false, 00:03:46.844 "copy": true, 00:03:46.844 "nvme_iov_md": false 00:03:46.844 }, 00:03:46.844 "memory_domains": [ 00:03:46.844 { 00:03:46.844 "dma_device_id": "system", 00:03:46.844 "dma_device_type": 1 00:03:46.844 }, 00:03:46.844 { 00:03:46.844 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:46.844 "dma_device_type": 2 00:03:46.844 } 00:03:46.844 ], 00:03:46.844 "driver_specific": {} 00:03:46.844 } 00:03:46.844 ]' 00:03:46.844 01:16:42 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:03:46.844 01:16:42 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:03:46.844 01:16:42 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:03:46.844 01:16:42 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:46.844 01:16:42 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:46.844 01:16:42 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:46.844 01:16:42 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:03:46.844 01:16:42 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:46.844 01:16:42 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:46.844 01:16:42 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:46.844 01:16:42 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:03:46.844 01:16:42 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:03:46.844 01:16:42 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:03:46.844 00:03:46.844 real 0m0.122s 00:03:46.844 user 0m0.056s 00:03:46.844 sys 0m0.019s 00:03:46.844 01:16:42 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:46.844 01:16:42 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:46.844 ************************************ 00:03:46.844 END TEST rpc_plugins 00:03:46.844 ************************************ 00:03:46.844 01:16:42 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:03:46.844 01:16:42 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:46.844 01:16:42 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:46.844 01:16:42 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:46.844 ************************************ 00:03:46.844 START TEST rpc_trace_cmd_test 00:03:46.844 ************************************ 00:03:46.844 01:16:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 
-- # rpc_trace_cmd_test 00:03:46.844 01:16:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:03:46.844 01:16:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:03:46.844 01:16:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:46.844 01:16:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:46.844 01:16:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:46.844 01:16:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:03:46.844 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid57462", 00:03:46.844 "tpoint_group_mask": "0x8", 00:03:46.844 "iscsi_conn": { 00:03:46.844 "mask": "0x2", 00:03:46.844 "tpoint_mask": "0x0" 00:03:46.844 }, 00:03:46.844 "scsi": { 00:03:46.844 "mask": "0x4", 00:03:46.844 "tpoint_mask": "0x0" 00:03:46.844 }, 00:03:46.844 "bdev": { 00:03:46.844 "mask": "0x8", 00:03:46.844 "tpoint_mask": "0xffffffffffffffff" 00:03:46.844 }, 00:03:46.844 "nvmf_rdma": { 00:03:46.844 "mask": "0x10", 00:03:46.844 "tpoint_mask": "0x0" 00:03:46.844 }, 00:03:46.844 "nvmf_tcp": { 00:03:46.844 "mask": "0x20", 00:03:46.844 "tpoint_mask": "0x0" 00:03:46.844 }, 00:03:46.844 "ftl": { 00:03:46.844 "mask": "0x40", 00:03:46.844 "tpoint_mask": "0x0" 00:03:46.844 }, 00:03:46.844 "blobfs": { 00:03:46.844 "mask": "0x80", 00:03:46.844 "tpoint_mask": "0x0" 00:03:46.844 }, 00:03:46.844 "dsa": { 00:03:46.844 "mask": "0x200", 00:03:46.844 "tpoint_mask": "0x0" 00:03:46.844 }, 00:03:46.844 "thread": { 00:03:46.844 "mask": "0x400", 00:03:46.844 "tpoint_mask": "0x0" 00:03:46.844 }, 00:03:46.844 "nvme_pcie": { 00:03:46.844 "mask": "0x800", 00:03:46.844 "tpoint_mask": "0x0" 00:03:46.844 }, 00:03:46.844 "iaa": { 00:03:46.844 "mask": "0x1000", 00:03:46.844 "tpoint_mask": "0x0" 00:03:46.844 }, 00:03:46.844 "nvme_tcp": { 00:03:46.844 "mask": "0x2000", 00:03:46.844 "tpoint_mask": "0x0" 00:03:46.844 }, 00:03:46.844 "bdev_nvme": { 00:03:46.844 "mask": "0x4000", 00:03:46.844 "tpoint_mask": "0x0" 00:03:46.844 }, 00:03:46.844 "sock": { 00:03:46.844 "mask": "0x8000", 00:03:46.844 "tpoint_mask": "0x0" 00:03:46.844 }, 00:03:46.844 "blob": { 00:03:46.844 "mask": "0x10000", 00:03:46.844 "tpoint_mask": "0x0" 00:03:46.844 }, 00:03:46.844 "bdev_raid": { 00:03:46.844 "mask": "0x20000", 00:03:46.844 "tpoint_mask": "0x0" 00:03:46.844 } 00:03:46.844 }' 00:03:46.844 01:16:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:03:46.844 01:16:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 18 -gt 2 ']' 00:03:46.844 01:16:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:03:46.844 01:16:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:03:46.845 01:16:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:03:46.845 01:16:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:03:46.845 01:16:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:03:47.106 01:16:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:03:47.106 01:16:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:03:47.106 01:16:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:03:47.106 00:03:47.106 real 0m0.165s 00:03:47.106 user 0m0.129s 00:03:47.106 sys 0m0.025s 00:03:47.106 01:16:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:47.106 01:16:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 
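The assertions above pick apart the trace_get_info output: the shm path points at /dev/shm/spdk_tgt_trace.pid57462, the group mask 0x8 corresponds to the bdev group enabled by -e bdev, and that group's tpoint_mask is fully set (0xffffffffffffffff) while every other group stays 0x0. Checking the same things interactively looks roughly like this sketch:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc trace_get_info | jq -r '.tpoint_group_mask'   # expect "0x8" (the bdev group)
  $rpc trace_get_info | jq -r '.bdev.tpoint_mask'    # expect "0xffffffffffffffff"
  test -e /dev/shm/spdk_tgt_trace.pid57462           # shm file backing the trace
  # Per the app_setup_trace notice earlier in the log, a snapshot can then be
  # captured with: spdk_trace -s spdk_tgt -p 57462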
00:03:47.106 ************************************ 00:03:47.106 END TEST rpc_trace_cmd_test 00:03:47.106 ************************************ 00:03:47.106 01:16:42 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:03:47.106 01:16:42 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:03:47.106 01:16:42 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:03:47.106 01:16:42 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:47.106 01:16:42 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:47.106 01:16:42 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:47.106 ************************************ 00:03:47.106 START TEST rpc_daemon_integrity 00:03:47.106 ************************************ 00:03:47.106 01:16:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:03:47.106 01:16:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:47.106 01:16:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:47.106 01:16:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:47.106 01:16:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:47.106 01:16:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:47.106 01:16:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:47.106 01:16:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:47.106 01:16:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:47.106 01:16:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:47.106 01:16:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:47.106 01:16:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:47.106 01:16:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:03:47.106 01:16:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:47.106 01:16:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:47.106 01:16:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:47.106 01:16:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:47.106 01:16:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:47.106 { 00:03:47.106 "name": "Malloc2", 00:03:47.106 "aliases": [ 00:03:47.106 "61dca497-ae54-4501-8189-fc0a092b7f1b" 00:03:47.106 ], 00:03:47.106 "product_name": "Malloc disk", 00:03:47.106 "block_size": 512, 00:03:47.106 "num_blocks": 16384, 00:03:47.106 "uuid": "61dca497-ae54-4501-8189-fc0a092b7f1b", 00:03:47.106 "assigned_rate_limits": { 00:03:47.106 "rw_ios_per_sec": 0, 00:03:47.106 "rw_mbytes_per_sec": 0, 00:03:47.106 "r_mbytes_per_sec": 0, 00:03:47.106 "w_mbytes_per_sec": 0 00:03:47.106 }, 00:03:47.106 "claimed": false, 00:03:47.106 "zoned": false, 00:03:47.106 "supported_io_types": { 00:03:47.106 "read": true, 00:03:47.106 "write": true, 00:03:47.106 "unmap": true, 00:03:47.106 "flush": true, 00:03:47.106 "reset": true, 00:03:47.106 "nvme_admin": false, 00:03:47.106 "nvme_io": false, 00:03:47.106 "nvme_io_md": false, 00:03:47.106 "write_zeroes": true, 00:03:47.106 "zcopy": true, 00:03:47.106 "get_zone_info": false, 00:03:47.106 "zone_management": false, 00:03:47.106 "zone_append": false, 00:03:47.106 "compare": false, 00:03:47.106 "compare_and_write": false, 00:03:47.106 "abort": true, 00:03:47.106 "seek_hole": false, 00:03:47.106 
"seek_data": false, 00:03:47.106 "copy": true, 00:03:47.106 "nvme_iov_md": false 00:03:47.106 }, 00:03:47.106 "memory_domains": [ 00:03:47.106 { 00:03:47.106 "dma_device_id": "system", 00:03:47.106 "dma_device_type": 1 00:03:47.106 }, 00:03:47.106 { 00:03:47.106 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:47.106 "dma_device_type": 2 00:03:47.106 } 00:03:47.106 ], 00:03:47.106 "driver_specific": {} 00:03:47.106 } 00:03:47.106 ]' 00:03:47.106 01:16:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:47.106 01:16:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:47.106 01:16:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:03:47.106 01:16:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:47.106 01:16:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:47.106 [2024-09-28 01:16:42.975200] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:03:47.106 [2024-09-28 01:16:42.975253] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:47.106 [2024-09-28 01:16:42.975273] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:03:47.106 [2024-09-28 01:16:42.975283] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:47.106 [2024-09-28 01:16:42.977427] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:47.106 [2024-09-28 01:16:42.977464] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:47.106 Passthru0 00:03:47.106 01:16:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:47.106 01:16:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:47.106 01:16:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:47.106 01:16:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:47.107 01:16:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:47.107 01:16:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:47.107 { 00:03:47.107 "name": "Malloc2", 00:03:47.107 "aliases": [ 00:03:47.107 "61dca497-ae54-4501-8189-fc0a092b7f1b" 00:03:47.107 ], 00:03:47.107 "product_name": "Malloc disk", 00:03:47.107 "block_size": 512, 00:03:47.107 "num_blocks": 16384, 00:03:47.107 "uuid": "61dca497-ae54-4501-8189-fc0a092b7f1b", 00:03:47.107 "assigned_rate_limits": { 00:03:47.107 "rw_ios_per_sec": 0, 00:03:47.107 "rw_mbytes_per_sec": 0, 00:03:47.107 "r_mbytes_per_sec": 0, 00:03:47.107 "w_mbytes_per_sec": 0 00:03:47.107 }, 00:03:47.107 "claimed": true, 00:03:47.107 "claim_type": "exclusive_write", 00:03:47.107 "zoned": false, 00:03:47.107 "supported_io_types": { 00:03:47.107 "read": true, 00:03:47.107 "write": true, 00:03:47.107 "unmap": true, 00:03:47.107 "flush": true, 00:03:47.107 "reset": true, 00:03:47.107 "nvme_admin": false, 00:03:47.107 "nvme_io": false, 00:03:47.107 "nvme_io_md": false, 00:03:47.107 "write_zeroes": true, 00:03:47.107 "zcopy": true, 00:03:47.107 "get_zone_info": false, 00:03:47.107 "zone_management": false, 00:03:47.107 "zone_append": false, 00:03:47.107 "compare": false, 00:03:47.107 "compare_and_write": false, 00:03:47.107 "abort": true, 00:03:47.107 "seek_hole": false, 00:03:47.107 "seek_data": false, 00:03:47.107 "copy": true, 00:03:47.107 "nvme_iov_md": false 00:03:47.107 }, 00:03:47.107 
"memory_domains": [ 00:03:47.107 { 00:03:47.107 "dma_device_id": "system", 00:03:47.107 "dma_device_type": 1 00:03:47.107 }, 00:03:47.107 { 00:03:47.107 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:47.107 "dma_device_type": 2 00:03:47.107 } 00:03:47.107 ], 00:03:47.107 "driver_specific": {} 00:03:47.107 }, 00:03:47.107 { 00:03:47.107 "name": "Passthru0", 00:03:47.107 "aliases": [ 00:03:47.107 "eef8063f-b3f7-5880-83f6-4325544d37bb" 00:03:47.107 ], 00:03:47.107 "product_name": "passthru", 00:03:47.107 "block_size": 512, 00:03:47.107 "num_blocks": 16384, 00:03:47.107 "uuid": "eef8063f-b3f7-5880-83f6-4325544d37bb", 00:03:47.107 "assigned_rate_limits": { 00:03:47.107 "rw_ios_per_sec": 0, 00:03:47.107 "rw_mbytes_per_sec": 0, 00:03:47.107 "r_mbytes_per_sec": 0, 00:03:47.107 "w_mbytes_per_sec": 0 00:03:47.107 }, 00:03:47.107 "claimed": false, 00:03:47.107 "zoned": false, 00:03:47.107 "supported_io_types": { 00:03:47.107 "read": true, 00:03:47.107 "write": true, 00:03:47.107 "unmap": true, 00:03:47.107 "flush": true, 00:03:47.107 "reset": true, 00:03:47.107 "nvme_admin": false, 00:03:47.107 "nvme_io": false, 00:03:47.107 "nvme_io_md": false, 00:03:47.107 "write_zeroes": true, 00:03:47.107 "zcopy": true, 00:03:47.107 "get_zone_info": false, 00:03:47.107 "zone_management": false, 00:03:47.107 "zone_append": false, 00:03:47.107 "compare": false, 00:03:47.107 "compare_and_write": false, 00:03:47.107 "abort": true, 00:03:47.107 "seek_hole": false, 00:03:47.107 "seek_data": false, 00:03:47.107 "copy": true, 00:03:47.107 "nvme_iov_md": false 00:03:47.107 }, 00:03:47.107 "memory_domains": [ 00:03:47.107 { 00:03:47.107 "dma_device_id": "system", 00:03:47.107 "dma_device_type": 1 00:03:47.107 }, 00:03:47.107 { 00:03:47.107 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:47.107 "dma_device_type": 2 00:03:47.107 } 00:03:47.107 ], 00:03:47.107 "driver_specific": { 00:03:47.107 "passthru": { 00:03:47.107 "name": "Passthru0", 00:03:47.107 "base_bdev_name": "Malloc2" 00:03:47.107 } 00:03:47.107 } 00:03:47.107 } 00:03:47.107 ]' 00:03:47.107 01:16:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:47.368 01:16:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:47.368 01:16:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:47.368 01:16:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:47.368 01:16:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:47.368 01:16:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:47.368 01:16:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:03:47.368 01:16:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:47.368 01:16:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:47.368 01:16:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:47.368 01:16:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:47.368 01:16:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:47.368 01:16:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:47.368 01:16:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:47.368 01:16:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:47.368 01:16:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:47.368 
01:16:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:47.368 00:03:47.368 real 0m0.247s 00:03:47.368 user 0m0.134s 00:03:47.368 sys 0m0.032s 00:03:47.368 01:16:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:47.368 01:16:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:47.368 ************************************ 00:03:47.368 END TEST rpc_daemon_integrity 00:03:47.368 ************************************ 00:03:47.368 01:16:43 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:03:47.368 01:16:43 rpc -- rpc/rpc.sh@84 -- # killprocess 57462 00:03:47.368 01:16:43 rpc -- common/autotest_common.sh@950 -- # '[' -z 57462 ']' 00:03:47.368 01:16:43 rpc -- common/autotest_common.sh@954 -- # kill -0 57462 00:03:47.368 01:16:43 rpc -- common/autotest_common.sh@955 -- # uname 00:03:47.368 01:16:43 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:03:47.368 01:16:43 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57462 00:03:47.368 01:16:43 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:03:47.368 killing process with pid 57462 00:03:47.368 01:16:43 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:03:47.368 01:16:43 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57462' 00:03:47.368 01:16:43 rpc -- common/autotest_common.sh@969 -- # kill 57462 00:03:47.368 01:16:43 rpc -- common/autotest_common.sh@974 -- # wait 57462 00:03:49.289 00:03:49.289 real 0m3.804s 00:03:49.289 user 0m4.185s 00:03:49.289 sys 0m0.627s 00:03:49.289 01:16:44 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:49.289 01:16:44 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:49.289 ************************************ 00:03:49.289 END TEST rpc 00:03:49.289 ************************************ 00:03:49.289 01:16:44 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:03:49.289 01:16:44 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:49.289 01:16:44 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:49.289 01:16:44 -- common/autotest_common.sh@10 -- # set +x 00:03:49.289 ************************************ 00:03:49.289 START TEST skip_rpc 00:03:49.289 ************************************ 00:03:49.289 01:16:44 skip_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:03:49.289 * Looking for test storage... 
00:03:49.289 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:03:49.289 01:16:45 skip_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:03:49.289 01:16:45 skip_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:03:49.289 01:16:45 skip_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:03:49.289 01:16:45 skip_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:03:49.289 01:16:45 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:49.289 01:16:45 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:49.289 01:16:45 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:49.289 01:16:45 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:49.289 01:16:45 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:49.289 01:16:45 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:49.289 01:16:45 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:49.289 01:16:45 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:49.289 01:16:45 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:49.289 01:16:45 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:49.289 01:16:45 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:49.289 01:16:45 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:49.289 01:16:45 skip_rpc -- scripts/common.sh@345 -- # : 1 00:03:49.289 01:16:45 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:49.289 01:16:45 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:49.289 01:16:45 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:03:49.289 01:16:45 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:03:49.289 01:16:45 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:49.289 01:16:45 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:03:49.289 01:16:45 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:49.289 01:16:45 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:03:49.289 01:16:45 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:03:49.289 01:16:45 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:49.289 01:16:45 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:03:49.289 01:16:45 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:49.289 01:16:45 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:49.289 01:16:45 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:49.289 01:16:45 skip_rpc -- scripts/common.sh@368 -- # return 0 00:03:49.289 01:16:45 skip_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:49.289 01:16:45 skip_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:03:49.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:49.289 --rc genhtml_branch_coverage=1 00:03:49.289 --rc genhtml_function_coverage=1 00:03:49.289 --rc genhtml_legend=1 00:03:49.289 --rc geninfo_all_blocks=1 00:03:49.289 --rc geninfo_unexecuted_blocks=1 00:03:49.289 00:03:49.289 ' 00:03:49.289 01:16:45 skip_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:03:49.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:49.290 --rc genhtml_branch_coverage=1 00:03:49.290 --rc genhtml_function_coverage=1 00:03:49.290 --rc genhtml_legend=1 00:03:49.290 --rc geninfo_all_blocks=1 00:03:49.290 --rc geninfo_unexecuted_blocks=1 00:03:49.290 00:03:49.290 ' 00:03:49.290 01:16:45 skip_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
00:03:49.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:49.290 --rc genhtml_branch_coverage=1 00:03:49.290 --rc genhtml_function_coverage=1 00:03:49.290 --rc genhtml_legend=1 00:03:49.290 --rc geninfo_all_blocks=1 00:03:49.290 --rc geninfo_unexecuted_blocks=1 00:03:49.290 00:03:49.290 ' 00:03:49.290 01:16:45 skip_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:03:49.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:49.290 --rc genhtml_branch_coverage=1 00:03:49.290 --rc genhtml_function_coverage=1 00:03:49.290 --rc genhtml_legend=1 00:03:49.290 --rc geninfo_all_blocks=1 00:03:49.290 --rc geninfo_unexecuted_blocks=1 00:03:49.290 00:03:49.290 ' 00:03:49.290 01:16:45 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:03:49.290 01:16:45 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:03:49.290 01:16:45 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:03:49.290 01:16:45 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:49.290 01:16:45 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:49.290 01:16:45 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:49.290 ************************************ 00:03:49.290 START TEST skip_rpc 00:03:49.290 ************************************ 00:03:49.290 01:16:45 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:03:49.290 01:16:45 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57674 00:03:49.290 01:16:45 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:49.290 01:16:45 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:03:49.290 01:16:45 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:03:49.290 [2024-09-28 01:16:45.181377] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:03:49.290 [2024-09-28 01:16:45.181513] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57674 ] 00:03:49.556 [2024-09-28 01:16:45.330466] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:49.829 [2024-09-28 01:16:45.485392] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:03:55.093 01:16:50 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:03:55.093 01:16:50 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:03:55.093 01:16:50 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:03:55.093 01:16:50 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:03:55.093 01:16:50 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:55.093 01:16:50 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:03:55.093 01:16:50 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:55.093 01:16:50 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:03:55.093 01:16:50 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:55.093 01:16:50 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:55.093 01:16:50 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:03:55.093 01:16:50 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:03:55.093 01:16:50 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:03:55.093 01:16:50 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:03:55.093 01:16:50 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:03:55.093 01:16:50 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:03:55.093 01:16:50 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57674 00:03:55.093 01:16:50 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 57674 ']' 00:03:55.093 01:16:50 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 57674 00:03:55.093 01:16:50 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:03:55.093 01:16:50 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:03:55.093 01:16:50 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57674 00:03:55.093 01:16:50 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:03:55.093 01:16:50 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:03:55.093 killing process with pid 57674 00:03:55.093 01:16:50 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57674' 00:03:55.093 01:16:50 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 57674 00:03:55.093 01:16:50 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 57674 00:03:55.659 00:03:55.659 real 0m6.269s 00:03:55.659 user 0m5.891s 00:03:55.659 sys 0m0.274s 00:03:55.659 01:16:51 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:55.659 01:16:51 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:55.659 ************************************ 00:03:55.659 END TEST skip_rpc 00:03:55.659 
************************************ 00:03:55.659 01:16:51 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:03:55.659 01:16:51 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:55.659 01:16:51 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:55.659 01:16:51 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:55.659 ************************************ 00:03:55.659 START TEST skip_rpc_with_json 00:03:55.659 ************************************ 00:03:55.659 01:16:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:03:55.659 01:16:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:03:55.659 01:16:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57767 00:03:55.659 01:16:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:55.659 01:16:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57767 00:03:55.659 01:16:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 57767 ']' 00:03:55.659 01:16:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:55.659 01:16:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:03:55.659 01:16:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:03:55.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:55.659 01:16:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:55.659 01:16:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:03:55.659 01:16:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:55.659 [2024-09-28 01:16:51.492076] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:03:55.659 [2024-09-28 01:16:51.492215] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57767 ] 00:03:55.918 [2024-09-28 01:16:51.639466] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:55.918 [2024-09-28 01:16:51.787349] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:03:56.485 01:16:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:03:56.485 01:16:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:03:56.485 01:16:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:03:56.485 01:16:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:56.485 01:16:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:56.485 [2024-09-28 01:16:52.354915] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:03:56.485 request: 00:03:56.485 { 00:03:56.485 "trtype": "tcp", 00:03:56.485 "method": "nvmf_get_transports", 00:03:56.485 "req_id": 1 00:03:56.485 } 00:03:56.485 Got JSON-RPC error response 00:03:56.485 response: 00:03:56.485 { 00:03:56.485 "code": -19, 00:03:56.485 "message": "No such device" 00:03:56.485 } 00:03:56.485 01:16:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:03:56.485 01:16:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:03:56.485 01:16:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:56.485 01:16:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:56.485 [2024-09-28 01:16:52.366999] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:56.485 01:16:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:56.485 01:16:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:03:56.485 01:16:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:56.485 01:16:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:56.744 01:16:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:56.744 01:16:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:03:56.744 { 00:03:56.744 "subsystems": [ 00:03:56.744 { 00:03:56.744 "subsystem": "fsdev", 00:03:56.744 "config": [ 00:03:56.744 { 00:03:56.744 "method": "fsdev_set_opts", 00:03:56.744 "params": { 00:03:56.744 "fsdev_io_pool_size": 65535, 00:03:56.744 "fsdev_io_cache_size": 256 00:03:56.744 } 00:03:56.744 } 00:03:56.744 ] 00:03:56.744 }, 00:03:56.744 { 00:03:56.744 "subsystem": "keyring", 00:03:56.744 "config": [] 00:03:56.744 }, 00:03:56.744 { 00:03:56.744 "subsystem": "iobuf", 00:03:56.744 "config": [ 00:03:56.744 { 00:03:56.744 "method": "iobuf_set_options", 00:03:56.744 "params": { 00:03:56.744 "small_pool_count": 8192, 00:03:56.744 "large_pool_count": 1024, 00:03:56.744 "small_bufsize": 8192, 00:03:56.744 "large_bufsize": 135168 00:03:56.744 } 00:03:56.744 } 00:03:56.744 ] 00:03:56.744 }, 00:03:56.744 { 00:03:56.744 "subsystem": "sock", 00:03:56.744 "config": [ 00:03:56.744 { 00:03:56.744 "method": 
"sock_set_default_impl", 00:03:56.744 "params": { 00:03:56.744 "impl_name": "posix" 00:03:56.744 } 00:03:56.744 }, 00:03:56.744 { 00:03:56.744 "method": "sock_impl_set_options", 00:03:56.744 "params": { 00:03:56.744 "impl_name": "ssl", 00:03:56.744 "recv_buf_size": 4096, 00:03:56.744 "send_buf_size": 4096, 00:03:56.744 "enable_recv_pipe": true, 00:03:56.744 "enable_quickack": false, 00:03:56.744 "enable_placement_id": 0, 00:03:56.744 "enable_zerocopy_send_server": true, 00:03:56.744 "enable_zerocopy_send_client": false, 00:03:56.744 "zerocopy_threshold": 0, 00:03:56.744 "tls_version": 0, 00:03:56.744 "enable_ktls": false 00:03:56.744 } 00:03:56.744 }, 00:03:56.744 { 00:03:56.744 "method": "sock_impl_set_options", 00:03:56.744 "params": { 00:03:56.744 "impl_name": "posix", 00:03:56.744 "recv_buf_size": 2097152, 00:03:56.744 "send_buf_size": 2097152, 00:03:56.744 "enable_recv_pipe": true, 00:03:56.744 "enable_quickack": false, 00:03:56.744 "enable_placement_id": 0, 00:03:56.744 "enable_zerocopy_send_server": true, 00:03:56.744 "enable_zerocopy_send_client": false, 00:03:56.744 "zerocopy_threshold": 0, 00:03:56.744 "tls_version": 0, 00:03:56.744 "enable_ktls": false 00:03:56.744 } 00:03:56.744 } 00:03:56.744 ] 00:03:56.744 }, 00:03:56.744 { 00:03:56.744 "subsystem": "vmd", 00:03:56.744 "config": [] 00:03:56.744 }, 00:03:56.744 { 00:03:56.744 "subsystem": "accel", 00:03:56.744 "config": [ 00:03:56.744 { 00:03:56.744 "method": "accel_set_options", 00:03:56.744 "params": { 00:03:56.744 "small_cache_size": 128, 00:03:56.744 "large_cache_size": 16, 00:03:56.744 "task_count": 2048, 00:03:56.744 "sequence_count": 2048, 00:03:56.744 "buf_count": 2048 00:03:56.744 } 00:03:56.744 } 00:03:56.744 ] 00:03:56.744 }, 00:03:56.744 { 00:03:56.744 "subsystem": "bdev", 00:03:56.744 "config": [ 00:03:56.744 { 00:03:56.744 "method": "bdev_set_options", 00:03:56.744 "params": { 00:03:56.744 "bdev_io_pool_size": 65535, 00:03:56.744 "bdev_io_cache_size": 256, 00:03:56.744 "bdev_auto_examine": true, 00:03:56.744 "iobuf_small_cache_size": 128, 00:03:56.744 "iobuf_large_cache_size": 16 00:03:56.744 } 00:03:56.744 }, 00:03:56.744 { 00:03:56.744 "method": "bdev_raid_set_options", 00:03:56.744 "params": { 00:03:56.744 "process_window_size_kb": 1024, 00:03:56.744 "process_max_bandwidth_mb_sec": 0 00:03:56.744 } 00:03:56.744 }, 00:03:56.744 { 00:03:56.744 "method": "bdev_iscsi_set_options", 00:03:56.744 "params": { 00:03:56.744 "timeout_sec": 30 00:03:56.744 } 00:03:56.744 }, 00:03:56.744 { 00:03:56.744 "method": "bdev_nvme_set_options", 00:03:56.744 "params": { 00:03:56.744 "action_on_timeout": "none", 00:03:56.744 "timeout_us": 0, 00:03:56.745 "timeout_admin_us": 0, 00:03:56.745 "keep_alive_timeout_ms": 10000, 00:03:56.745 "arbitration_burst": 0, 00:03:56.745 "low_priority_weight": 0, 00:03:56.745 "medium_priority_weight": 0, 00:03:56.745 "high_priority_weight": 0, 00:03:56.745 "nvme_adminq_poll_period_us": 10000, 00:03:56.745 "nvme_ioq_poll_period_us": 0, 00:03:56.745 "io_queue_requests": 0, 00:03:56.745 "delay_cmd_submit": true, 00:03:56.745 "transport_retry_count": 4, 00:03:56.745 "bdev_retry_count": 3, 00:03:56.745 "transport_ack_timeout": 0, 00:03:56.745 "ctrlr_loss_timeout_sec": 0, 00:03:56.745 "reconnect_delay_sec": 0, 00:03:56.745 "fast_io_fail_timeout_sec": 0, 00:03:56.745 "disable_auto_failback": false, 00:03:56.745 "generate_uuids": false, 00:03:56.745 "transport_tos": 0, 00:03:56.745 "nvme_error_stat": false, 00:03:56.745 "rdma_srq_size": 0, 00:03:56.745 "io_path_stat": false, 00:03:56.745 
"allow_accel_sequence": false, 00:03:56.745 "rdma_max_cq_size": 0, 00:03:56.745 "rdma_cm_event_timeout_ms": 0, 00:03:56.745 "dhchap_digests": [ 00:03:56.745 "sha256", 00:03:56.745 "sha384", 00:03:56.745 "sha512" 00:03:56.745 ], 00:03:56.745 "dhchap_dhgroups": [ 00:03:56.745 "null", 00:03:56.745 "ffdhe2048", 00:03:56.745 "ffdhe3072", 00:03:56.745 "ffdhe4096", 00:03:56.745 "ffdhe6144", 00:03:56.745 "ffdhe8192" 00:03:56.745 ] 00:03:56.745 } 00:03:56.745 }, 00:03:56.745 { 00:03:56.745 "method": "bdev_nvme_set_hotplug", 00:03:56.745 "params": { 00:03:56.745 "period_us": 100000, 00:03:56.745 "enable": false 00:03:56.745 } 00:03:56.745 }, 00:03:56.745 { 00:03:56.745 "method": "bdev_wait_for_examine" 00:03:56.745 } 00:03:56.745 ] 00:03:56.745 }, 00:03:56.745 { 00:03:56.745 "subsystem": "scsi", 00:03:56.745 "config": null 00:03:56.745 }, 00:03:56.745 { 00:03:56.745 "subsystem": "scheduler", 00:03:56.745 "config": [ 00:03:56.745 { 00:03:56.745 "method": "framework_set_scheduler", 00:03:56.745 "params": { 00:03:56.745 "name": "static" 00:03:56.745 } 00:03:56.745 } 00:03:56.745 ] 00:03:56.745 }, 00:03:56.745 { 00:03:56.745 "subsystem": "vhost_scsi", 00:03:56.745 "config": [] 00:03:56.745 }, 00:03:56.745 { 00:03:56.745 "subsystem": "vhost_blk", 00:03:56.745 "config": [] 00:03:56.745 }, 00:03:56.745 { 00:03:56.745 "subsystem": "ublk", 00:03:56.745 "config": [] 00:03:56.745 }, 00:03:56.745 { 00:03:56.745 "subsystem": "nbd", 00:03:56.745 "config": [] 00:03:56.745 }, 00:03:56.745 { 00:03:56.745 "subsystem": "nvmf", 00:03:56.745 "config": [ 00:03:56.745 { 00:03:56.745 "method": "nvmf_set_config", 00:03:56.745 "params": { 00:03:56.745 "discovery_filter": "match_any", 00:03:56.745 "admin_cmd_passthru": { 00:03:56.745 "identify_ctrlr": false 00:03:56.745 }, 00:03:56.745 "dhchap_digests": [ 00:03:56.745 "sha256", 00:03:56.745 "sha384", 00:03:56.745 "sha512" 00:03:56.745 ], 00:03:56.745 "dhchap_dhgroups": [ 00:03:56.745 "null", 00:03:56.745 "ffdhe2048", 00:03:56.745 "ffdhe3072", 00:03:56.745 "ffdhe4096", 00:03:56.745 "ffdhe6144", 00:03:56.745 "ffdhe8192" 00:03:56.745 ] 00:03:56.745 } 00:03:56.745 }, 00:03:56.745 { 00:03:56.745 "method": "nvmf_set_max_subsystems", 00:03:56.745 "params": { 00:03:56.745 "max_subsystems": 1024 00:03:56.745 } 00:03:56.745 }, 00:03:56.745 { 00:03:56.745 "method": "nvmf_set_crdt", 00:03:56.745 "params": { 00:03:56.745 "crdt1": 0, 00:03:56.745 "crdt2": 0, 00:03:56.745 "crdt3": 0 00:03:56.745 } 00:03:56.745 }, 00:03:56.745 { 00:03:56.745 "method": "nvmf_create_transport", 00:03:56.745 "params": { 00:03:56.745 "trtype": "TCP", 00:03:56.745 "max_queue_depth": 128, 00:03:56.745 "max_io_qpairs_per_ctrlr": 127, 00:03:56.745 "in_capsule_data_size": 4096, 00:03:56.745 "max_io_size": 131072, 00:03:56.745 "io_unit_size": 131072, 00:03:56.745 "max_aq_depth": 128, 00:03:56.745 "num_shared_buffers": 511, 00:03:56.745 "buf_cache_size": 4294967295, 00:03:56.745 "dif_insert_or_strip": false, 00:03:56.745 "zcopy": false, 00:03:56.745 "c2h_success": true, 00:03:56.745 "sock_priority": 0, 00:03:56.745 "abort_timeout_sec": 1, 00:03:56.745 "ack_timeout": 0, 00:03:56.745 "data_wr_pool_size": 0 00:03:56.745 } 00:03:56.745 } 00:03:56.745 ] 00:03:56.745 }, 00:03:56.745 { 00:03:56.745 "subsystem": "iscsi", 00:03:56.745 "config": [ 00:03:56.745 { 00:03:56.745 "method": "iscsi_set_options", 00:03:56.745 "params": { 00:03:56.745 "node_base": "iqn.2016-06.io.spdk", 00:03:56.745 "max_sessions": 128, 00:03:56.745 "max_connections_per_session": 2, 00:03:56.745 "max_queue_depth": 64, 00:03:56.745 "default_time2wait": 2, 
00:03:56.745 "default_time2retain": 20, 00:03:56.745 "first_burst_length": 8192, 00:03:56.745 "immediate_data": true, 00:03:56.745 "allow_duplicated_isid": false, 00:03:56.745 "error_recovery_level": 0, 00:03:56.745 "nop_timeout": 60, 00:03:56.745 "nop_in_interval": 30, 00:03:56.745 "disable_chap": false, 00:03:56.745 "require_chap": false, 00:03:56.745 "mutual_chap": false, 00:03:56.745 "chap_group": 0, 00:03:56.745 "max_large_datain_per_connection": 64, 00:03:56.745 "max_r2t_per_connection": 4, 00:03:56.745 "pdu_pool_size": 36864, 00:03:56.745 "immediate_data_pool_size": 16384, 00:03:56.745 "data_out_pool_size": 2048 00:03:56.745 } 00:03:56.745 } 00:03:56.745 ] 00:03:56.745 } 00:03:56.745 ] 00:03:56.745 } 00:03:56.745 01:16:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:03:56.745 01:16:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57767 00:03:56.745 01:16:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 57767 ']' 00:03:56.745 01:16:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 57767 00:03:56.745 01:16:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:03:56.745 01:16:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:03:56.745 01:16:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57767 00:03:56.745 01:16:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:03:56.745 01:16:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:03:56.745 killing process with pid 57767 00:03:56.745 01:16:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57767' 00:03:56.745 01:16:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 57767 00:03:56.745 01:16:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 57767 00:03:58.121 01:16:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57807 00:03:58.121 01:16:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:03:58.121 01:16:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:03.445 01:16:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57807 00:04:03.445 01:16:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 57807 ']' 00:04:03.445 01:16:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 57807 00:04:03.445 01:16:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:04:03.445 01:16:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:03.445 01:16:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57807 00:04:03.445 killing process with pid 57807 00:04:03.445 01:16:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:03.445 01:16:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:03.445 01:16:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57807' 00:04:03.445 01:16:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 57807 
00:04:03.445 01:16:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 57807 00:04:04.377 01:17:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:04.377 01:17:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:04.377 ************************************ 00:04:04.377 END TEST skip_rpc_with_json 00:04:04.377 ************************************ 00:04:04.377 00:04:04.377 real 0m8.681s 00:04:04.377 user 0m8.331s 00:04:04.377 sys 0m0.602s 00:04:04.377 01:17:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:04.377 01:17:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:04.377 01:17:00 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:04.377 01:17:00 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:04.377 01:17:00 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:04.377 01:17:00 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:04.377 ************************************ 00:04:04.377 START TEST skip_rpc_with_delay 00:04:04.377 ************************************ 00:04:04.377 01:17:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:04:04.377 01:17:00 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:04.377 01:17:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:04:04.377 01:17:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:04.377 01:17:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:04.377 01:17:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:04.377 01:17:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:04.377 01:17:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:04.377 01:17:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:04.377 01:17:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:04.377 01:17:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:04.377 01:17:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:04.377 01:17:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:04.377 [2024-09-28 01:17:00.197515] app.c: 840:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:04:04.377 [2024-09-28 01:17:00.197800] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:04.377 01:17:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:04:04.378 01:17:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:04.378 01:17:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:04.378 01:17:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:04.378 00:04:04.378 real 0m0.122s 00:04:04.378 user 0m0.065s 00:04:04.378 sys 0m0.056s 00:04:04.378 01:17:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:04.378 ************************************ 00:04:04.378 END TEST skip_rpc_with_delay 00:04:04.378 ************************************ 00:04:04.378 01:17:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:04.378 01:17:00 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:04.378 01:17:00 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:04.378 01:17:00 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:04.378 01:17:00 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:04.378 01:17:00 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:04.378 01:17:00 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:04.378 ************************************ 00:04:04.378 START TEST exit_on_failed_rpc_init 00:04:04.378 ************************************ 00:04:04.378 01:17:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:04:04.378 01:17:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57929 00:04:04.378 01:17:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:04.378 01:17:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57929 00:04:04.378 01:17:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 57929 ']' 00:04:04.378 01:17:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:04.378 01:17:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:04.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:04.378 01:17:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:04.378 01:17:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:04.378 01:17:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:04.635 [2024-09-28 01:17:00.370119] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
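The waitforlisten call traced above (its body is the common/autotest_common.sh@831-@840 lines: rpc_addr defaults to /var/tmp/spdk.sock, max_retries=100) blocks the test until the freshly launched spdk_tgt is actually serving RPC. Purely as an illustration of the pattern — hypothetical names, not SPDK's actual implementation — such a wait loop might look like:

  # Hypothetical sketch only -- not the real waitforlisten from autotest_common.sh.
  wait_for_rpc_sock() {
    local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
    for ((i = 0; i < 100; i++)); do
      kill -0 "$pid" 2>/dev/null || return 1  # give up if the target already died
      [[ -S $sock ]] && return 0              # UNIX socket present -> RPC server is up
      sleep 0.1
    done
    return 1                                  # timed out waiting for the listener
  }

Polling for the socket rather than sleeping a fixed interval keeps the happy path fast while still catching a target that crashes during startup — exactly the case the exit_on_failed_rpc_init test provokes next.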
00:04:04.635 [2024-09-28 01:17:00.370645] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57929 ] 00:04:04.635 [2024-09-28 01:17:00.530336] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:04.892 [2024-09-28 01:17:00.680279] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:05.458 01:17:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:05.458 01:17:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:04:05.458 01:17:01 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:05.458 01:17:01 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:05.458 01:17:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:04:05.458 01:17:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:05.458 01:17:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:05.458 01:17:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:05.458 01:17:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:05.458 01:17:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:05.458 01:17:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:05.458 01:17:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:05.458 01:17:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:05.458 01:17:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:05.458 01:17:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:05.458 [2024-09-28 01:17:01.240886] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:04:05.458 [2024-09-28 01:17:01.241202] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57947 ] 00:04:05.458 [2024-09-28 01:17:01.384777] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:05.716 [2024-09-28 01:17:01.542057] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:04:05.716 [2024-09-28 01:17:01.542151] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:04:05.716 [2024-09-28 01:17:01.542165] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:05.716 [2024-09-28 01:17:01.542175] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:05.974 01:17:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:04:05.974 01:17:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:05.974 01:17:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:04:05.974 01:17:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:04:05.974 01:17:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:04:05.974 01:17:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:05.974 01:17:01 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:05.974 01:17:01 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57929 00:04:05.974 01:17:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 57929 ']' 00:04:05.974 01:17:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 57929 00:04:05.974 01:17:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:04:05.974 01:17:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:05.974 01:17:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57929 00:04:05.974 killing process with pid 57929 00:04:05.974 01:17:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:05.974 01:17:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:05.974 01:17:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57929' 00:04:05.974 01:17:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 57929 00:04:05.974 01:17:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 57929 00:04:07.347 ************************************ 00:04:07.347 END TEST exit_on_failed_rpc_init 00:04:07.347 ************************************ 00:04:07.347 00:04:07.347 real 0m2.830s 00:04:07.347 user 0m3.264s 00:04:07.347 sys 0m0.393s 00:04:07.347 01:17:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:07.347 01:17:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:07.347 01:17:03 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:07.347 00:04:07.347 real 0m18.205s 00:04:07.347 user 0m17.689s 00:04:07.347 sys 0m1.487s 00:04:07.347 01:17:03 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:07.347 ************************************ 00:04:07.347 END TEST skip_rpc 00:04:07.347 ************************************ 00:04:07.347 01:17:03 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:07.347 01:17:03 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:07.347 01:17:03 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:07.347 01:17:03 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:07.347 01:17:03 -- common/autotest_common.sh@10 -- # set +x 00:04:07.347 
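The failure mode exit_on_failed_rpc_init verifies here is the RPC listen address: the second spdk_tgt (-m 0x2) tries to bind the same /var/tmp/spdk.sock the first instance (-m 0x1) already holds, rpc.c refuses with "in use. Specify another.", and the non-zero exit propagates through NOT as es=234, which the test maps down via es>128 to 106 and finally to es=1. When two targets are genuinely wanted side by side, each needs its own socket; a sketch, assuming spdk_tgt's -r (RPC socket path) option and rpc.py's -s flag:

  build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk_a.sock &
  build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk_b.sock &
  # Address each instance explicitly through its own socket:
  scripts/rpc.py -s /var/tmp/spdk_a.sock spdk_get_version
  scripts/rpc.py -s /var/tmp/spdk_b.sock spdk_get_version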
************************************ 00:04:07.347 START TEST rpc_client 00:04:07.347 ************************************ 00:04:07.347 01:17:03 rpc_client -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:07.347 * Looking for test storage... 00:04:07.347 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:07.347 01:17:03 rpc_client -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:07.347 01:17:03 rpc_client -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:07.347 01:17:03 rpc_client -- common/autotest_common.sh@1681 -- # lcov --version 00:04:07.605 01:17:03 rpc_client -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:07.605 01:17:03 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:07.605 01:17:03 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:07.605 01:17:03 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:07.605 01:17:03 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:07.605 01:17:03 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:07.605 01:17:03 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:07.605 01:17:03 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:07.605 01:17:03 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:07.605 01:17:03 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:07.605 01:17:03 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:07.605 01:17:03 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:07.605 01:17:03 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:07.605 01:17:03 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:07.605 01:17:03 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:07.605 01:17:03 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:07.605 01:17:03 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:07.605 01:17:03 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:07.605 01:17:03 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:07.605 01:17:03 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:07.605 01:17:03 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:07.605 01:17:03 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:07.605 01:17:03 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:07.605 01:17:03 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:07.606 01:17:03 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:07.606 01:17:03 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:07.606 01:17:03 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:07.606 01:17:03 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:07.606 01:17:03 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:07.606 01:17:03 rpc_client -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:07.606 01:17:03 rpc_client -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:07.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.606 --rc genhtml_branch_coverage=1 00:04:07.606 --rc genhtml_function_coverage=1 00:04:07.606 --rc genhtml_legend=1 00:04:07.606 --rc geninfo_all_blocks=1 00:04:07.606 --rc geninfo_unexecuted_blocks=1 00:04:07.606 00:04:07.606 ' 00:04:07.606 01:17:03 rpc_client -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:07.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.606 --rc genhtml_branch_coverage=1 00:04:07.606 --rc genhtml_function_coverage=1 00:04:07.606 --rc genhtml_legend=1 00:04:07.606 --rc geninfo_all_blocks=1 00:04:07.606 --rc geninfo_unexecuted_blocks=1 00:04:07.606 00:04:07.606 ' 00:04:07.606 01:17:03 rpc_client -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:07.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.606 --rc genhtml_branch_coverage=1 00:04:07.606 --rc genhtml_function_coverage=1 00:04:07.606 --rc genhtml_legend=1 00:04:07.606 --rc geninfo_all_blocks=1 00:04:07.606 --rc geninfo_unexecuted_blocks=1 00:04:07.606 00:04:07.606 ' 00:04:07.606 01:17:03 rpc_client -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:07.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.606 --rc genhtml_branch_coverage=1 00:04:07.606 --rc genhtml_function_coverage=1 00:04:07.606 --rc genhtml_legend=1 00:04:07.606 --rc geninfo_all_blocks=1 00:04:07.606 --rc geninfo_unexecuted_blocks=1 00:04:07.606 00:04:07.606 ' 00:04:07.606 01:17:03 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:07.606 OK 00:04:07.606 01:17:03 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:07.606 00:04:07.606 real 0m0.190s 00:04:07.606 user 0m0.101s 00:04:07.606 sys 0m0.094s 00:04:07.606 01:17:03 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:07.606 ************************************ 00:04:07.606 END TEST rpc_client 00:04:07.606 ************************************ 00:04:07.606 01:17:03 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:07.606 01:17:03 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:07.606 01:17:03 -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:07.606 01:17:03 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:07.606 01:17:03 -- common/autotest_common.sh@10 -- # set +x 00:04:07.606 ************************************ 00:04:07.606 START TEST json_config 00:04:07.606 ************************************ 00:04:07.606 01:17:03 json_config -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:07.606 01:17:03 json_config -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:07.606 01:17:03 json_config -- common/autotest_common.sh@1681 -- # lcov --version 00:04:07.606 01:17:03 json_config -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:07.606 01:17:03 json_config -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:07.606 01:17:03 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:07.606 01:17:03 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:07.606 01:17:03 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:07.606 01:17:03 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:07.606 01:17:03 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:07.606 01:17:03 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:07.606 01:17:03 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:07.606 01:17:03 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:07.606 01:17:03 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:07.606 01:17:03 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:07.606 01:17:03 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:07.606 01:17:03 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:07.606 01:17:03 json_config -- scripts/common.sh@345 -- # : 1 00:04:07.606 01:17:03 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:07.606 01:17:03 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:07.606 01:17:03 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:07.606 01:17:03 json_config -- scripts/common.sh@353 -- # local d=1 00:04:07.606 01:17:03 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:07.606 01:17:03 json_config -- scripts/common.sh@355 -- # echo 1 00:04:07.606 01:17:03 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:07.606 01:17:03 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:07.865 01:17:03 json_config -- scripts/common.sh@353 -- # local d=2 00:04:07.865 01:17:03 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:07.865 01:17:03 json_config -- scripts/common.sh@355 -- # echo 2 00:04:07.865 01:17:03 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:07.865 01:17:03 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:07.865 01:17:03 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:07.865 01:17:03 json_config -- scripts/common.sh@368 -- # return 0 00:04:07.865 01:17:03 json_config -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:07.865 01:17:03 json_config -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:07.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.865 --rc genhtml_branch_coverage=1 00:04:07.865 --rc genhtml_function_coverage=1 00:04:07.865 --rc genhtml_legend=1 00:04:07.865 --rc geninfo_all_blocks=1 00:04:07.865 --rc geninfo_unexecuted_blocks=1 00:04:07.865 00:04:07.865 ' 00:04:07.865 01:17:03 json_config -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:07.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.865 --rc genhtml_branch_coverage=1 00:04:07.865 --rc genhtml_function_coverage=1 00:04:07.865 --rc genhtml_legend=1 00:04:07.865 --rc geninfo_all_blocks=1 00:04:07.865 --rc geninfo_unexecuted_blocks=1 00:04:07.865 00:04:07.865 ' 00:04:07.865 01:17:03 json_config -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:07.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.865 --rc genhtml_branch_coverage=1 00:04:07.866 --rc genhtml_function_coverage=1 00:04:07.866 --rc genhtml_legend=1 00:04:07.866 --rc geninfo_all_blocks=1 00:04:07.866 --rc geninfo_unexecuted_blocks=1 00:04:07.866 00:04:07.866 ' 00:04:07.866 01:17:03 json_config -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:07.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.866 --rc genhtml_branch_coverage=1 00:04:07.866 --rc genhtml_function_coverage=1 00:04:07.866 --rc genhtml_legend=1 00:04:07.866 --rc geninfo_all_blocks=1 00:04:07.866 --rc geninfo_unexecuted_blocks=1 00:04:07.866 00:04:07.866 ' 00:04:07.866 01:17:03 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:07.866 01:17:03 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:07.866 01:17:03 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:07.866 01:17:03 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:07.866 01:17:03 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:07.866 01:17:03 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:07.866 01:17:03 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:07.866 01:17:03 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:07.866 01:17:03 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:07.866 01:17:03 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:07.866 01:17:03 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:07.866 01:17:03 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:07.866 01:17:03 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9a58713b-7f88-4222-8855-63517e9111a3 00:04:07.866 01:17:03 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=9a58713b-7f88-4222-8855-63517e9111a3 00:04:07.866 01:17:03 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:07.866 01:17:03 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:07.866 01:17:03 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:07.866 01:17:03 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:07.866 01:17:03 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:07.866 01:17:03 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:07.866 01:17:03 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:07.866 01:17:03 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:07.866 01:17:03 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:07.866 01:17:03 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:07.866 01:17:03 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:07.866 01:17:03 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:07.866 01:17:03 json_config -- paths/export.sh@5 -- # export PATH 00:04:07.866 01:17:03 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:07.866 01:17:03 json_config -- nvmf/common.sh@51 -- # : 0 00:04:07.866 01:17:03 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:07.866 01:17:03 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:07.866 01:17:03 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:07.866 01:17:03 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:07.866 01:17:03 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:07.866 01:17:03 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:07.866 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:07.866 01:17:03 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:07.866 01:17:03 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:07.866 01:17:03 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:07.866 01:17:03 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:07.866 01:17:03 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:07.866 01:17:03 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:07.866 01:17:03 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:07.866 01:17:03 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:07.866 WARNING: No tests are enabled so not running JSON configuration tests 00:04:07.866 01:17:03 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:04:07.866 01:17:03 json_config -- json_config/json_config.sh@28 -- # exit 0 00:04:07.866 00:04:07.866 real 0m0.141s 00:04:07.866 user 0m0.096s 00:04:07.866 sys 0m0.046s 00:04:07.866 01:17:03 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:07.866 01:17:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:07.866 ************************************ 00:04:07.866 END TEST json_config 00:04:07.866 ************************************ 00:04:07.866 01:17:03 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:07.866 01:17:03 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:07.866 01:17:03 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:07.866 01:17:03 -- common/autotest_common.sh@10 -- # set +x 00:04:07.866 ************************************ 00:04:07.866 START TEST json_config_extra_key 00:04:07.866 ************************************ 00:04:07.866 01:17:03 json_config_extra_key -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:07.866 01:17:03 json_config_extra_key -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:07.866 01:17:03 json_config_extra_key -- common/autotest_common.sh@1681 -- # lcov --version 00:04:07.866 01:17:03 json_config_extra_key -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:07.866 01:17:03 json_config_extra_key -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:07.866 01:17:03 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:07.866 01:17:03 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:07.866 01:17:03 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:07.866 01:17:03 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:07.866 01:17:03 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:07.866 01:17:03 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:07.866 01:17:03 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:07.866 01:17:03 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:07.866 01:17:03 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:07.866 01:17:03 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:07.866 01:17:03 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:07.866 01:17:03 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:07.866 01:17:03 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:07.866 01:17:03 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:07.866 01:17:03 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:07.866 01:17:03 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:07.866 01:17:03 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:07.866 01:17:03 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:07.866 01:17:03 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:07.866 01:17:03 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:07.866 01:17:03 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:07.866 01:17:03 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:07.866 01:17:03 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:07.866 01:17:03 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:07.866 01:17:03 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:07.866 01:17:03 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:07.866 01:17:03 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:07.866 01:17:03 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:07.866 01:17:03 json_config_extra_key -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:07.866 01:17:03 json_config_extra_key -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:07.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.866 --rc genhtml_branch_coverage=1 00:04:07.866 --rc genhtml_function_coverage=1 00:04:07.866 --rc genhtml_legend=1 00:04:07.866 --rc geninfo_all_blocks=1 00:04:07.866 --rc geninfo_unexecuted_blocks=1 00:04:07.866 00:04:07.866 ' 00:04:07.866 01:17:03 json_config_extra_key -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:07.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.866 --rc genhtml_branch_coverage=1 00:04:07.866 --rc genhtml_function_coverage=1 00:04:07.866 --rc genhtml_legend=1 00:04:07.866 --rc geninfo_all_blocks=1 00:04:07.866 --rc geninfo_unexecuted_blocks=1 00:04:07.866 00:04:07.866 ' 00:04:07.866 01:17:03 json_config_extra_key -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:07.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.867 --rc genhtml_branch_coverage=1 00:04:07.867 --rc genhtml_function_coverage=1 00:04:07.867 --rc genhtml_legend=1 00:04:07.867 --rc geninfo_all_blocks=1 00:04:07.867 --rc geninfo_unexecuted_blocks=1 00:04:07.867 00:04:07.867 ' 00:04:07.867 01:17:03 json_config_extra_key -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:07.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.867 --rc genhtml_branch_coverage=1 00:04:07.867 --rc 
genhtml_function_coverage=1 00:04:07.867 --rc genhtml_legend=1 00:04:07.867 --rc geninfo_all_blocks=1 00:04:07.867 --rc geninfo_unexecuted_blocks=1 00:04:07.867 00:04:07.867 ' 00:04:07.867 01:17:03 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:07.867 01:17:03 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:07.867 01:17:03 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:07.867 01:17:03 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:07.867 01:17:03 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:07.867 01:17:03 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:07.867 01:17:03 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:07.867 01:17:03 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:07.867 01:17:03 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:07.867 01:17:03 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:07.867 01:17:03 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:07.867 01:17:03 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:07.867 01:17:03 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9a58713b-7f88-4222-8855-63517e9111a3 00:04:07.867 01:17:03 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=9a58713b-7f88-4222-8855-63517e9111a3 00:04:07.867 01:17:03 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:07.867 01:17:03 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:07.867 01:17:03 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:07.867 01:17:03 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:07.867 01:17:03 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:07.867 01:17:03 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:07.867 01:17:03 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:07.867 01:17:03 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:07.867 01:17:03 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:07.867 01:17:03 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:07.867 01:17:03 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:07.867 01:17:03 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:07.867 01:17:03 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:07.867 01:17:03 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:07.867 01:17:03 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:07.867 01:17:03 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:07.867 01:17:03 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:07.867 01:17:03 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:07.867 01:17:03 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:07.867 01:17:03 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:07.867 01:17:03 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:07.867 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:07.867 01:17:03 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:07.867 01:17:03 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:07.867 01:17:03 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:07.867 01:17:03 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:07.867 01:17:03 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:07.867 01:17:03 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:07.867 01:17:03 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:07.867 01:17:03 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:07.867 01:17:03 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:07.867 01:17:03 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:07.867 01:17:03 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:07.867 01:17:03 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:07.867 01:17:03 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:07.867 01:17:03 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:07.867 INFO: launching applications... 
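The xtrace earlier in this test's setup (scripts/common.sh@333-368) is the version gate that decides whether the installed lcov is new enough: 'lt 1.15 2' expands to cmp_versions with a '<' operator, both version strings are split on '.', '-' and ':' into arrays, and the components are compared numerically left to right. A condensed, self-contained sketch of that logic, with paraphrased names rather than the verbatim SPDK helper:

    # Sketch of the lt/cmp_versions flow traced above; names are paraphrased,
    # not copied from scripts/common.sh.
    cmp_versions() {
        local -a ver1 ver2
        local op=$2 v len d1 d2
        IFS='.-:' read -ra ver1 <<< "$1"     # "1.15" -> (1 15)
        IFS='.-:' read -ra ver2 <<< "$3"     # "2"    -> (2)
        len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            d1=${ver1[v]:-0} d2=${ver2[v]:-0}            # short versions pad with 0
            [[ $d1 =~ ^[0-9]+$ && $d2 =~ ^[0-9]+$ ]] || return 1
            if (( d1 > d2 )); then [[ $op == '>' || $op == '>=' ]]; return; fi
            if (( d1 < d2 )); then [[ $op == '<' || $op == '<=' ]]; return; fi
        done
        [[ $op == '==' || $op == '<=' || $op == '>=' ]]  # all components equal
    }
    lt() { cmp_versions "$1" '<' "$2"; }                 # lt 1.15 2 -> exit 0 (true)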
00:04:07.867 01:17:03 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:07.867 01:17:03 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:07.867 01:17:03 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:07.867 01:17:03 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:07.867 01:17:03 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:07.867 01:17:03 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:07.867 01:17:03 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:07.867 01:17:03 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:07.867 01:17:03 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=58141 00:04:07.867 01:17:03 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:07.867 Waiting for target to run... 00:04:07.867 01:17:03 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 58141 /var/tmp/spdk_tgt.sock 00:04:07.867 01:17:03 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 58141 ']' 00:04:07.867 01:17:03 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:07.867 01:17:03 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:07.867 01:17:03 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:07.867 01:17:03 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:07.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:07.867 01:17:03 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:07.867 01:17:03 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:08.129 [2024-09-28 01:17:03.815169] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:04:08.129 [2024-09-28 01:17:03.815430] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58141 ] 00:04:08.395 [2024-09-28 01:17:04.107940] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:08.395 [2024-09-28 01:17:04.275362] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:08.961 00:04:08.961 INFO: shutting down applications... 00:04:08.961 01:17:04 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:08.961 01:17:04 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:04:08.961 01:17:04 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:08.961 01:17:04 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
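At this point spdk_tgt has been launched as pid 58141 with '-m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json .../extra_key.json' and the harness is waiting for its RPC socket. Reduced to essentials, the launch-and-wait pattern looks roughly like the sketch below; the polling loop is a simplification of waitforlisten, not the real helper, and the paths come from the log:

    # Minimal launch-and-wait sketch, assuming the repo layout seen in the log.
    SPDK_ROOT=/home/vagrant/spdk_repo/spdk
    RPC_SOCK=/var/tmp/spdk_tgt.sock

    "$SPDK_ROOT/build/bin/spdk_tgt" -m 0x1 -s 1024 -r "$RPC_SOCK" \
        --json "$SPDK_ROOT/test/json_config/extra_key.json" &
    app_pid=$!

    for (( i = 0; i < 100; i++ )); do      # max_retries=100, as in the trace
        # rpc_get_methods only answers once the target listens on $RPC_SOCK
        if "$SPDK_ROOT/scripts/rpc.py" -s "$RPC_SOCK" rpc_get_methods &>/dev/null; then
            echo "target up as pid $app_pid"
            break
        fi
        sleep 0.1
    done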
00:04:08.961 01:17:04 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:08.961 01:17:04 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:08.961 01:17:04 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:08.961 01:17:04 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 58141 ]] 00:04:08.961 01:17:04 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 58141 00:04:08.961 01:17:04 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:08.961 01:17:04 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:08.961 01:17:04 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58141 00:04:08.961 01:17:04 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:09.527 01:17:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:09.527 01:17:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:09.527 01:17:05 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58141 00:04:09.527 01:17:05 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:10.094 01:17:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:10.094 01:17:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:10.094 01:17:05 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58141 00:04:10.094 01:17:05 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:10.351 01:17:06 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:10.351 01:17:06 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:10.351 01:17:06 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58141 00:04:10.351 01:17:06 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:10.918 01:17:06 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:10.918 01:17:06 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:10.918 01:17:06 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58141 00:04:10.918 SPDK target shutdown done 00:04:10.918 01:17:06 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:10.918 01:17:06 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:10.918 01:17:06 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:10.918 01:17:06 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:10.918 Success 00:04:10.918 01:17:06 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:10.918 ************************************ 00:04:10.918 END TEST json_config_extra_key 00:04:10.918 ************************************ 00:04:10.918 00:04:10.918 real 0m3.180s 00:04:10.918 user 0m2.725s 00:04:10.918 sys 0m0.378s 00:04:10.918 01:17:06 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:10.918 01:17:06 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:10.918 01:17:06 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:10.918 01:17:06 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:10.918 01:17:06 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:10.918 01:17:06 -- common/autotest_common.sh@10 -- # set +x 00:04:10.918 
************************************ 00:04:10.918 START TEST alias_rpc 00:04:10.918 ************************************ 00:04:10.918 01:17:06 alias_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:11.177 * Looking for test storage... 00:04:11.177 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:11.177 01:17:06 alias_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:11.177 01:17:06 alias_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:04:11.177 01:17:06 alias_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:11.177 01:17:06 alias_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:11.177 01:17:06 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:11.177 01:17:06 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:11.177 01:17:06 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:11.177 01:17:06 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:11.177 01:17:06 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:11.177 01:17:06 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:11.177 01:17:06 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:11.177 01:17:06 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:11.177 01:17:06 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:11.177 01:17:06 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:11.177 01:17:06 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:11.177 01:17:06 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:11.177 01:17:06 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:11.177 01:17:06 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:11.177 01:17:06 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:11.177 01:17:06 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:11.177 01:17:06 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:11.177 01:17:06 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:11.177 01:17:06 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:11.177 01:17:06 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:11.177 01:17:06 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:11.177 01:17:06 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:11.177 01:17:06 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:11.177 01:17:06 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:11.177 01:17:06 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:11.177 01:17:06 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:11.177 01:17:06 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:11.177 01:17:06 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:11.177 01:17:06 alias_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:11.177 01:17:06 alias_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:11.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:11.177 --rc genhtml_branch_coverage=1 00:04:11.177 --rc genhtml_function_coverage=1 00:04:11.177 --rc genhtml_legend=1 00:04:11.177 --rc geninfo_all_blocks=1 00:04:11.177 --rc geninfo_unexecuted_blocks=1 00:04:11.177 00:04:11.177 ' 00:04:11.177 01:17:06 alias_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:11.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:11.177 --rc genhtml_branch_coverage=1 00:04:11.177 --rc genhtml_function_coverage=1 00:04:11.177 --rc genhtml_legend=1 00:04:11.177 --rc geninfo_all_blocks=1 00:04:11.177 --rc geninfo_unexecuted_blocks=1 00:04:11.177 00:04:11.177 ' 00:04:11.177 01:17:06 alias_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:11.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:11.177 --rc genhtml_branch_coverage=1 00:04:11.177 --rc genhtml_function_coverage=1 00:04:11.177 --rc genhtml_legend=1 00:04:11.177 --rc geninfo_all_blocks=1 00:04:11.177 --rc geninfo_unexecuted_blocks=1 00:04:11.177 00:04:11.177 ' 00:04:11.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:11.177 01:17:06 alias_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:11.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:11.177 --rc genhtml_branch_coverage=1 00:04:11.177 --rc genhtml_function_coverage=1 00:04:11.177 --rc genhtml_legend=1 00:04:11.177 --rc geninfo_all_blocks=1 00:04:11.177 --rc geninfo_unexecuted_blocks=1 00:04:11.177 00:04:11.177 ' 00:04:11.177 01:17:06 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:11.177 01:17:06 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=58234 00:04:11.177 01:17:06 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 58234 00:04:11.177 01:17:06 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 58234 ']' 00:04:11.177 01:17:06 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:11.177 01:17:06 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:11.177 01:17:06 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
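Before the alias_rpc run continues, it is worth unpacking the teardown the previous test traced (json_config/common.sh@38-45): send SIGINT to the target, then probe it with 'kill -0' every half second for up to 30 iterations before declaring the shutdown hung. A compact, paraphrased rendering of that loop:

    # Graceful-shutdown poll, reconstructed from the json_config_extra_key
    # teardown trace above; not the verbatim json_config/common.sh code.
    shutdown_app() {
        local pid=$1 i
        kill -SIGINT "$pid" 2>/dev/null || return 0      # already gone
        for (( i = 0; i < 30; i++ )); do
            kill -0 "$pid" 2>/dev/null || {              # kill -0: liveness probe only
                echo 'SPDK target shutdown done'
                return 0
            }
            sleep 0.5
        done
        echo "pid $pid did not exit in time" >&2
        return 1
    }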
00:04:11.177 01:17:06 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:11.177 01:17:06 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:11.177 01:17:06 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:11.177 [2024-09-28 01:17:07.049246] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:04:11.177 [2024-09-28 01:17:07.049364] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58234 ] 00:04:11.436 [2024-09-28 01:17:07.187649] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:11.436 [2024-09-28 01:17:07.336494] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:12.002 01:17:07 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:12.002 01:17:07 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:04:12.002 01:17:07 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:12.261 01:17:08 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 58234 00:04:12.261 01:17:08 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 58234 ']' 00:04:12.261 01:17:08 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 58234 00:04:12.261 01:17:08 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:04:12.261 01:17:08 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:12.261 01:17:08 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58234 00:04:12.261 killing process with pid 58234 00:04:12.261 01:17:08 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:12.261 01:17:08 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:12.261 01:17:08 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58234' 00:04:12.261 01:17:08 alias_rpc -- common/autotest_common.sh@969 -- # kill 58234 00:04:12.261 01:17:08 alias_rpc -- common/autotest_common.sh@974 -- # wait 58234 00:04:13.647 ************************************ 00:04:13.647 END TEST alias_rpc 00:04:13.647 ************************************ 00:04:13.647 00:04:13.647 real 0m2.470s 00:04:13.647 user 0m2.488s 00:04:13.647 sys 0m0.392s 00:04:13.647 01:17:09 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:13.647 01:17:09 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:13.647 01:17:09 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:13.647 01:17:09 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:13.647 01:17:09 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:13.647 01:17:09 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:13.647 01:17:09 -- common/autotest_common.sh@10 -- # set +x 00:04:13.647 ************************************ 00:04:13.647 START TEST spdkcli_tcp 00:04:13.647 ************************************ 00:04:13.647 01:17:09 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:13.647 * Looking for test storage... 
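The killprocess trace above for pid 58234 follows a fixed recipe: confirm the pid is alive, resolve its command name with ps (reactor_0 here), refuse to touch anything running as sudo, then signal and reap it. A simplified, Linux-only sketch, not the verbatim autotest_common.sh implementation:

    # killprocess, as reconstructed from the xtrace for pid 58234.
    killprocess() {
        local pid=$1 process_name
        kill -0 "$pid" || return 1                          # is it alive at all?
        if [[ $(uname) == Linux ]]; then
            process_name=$(ps --no-headers -o comm= "$pid") # e.g. reactor_0
        fi
        [[ $process_name == sudo ]] && return 1             # never kill a sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"                                         # default SIGTERM
        wait "$pid" 2>/dev/null                             # reap if it is our child
    }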
00:04:13.647 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:04:13.647 01:17:09 spdkcli_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:13.647 01:17:09 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:04:13.647 01:17:09 spdkcli_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:13.647 01:17:09 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:13.647 01:17:09 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:13.647 01:17:09 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:13.647 01:17:09 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:13.647 01:17:09 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:13.647 01:17:09 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:13.647 01:17:09 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:13.647 01:17:09 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:13.647 01:17:09 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:13.647 01:17:09 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:13.647 01:17:09 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:13.647 01:17:09 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:13.647 01:17:09 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:13.647 01:17:09 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:13.647 01:17:09 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:13.647 01:17:09 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:13.647 01:17:09 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:13.647 01:17:09 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:13.647 01:17:09 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:13.647 01:17:09 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:13.647 01:17:09 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:13.647 01:17:09 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:13.647 01:17:09 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:13.647 01:17:09 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:13.647 01:17:09 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:13.647 01:17:09 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:13.647 01:17:09 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:13.647 01:17:09 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:13.647 01:17:09 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:13.647 01:17:09 spdkcli_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:13.647 01:17:09 spdkcli_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:13.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:13.647 --rc genhtml_branch_coverage=1 00:04:13.647 --rc genhtml_function_coverage=1 00:04:13.647 --rc genhtml_legend=1 00:04:13.647 --rc geninfo_all_blocks=1 00:04:13.647 --rc geninfo_unexecuted_blocks=1 00:04:13.647 00:04:13.647 ' 00:04:13.647 01:17:09 spdkcli_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:13.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:13.647 --rc genhtml_branch_coverage=1 00:04:13.647 --rc genhtml_function_coverage=1 00:04:13.647 --rc genhtml_legend=1 00:04:13.647 --rc geninfo_all_blocks=1 00:04:13.648 --rc geninfo_unexecuted_blocks=1 00:04:13.648 
00:04:13.648 ' 00:04:13.648 01:17:09 spdkcli_tcp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:13.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:13.648 --rc genhtml_branch_coverage=1 00:04:13.648 --rc genhtml_function_coverage=1 00:04:13.648 --rc genhtml_legend=1 00:04:13.648 --rc geninfo_all_blocks=1 00:04:13.648 --rc geninfo_unexecuted_blocks=1 00:04:13.648 00:04:13.648 ' 00:04:13.648 01:17:09 spdkcli_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:13.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:13.648 --rc genhtml_branch_coverage=1 00:04:13.648 --rc genhtml_function_coverage=1 00:04:13.648 --rc genhtml_legend=1 00:04:13.648 --rc geninfo_all_blocks=1 00:04:13.648 --rc geninfo_unexecuted_blocks=1 00:04:13.648 00:04:13.648 ' 00:04:13.648 01:17:09 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:04:13.648 01:17:09 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:04:13.648 01:17:09 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:04:13.648 01:17:09 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:13.648 01:17:09 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:13.648 01:17:09 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:13.648 01:17:09 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:13.648 01:17:09 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:13.648 01:17:09 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:13.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:13.648 01:17:09 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=58324 00:04:13.648 01:17:09 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 58324 00:04:13.648 01:17:09 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:13.648 01:17:09 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 58324 ']' 00:04:13.648 01:17:09 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:13.648 01:17:09 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:13.648 01:17:09 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:13.648 01:17:09 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:13.648 01:17:09 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:13.648 [2024-09-28 01:17:09.539239] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
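The IP_ADDRESS/PORT pair configured above exists so the RPC layer can be exercised over TCP instead of the UNIX socket: a socat process bridges 127.0.0.1:9998 to /var/tmp/spdk.sock, and rpc.py connects through it with retry and timeout knobs, exactly as the next trace lines show. In isolation:

    # TCP bridge used by the spdkcli_tcp test, matching the trace that follows:
    # socat forwards 127.0.0.1:9998 to the target's UNIX RPC socket, and rpc.py
    # talks TCP with 100 connection retries and a 2-second timeout.
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py \
        -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods

    kill "$socat_pid"    # tear the bridge down once the RPC round-trip is done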
00:04:13.648 [2024-09-28 01:17:09.539665] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58324 ] 00:04:13.937 [2024-09-28 01:17:09.689258] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:13.937 [2024-09-28 01:17:09.837367] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:04:13.937 [2024-09-28 01:17:09.837521] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:14.517 01:17:10 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:14.517 01:17:10 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:04:14.517 01:17:10 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:14.517 01:17:10 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=58336 00:04:14.517 01:17:10 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:14.775 [ 00:04:14.775 "bdev_malloc_delete", 00:04:14.775 "bdev_malloc_create", 00:04:14.775 "bdev_null_resize", 00:04:14.775 "bdev_null_delete", 00:04:14.775 "bdev_null_create", 00:04:14.775 "bdev_nvme_cuse_unregister", 00:04:14.775 "bdev_nvme_cuse_register", 00:04:14.775 "bdev_opal_new_user", 00:04:14.775 "bdev_opal_set_lock_state", 00:04:14.775 "bdev_opal_delete", 00:04:14.775 "bdev_opal_get_info", 00:04:14.775 "bdev_opal_create", 00:04:14.775 "bdev_nvme_opal_revert", 00:04:14.775 "bdev_nvme_opal_init", 00:04:14.775 "bdev_nvme_send_cmd", 00:04:14.775 "bdev_nvme_set_keys", 00:04:14.775 "bdev_nvme_get_path_iostat", 00:04:14.775 "bdev_nvme_get_mdns_discovery_info", 00:04:14.775 "bdev_nvme_stop_mdns_discovery", 00:04:14.775 "bdev_nvme_start_mdns_discovery", 00:04:14.775 "bdev_nvme_set_multipath_policy", 00:04:14.775 "bdev_nvme_set_preferred_path", 00:04:14.775 "bdev_nvme_get_io_paths", 00:04:14.775 "bdev_nvme_remove_error_injection", 00:04:14.776 "bdev_nvme_add_error_injection", 00:04:14.776 "bdev_nvme_get_discovery_info", 00:04:14.776 "bdev_nvme_stop_discovery", 00:04:14.776 "bdev_nvme_start_discovery", 00:04:14.776 "bdev_nvme_get_controller_health_info", 00:04:14.776 "bdev_nvme_disable_controller", 00:04:14.776 "bdev_nvme_enable_controller", 00:04:14.776 "bdev_nvme_reset_controller", 00:04:14.776 "bdev_nvme_get_transport_statistics", 00:04:14.776 "bdev_nvme_apply_firmware", 00:04:14.776 "bdev_nvme_detach_controller", 00:04:14.776 "bdev_nvme_get_controllers", 00:04:14.776 "bdev_nvme_attach_controller", 00:04:14.776 "bdev_nvme_set_hotplug", 00:04:14.776 "bdev_nvme_set_options", 00:04:14.776 "bdev_passthru_delete", 00:04:14.776 "bdev_passthru_create", 00:04:14.776 "bdev_lvol_set_parent_bdev", 00:04:14.776 "bdev_lvol_set_parent", 00:04:14.776 "bdev_lvol_check_shallow_copy", 00:04:14.776 "bdev_lvol_start_shallow_copy", 00:04:14.776 "bdev_lvol_grow_lvstore", 00:04:14.776 "bdev_lvol_get_lvols", 00:04:14.776 "bdev_lvol_get_lvstores", 00:04:14.776 "bdev_lvol_delete", 00:04:14.776 "bdev_lvol_set_read_only", 00:04:14.776 "bdev_lvol_resize", 00:04:14.776 "bdev_lvol_decouple_parent", 00:04:14.776 "bdev_lvol_inflate", 00:04:14.776 "bdev_lvol_rename", 00:04:14.776 "bdev_lvol_clone_bdev", 00:04:14.776 "bdev_lvol_clone", 00:04:14.776 "bdev_lvol_snapshot", 00:04:14.776 "bdev_lvol_create", 00:04:14.776 "bdev_lvol_delete_lvstore", 00:04:14.776 "bdev_lvol_rename_lvstore", 00:04:14.776 
"bdev_lvol_create_lvstore", 00:04:14.776 "bdev_raid_set_options", 00:04:14.776 "bdev_raid_remove_base_bdev", 00:04:14.776 "bdev_raid_add_base_bdev", 00:04:14.776 "bdev_raid_delete", 00:04:14.776 "bdev_raid_create", 00:04:14.776 "bdev_raid_get_bdevs", 00:04:14.776 "bdev_error_inject_error", 00:04:14.776 "bdev_error_delete", 00:04:14.776 "bdev_error_create", 00:04:14.776 "bdev_split_delete", 00:04:14.776 "bdev_split_create", 00:04:14.776 "bdev_delay_delete", 00:04:14.776 "bdev_delay_create", 00:04:14.776 "bdev_delay_update_latency", 00:04:14.776 "bdev_zone_block_delete", 00:04:14.776 "bdev_zone_block_create", 00:04:14.776 "blobfs_create", 00:04:14.776 "blobfs_detect", 00:04:14.776 "blobfs_set_cache_size", 00:04:14.776 "bdev_xnvme_delete", 00:04:14.776 "bdev_xnvme_create", 00:04:14.776 "bdev_aio_delete", 00:04:14.776 "bdev_aio_rescan", 00:04:14.776 "bdev_aio_create", 00:04:14.776 "bdev_ftl_set_property", 00:04:14.776 "bdev_ftl_get_properties", 00:04:14.776 "bdev_ftl_get_stats", 00:04:14.776 "bdev_ftl_unmap", 00:04:14.776 "bdev_ftl_unload", 00:04:14.776 "bdev_ftl_delete", 00:04:14.776 "bdev_ftl_load", 00:04:14.776 "bdev_ftl_create", 00:04:14.776 "bdev_virtio_attach_controller", 00:04:14.776 "bdev_virtio_scsi_get_devices", 00:04:14.776 "bdev_virtio_detach_controller", 00:04:14.776 "bdev_virtio_blk_set_hotplug", 00:04:14.776 "bdev_iscsi_delete", 00:04:14.776 "bdev_iscsi_create", 00:04:14.776 "bdev_iscsi_set_options", 00:04:14.776 "accel_error_inject_error", 00:04:14.776 "ioat_scan_accel_module", 00:04:14.776 "dsa_scan_accel_module", 00:04:14.776 "iaa_scan_accel_module", 00:04:14.776 "keyring_file_remove_key", 00:04:14.776 "keyring_file_add_key", 00:04:14.776 "keyring_linux_set_options", 00:04:14.776 "fsdev_aio_delete", 00:04:14.776 "fsdev_aio_create", 00:04:14.776 "iscsi_get_histogram", 00:04:14.776 "iscsi_enable_histogram", 00:04:14.776 "iscsi_set_options", 00:04:14.776 "iscsi_get_auth_groups", 00:04:14.776 "iscsi_auth_group_remove_secret", 00:04:14.776 "iscsi_auth_group_add_secret", 00:04:14.776 "iscsi_delete_auth_group", 00:04:14.776 "iscsi_create_auth_group", 00:04:14.776 "iscsi_set_discovery_auth", 00:04:14.776 "iscsi_get_options", 00:04:14.776 "iscsi_target_node_request_logout", 00:04:14.776 "iscsi_target_node_set_redirect", 00:04:14.776 "iscsi_target_node_set_auth", 00:04:14.776 "iscsi_target_node_add_lun", 00:04:14.776 "iscsi_get_stats", 00:04:14.776 "iscsi_get_connections", 00:04:14.776 "iscsi_portal_group_set_auth", 00:04:14.776 "iscsi_start_portal_group", 00:04:14.776 "iscsi_delete_portal_group", 00:04:14.776 "iscsi_create_portal_group", 00:04:14.776 "iscsi_get_portal_groups", 00:04:14.776 "iscsi_delete_target_node", 00:04:14.776 "iscsi_target_node_remove_pg_ig_maps", 00:04:14.776 "iscsi_target_node_add_pg_ig_maps", 00:04:14.776 "iscsi_create_target_node", 00:04:14.776 "iscsi_get_target_nodes", 00:04:14.776 "iscsi_delete_initiator_group", 00:04:14.776 "iscsi_initiator_group_remove_initiators", 00:04:14.776 "iscsi_initiator_group_add_initiators", 00:04:14.776 "iscsi_create_initiator_group", 00:04:14.776 "iscsi_get_initiator_groups", 00:04:14.776 "nvmf_set_crdt", 00:04:14.776 "nvmf_set_config", 00:04:14.776 "nvmf_set_max_subsystems", 00:04:14.776 "nvmf_stop_mdns_prr", 00:04:14.776 "nvmf_publish_mdns_prr", 00:04:14.776 "nvmf_subsystem_get_listeners", 00:04:14.776 "nvmf_subsystem_get_qpairs", 00:04:14.776 "nvmf_subsystem_get_controllers", 00:04:14.776 "nvmf_get_stats", 00:04:14.776 "nvmf_get_transports", 00:04:14.776 "nvmf_create_transport", 00:04:14.776 "nvmf_get_targets", 00:04:14.776 
"nvmf_delete_target", 00:04:14.776 "nvmf_create_target", 00:04:14.776 "nvmf_subsystem_allow_any_host", 00:04:14.776 "nvmf_subsystem_set_keys", 00:04:14.776 "nvmf_subsystem_remove_host", 00:04:14.776 "nvmf_subsystem_add_host", 00:04:14.776 "nvmf_ns_remove_host", 00:04:14.776 "nvmf_ns_add_host", 00:04:14.776 "nvmf_subsystem_remove_ns", 00:04:14.776 "nvmf_subsystem_set_ns_ana_group", 00:04:14.776 "nvmf_subsystem_add_ns", 00:04:14.776 "nvmf_subsystem_listener_set_ana_state", 00:04:14.776 "nvmf_discovery_get_referrals", 00:04:14.776 "nvmf_discovery_remove_referral", 00:04:14.776 "nvmf_discovery_add_referral", 00:04:14.776 "nvmf_subsystem_remove_listener", 00:04:14.776 "nvmf_subsystem_add_listener", 00:04:14.776 "nvmf_delete_subsystem", 00:04:14.776 "nvmf_create_subsystem", 00:04:14.776 "nvmf_get_subsystems", 00:04:14.776 "env_dpdk_get_mem_stats", 00:04:14.776 "nbd_get_disks", 00:04:14.776 "nbd_stop_disk", 00:04:14.776 "nbd_start_disk", 00:04:14.776 "ublk_recover_disk", 00:04:14.776 "ublk_get_disks", 00:04:14.776 "ublk_stop_disk", 00:04:14.776 "ublk_start_disk", 00:04:14.776 "ublk_destroy_target", 00:04:14.776 "ublk_create_target", 00:04:14.776 "virtio_blk_create_transport", 00:04:14.776 "virtio_blk_get_transports", 00:04:14.776 "vhost_controller_set_coalescing", 00:04:14.776 "vhost_get_controllers", 00:04:14.776 "vhost_delete_controller", 00:04:14.776 "vhost_create_blk_controller", 00:04:14.776 "vhost_scsi_controller_remove_target", 00:04:14.776 "vhost_scsi_controller_add_target", 00:04:14.776 "vhost_start_scsi_controller", 00:04:14.776 "vhost_create_scsi_controller", 00:04:14.776 "thread_set_cpumask", 00:04:14.776 "scheduler_set_options", 00:04:14.776 "framework_get_governor", 00:04:14.776 "framework_get_scheduler", 00:04:14.776 "framework_set_scheduler", 00:04:14.776 "framework_get_reactors", 00:04:14.776 "thread_get_io_channels", 00:04:14.776 "thread_get_pollers", 00:04:14.776 "thread_get_stats", 00:04:14.776 "framework_monitor_context_switch", 00:04:14.776 "spdk_kill_instance", 00:04:14.776 "log_enable_timestamps", 00:04:14.776 "log_get_flags", 00:04:14.776 "log_clear_flag", 00:04:14.776 "log_set_flag", 00:04:14.776 "log_get_level", 00:04:14.776 "log_set_level", 00:04:14.776 "log_get_print_level", 00:04:14.776 "log_set_print_level", 00:04:14.776 "framework_enable_cpumask_locks", 00:04:14.776 "framework_disable_cpumask_locks", 00:04:14.776 "framework_wait_init", 00:04:14.776 "framework_start_init", 00:04:14.776 "scsi_get_devices", 00:04:14.776 "bdev_get_histogram", 00:04:14.776 "bdev_enable_histogram", 00:04:14.776 "bdev_set_qos_limit", 00:04:14.776 "bdev_set_qd_sampling_period", 00:04:14.776 "bdev_get_bdevs", 00:04:14.776 "bdev_reset_iostat", 00:04:14.776 "bdev_get_iostat", 00:04:14.776 "bdev_examine", 00:04:14.776 "bdev_wait_for_examine", 00:04:14.776 "bdev_set_options", 00:04:14.776 "accel_get_stats", 00:04:14.776 "accel_set_options", 00:04:14.776 "accel_set_driver", 00:04:14.776 "accel_crypto_key_destroy", 00:04:14.776 "accel_crypto_keys_get", 00:04:14.776 "accel_crypto_key_create", 00:04:14.776 "accel_assign_opc", 00:04:14.776 "accel_get_module_info", 00:04:14.776 "accel_get_opc_assignments", 00:04:14.776 "vmd_rescan", 00:04:14.776 "vmd_remove_device", 00:04:14.776 "vmd_enable", 00:04:14.776 "sock_get_default_impl", 00:04:14.776 "sock_set_default_impl", 00:04:14.776 "sock_impl_set_options", 00:04:14.776 "sock_impl_get_options", 00:04:14.776 "iobuf_get_stats", 00:04:14.776 "iobuf_set_options", 00:04:14.776 "keyring_get_keys", 00:04:14.776 "framework_get_pci_devices", 00:04:14.776 
"framework_get_config", 00:04:14.776 "framework_get_subsystems", 00:04:14.776 "fsdev_set_opts", 00:04:14.776 "fsdev_get_opts", 00:04:14.776 "trace_get_info", 00:04:14.776 "trace_get_tpoint_group_mask", 00:04:14.776 "trace_disable_tpoint_group", 00:04:14.776 "trace_enable_tpoint_group", 00:04:14.776 "trace_clear_tpoint_mask", 00:04:14.776 "trace_set_tpoint_mask", 00:04:14.776 "notify_get_notifications", 00:04:14.776 "notify_get_types", 00:04:14.776 "spdk_get_version", 00:04:14.776 "rpc_get_methods" 00:04:14.776 ] 00:04:14.776 01:17:10 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:14.777 01:17:10 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:14.777 01:17:10 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:14.777 01:17:10 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:14.777 01:17:10 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 58324 00:04:14.777 01:17:10 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 58324 ']' 00:04:14.777 01:17:10 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 58324 00:04:14.777 01:17:10 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:04:14.777 01:17:10 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:14.777 01:17:10 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58324 00:04:14.777 01:17:10 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:14.777 01:17:10 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:14.777 01:17:10 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58324' 00:04:14.777 killing process with pid 58324 00:04:14.777 01:17:10 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 58324 00:04:14.777 01:17:10 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 58324 00:04:16.151 00:04:16.151 real 0m2.496s 00:04:16.151 user 0m4.308s 00:04:16.151 sys 0m0.388s 00:04:16.151 01:17:11 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:16.151 ************************************ 00:04:16.151 END TEST spdkcli_tcp 00:04:16.151 ************************************ 00:04:16.151 01:17:11 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:16.151 01:17:11 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:16.151 01:17:11 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:16.151 01:17:11 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:16.151 01:17:11 -- common/autotest_common.sh@10 -- # set +x 00:04:16.151 ************************************ 00:04:16.151 START TEST dpdk_mem_utility 00:04:16.151 ************************************ 00:04:16.151 01:17:11 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:16.151 * Looking for test storage... 
00:04:16.151 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:04:16.151 01:17:11 dpdk_mem_utility -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:16.151 01:17:11 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lcov --version 00:04:16.151 01:17:11 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:16.151 01:17:11 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:16.151 01:17:11 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:16.151 01:17:11 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:16.151 01:17:11 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:16.151 01:17:11 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:16.151 01:17:11 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:16.151 01:17:11 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:16.151 01:17:11 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:04:16.151 01:17:11 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:16.151 01:17:11 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:16.151 01:17:11 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:16.151 01:17:11 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:16.151 01:17:11 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:16.151 01:17:11 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:16.151 01:17:11 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:16.151 01:17:11 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:16.151 01:17:11 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:16.151 01:17:11 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:16.151 01:17:11 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:16.151 01:17:11 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:16.151 01:17:11 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:16.151 01:17:11 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:16.151 01:17:11 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:16.151 01:17:11 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:16.151 01:17:11 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:16.151 01:17:11 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:16.151 01:17:11 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:16.151 01:17:11 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:16.151 01:17:11 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:16.151 01:17:11 dpdk_mem_utility -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:16.151 01:17:11 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:16.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.151 --rc genhtml_branch_coverage=1 00:04:16.151 --rc genhtml_function_coverage=1 00:04:16.151 --rc genhtml_legend=1 00:04:16.151 --rc geninfo_all_blocks=1 00:04:16.151 --rc geninfo_unexecuted_blocks=1 00:04:16.151 00:04:16.151 ' 00:04:16.151 01:17:11 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:16.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.151 --rc 
genhtml_branch_coverage=1 00:04:16.151 --rc genhtml_function_coverage=1 00:04:16.151 --rc genhtml_legend=1 00:04:16.151 --rc geninfo_all_blocks=1 00:04:16.151 --rc geninfo_unexecuted_blocks=1 00:04:16.151 00:04:16.151 ' 00:04:16.152 01:17:11 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:16.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.152 --rc genhtml_branch_coverage=1 00:04:16.152 --rc genhtml_function_coverage=1 00:04:16.152 --rc genhtml_legend=1 00:04:16.152 --rc geninfo_all_blocks=1 00:04:16.152 --rc geninfo_unexecuted_blocks=1 00:04:16.152 00:04:16.152 ' 00:04:16.152 01:17:11 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:16.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.152 --rc genhtml_branch_coverage=1 00:04:16.152 --rc genhtml_function_coverage=1 00:04:16.152 --rc genhtml_legend=1 00:04:16.152 --rc geninfo_all_blocks=1 00:04:16.152 --rc geninfo_unexecuted_blocks=1 00:04:16.152 00:04:16.152 ' 00:04:16.152 01:17:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:16.152 01:17:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58430 00:04:16.152 01:17:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58430 00:04:16.152 01:17:11 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 58430 ']' 00:04:16.152 01:17:11 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:16.152 01:17:11 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:16.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:16.152 01:17:11 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:16.152 01:17:11 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:16.152 01:17:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:16.152 01:17:11 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:16.152 [2024-09-28 01:17:12.056936] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
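The dpdk_mem_utility flow that follows is short: ask the running target to dump its DPDK memory state via the env_dpdk_get_mem_stats RPC (the JSON reply below names /tmp/spdk_mem_dump.txt), then post-process that dump with scripts/dpdk_mem_info.py, once for the overall summary and once per heap with -m. Condensed, using the invocations visible in the trace:

    # Condensed dpdk_mem_utility flow; paths and flags match the log below.
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py

    "$RPC" env_dpdk_get_mem_stats        # target writes /tmp/spdk_mem_dump.txt
    "$MEM_SCRIPT"                        # heap/mempool/memzone summary
    "$MEM_SCRIPT" -m 0                   # per-heap detail for heap id 0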
00:04:16.152 [2024-09-28 01:17:12.057037] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58430 ] 00:04:16.410 [2024-09-28 01:17:12.197334] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:16.669 [2024-09-28 01:17:12.344301] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:16.928 01:17:12 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:16.928 01:17:12 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:04:16.928 01:17:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:16.928 01:17:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:16.928 01:17:12 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:16.928 01:17:12 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:17.188 { 00:04:17.188 "filename": "/tmp/spdk_mem_dump.txt" 00:04:17.188 } 00:04:17.188 01:17:12 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:17.188 01:17:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:17.188 DPDK memory size 866.000000 MiB in 1 heap(s) 00:04:17.188 1 heaps totaling size 866.000000 MiB 00:04:17.188 size: 866.000000 MiB heap id: 0 00:04:17.188 end heaps---------- 00:04:17.188 9 mempools totaling size 642.649841 MiB 00:04:17.188 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:17.188 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:17.188 size: 92.545471 MiB name: bdev_io_58430 00:04:17.188 size: 51.011292 MiB name: evtpool_58430 00:04:17.188 size: 50.003479 MiB name: msgpool_58430 00:04:17.188 size: 36.509338 MiB name: fsdev_io_58430 00:04:17.188 size: 21.763794 MiB name: PDU_Pool 00:04:17.188 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:17.188 size: 0.026123 MiB name: Session_Pool 00:04:17.188 end mempools------- 00:04:17.188 6 memzones totaling size 4.142822 MiB 00:04:17.188 size: 1.000366 MiB name: RG_ring_0_58430 00:04:17.188 size: 1.000366 MiB name: RG_ring_1_58430 00:04:17.188 size: 1.000366 MiB name: RG_ring_4_58430 00:04:17.188 size: 1.000366 MiB name: RG_ring_5_58430 00:04:17.188 size: 0.125366 MiB name: RG_ring_2_58430 00:04:17.188 size: 0.015991 MiB name: RG_ring_3_58430 00:04:17.188 end memzones------- 00:04:17.188 01:17:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:04:17.188 heap id: 0 total size: 866.000000 MiB number of busy elements: 322 number of free elements: 19 00:04:17.188 list of free elements. 
size: 19.911865 MiB 00:04:17.188 element at address: 0x200000400000 with size: 1.999451 MiB 00:04:17.188 element at address: 0x200000800000 with size: 1.996887 MiB 00:04:17.188 element at address: 0x200009600000 with size: 1.995972 MiB 00:04:17.188 element at address: 0x20000d800000 with size: 1.995972 MiB 00:04:17.188 element at address: 0x200007000000 with size: 1.991028 MiB 00:04:17.188 element at address: 0x20001bf00040 with size: 0.999939 MiB 00:04:17.188 element at address: 0x20001c300040 with size: 0.999939 MiB 00:04:17.188 element at address: 0x20001c400000 with size: 0.999084 MiB 00:04:17.188 element at address: 0x200035000000 with size: 0.994324 MiB 00:04:17.188 element at address: 0x20001bc00000 with size: 0.959656 MiB 00:04:17.188 element at address: 0x20001c700040 with size: 0.936401 MiB 00:04:17.188 element at address: 0x200000200000 with size: 0.831909 MiB 00:04:17.188 element at address: 0x20001de00000 with size: 0.560974 MiB 00:04:17.188 element at address: 0x200003e00000 with size: 0.490417 MiB 00:04:17.188 element at address: 0x20001c000000 with size: 0.487976 MiB 00:04:17.188 element at address: 0x20001c800000 with size: 0.485413 MiB 00:04:17.188 element at address: 0x200015e00000 with size: 0.443237 MiB 00:04:17.188 element at address: 0x20002b200000 with size: 0.390442 MiB 00:04:17.188 element at address: 0x200003a00000 with size: 0.352844 MiB 00:04:17.188 list of standard malloc elements. size: 199.289429 MiB 00:04:17.188 element at address: 0x20000d9fef80 with size: 132.000183 MiB 00:04:17.188 element at address: 0x2000097fef80 with size: 64.000183 MiB 00:04:17.188 element at address: 0x20001bdfff80 with size: 1.000183 MiB 00:04:17.188 element at address: 0x20001c1fff80 with size: 1.000183 MiB 00:04:17.188 element at address: 0x20001c5fff80 with size: 1.000183 MiB 00:04:17.188 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:04:17.188 element at address: 0x20001c7eff40 with size: 0.062683 MiB 00:04:17.188 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:04:17.188 element at address: 0x20000d7ff040 with size: 0.000427 MiB 00:04:17.188 element at address: 0x20001c7efdc0 with size: 0.000366 MiB 00:04:17.188 element at address: 0x200015dff040 with size: 0.000305 MiB 00:04:17.188 element at address: 0x2000002d4f80 with size: 0.000244 MiB 00:04:17.188 element at address: 0x2000002d5080 with size: 0.000244 MiB 00:04:17.189 element at address: 0x2000002d5180 with size: 0.000244 MiB 00:04:17.189 element at address: 0x2000002d5280 with size: 0.000244 MiB 00:04:17.189 element at address: 0x2000002d5380 with size: 0.000244 MiB 00:04:17.189 element at address: 0x2000002d5480 with size: 0.000244 MiB 00:04:17.189 element at address: 0x2000002d5580 with size: 0.000244 MiB 00:04:17.189 element at address: 0x2000002d5680 with size: 0.000244 MiB 00:04:17.189 element at address: 0x2000002d5780 with size: 0.000244 MiB 00:04:17.189 element at address: 0x2000002d5880 with size: 0.000244 MiB 00:04:17.189 element at address: 0x2000002d5980 with size: 0.000244 MiB 00:04:17.189 element at address: 0x2000002d5a80 with size: 0.000244 MiB 00:04:17.189 element at address: 0x2000002d5b80 with size: 0.000244 MiB 00:04:17.189 element at address: 0x2000002d5c80 with size: 0.000244 MiB 00:04:17.189 element at address: 0x2000002d5d80 with size: 0.000244 MiB 00:04:17.189 element at address: 0x2000002d5e80 with size: 0.000244 MiB 00:04:17.189 element at address: 0x2000002d5f80 with size: 0.000244 MiB 00:04:17.189 element at address: 0x2000002d6200 with size: 0.000244 MiB 
00:04:17.189 element at address: 0x2000002d6300 with size: 0.000244 MiB 00:04:17.189 element at address: 0x2000002d6400 with size: 0.000244 MiB 00:04:17.189 element at address: 0x2000002d6500 with size: 0.000244 MiB 00:04:17.189 element at address: 0x2000002d6600 with size: 0.000244 MiB 00:04:17.189 element at address: 0x2000002d6700 with size: 0.000244 MiB 00:04:17.189 element at address: 0x2000002d6800 with size: 0.000244 MiB 00:04:17.189 element at address: 0x2000002d6900 with size: 0.000244 MiB 00:04:17.189 element at address: 0x2000002d6a00 with size: 0.000244 MiB 00:04:17.189 element at address: 0x2000002d6b00 with size: 0.000244 MiB 00:04:17.189 element at address: 0x2000002d6c00 with size: 0.000244 MiB 00:04:17.189 element at address: 0x2000002d6d00 with size: 0.000244 MiB 00:04:17.189 element at address: 0x2000002d6e00 with size: 0.000244 MiB 00:04:17.189 element at address: 0x2000002d6f00 with size: 0.000244 MiB 00:04:17.189 element at address: 0x2000002d7000 with size: 0.000244 MiB 00:04:17.189 element at address: 0x2000002d7100 with size: 0.000244 MiB 00:04:17.189 element at address: 0x2000002d7200 with size: 0.000244 MiB 00:04:17.189 element at address: 0x2000002d7300 with size: 0.000244 MiB 00:04:17.189 element at address: 0x2000002d7400 with size: 0.000244 MiB 00:04:17.189 element at address: 0x2000002d7500 with size: 0.000244 MiB 00:04:17.189 element at address: 0x2000002d7600 with size: 0.000244 MiB 00:04:17.189 element at address: 0x2000002d7700 with size: 0.000244 MiB 00:04:17.189 element at address: 0x2000002d7800 with size: 0.000244 MiB 00:04:17.189 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:04:17.189 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:04:17.189 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:04:17.189 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:04:17.189 element at address: 0x200003a7e9c0 with size: 0.000244 MiB 00:04:17.189 element at address: 0x200003a7eac0 with size: 0.000244 MiB 00:04:17.189 element at address: 0x200003a7ebc0 with size: 0.000244 MiB 00:04:17.189 element at address: 0x200003a7ecc0 with size: 0.000244 MiB 00:04:17.189 element at address: 0x200003a7edc0 with size: 0.000244 MiB 00:04:17.189 element at address: 0x200003a7eec0 with size: 0.000244 MiB 00:04:17.189 element at address: 0x200003a7efc0 with size: 0.000244 MiB 00:04:17.189 element at address: 0x200003a7f0c0 with size: 0.000244 MiB 00:04:17.189 element at address: 0x200003a7f1c0 with size: 0.000244 MiB 00:04:17.189 element at address: 0x200003a7f2c0 with size: 0.000244 MiB 00:04:17.189 element at address: 0x200003a7f3c0 with size: 0.000244 MiB 00:04:17.189 element at address: 0x200003aff700 with size: 0.000244 MiB 00:04:17.189 element at address: 0x200003aff980 with size: 0.000244 MiB 00:04:17.189 element at address: 0x200003affa80 with size: 0.000244 MiB 00:04:17.189 element at address: 0x200003e7d8c0 with size: 0.000244 MiB 00:04:17.189 element at address: 0x200003e7d9c0 with size: 0.000244 MiB 00:04:17.189 element at address: 0x200003e7dac0 with size: 0.000244 MiB 00:04:17.189 element at address: 0x200003e7dbc0 with size: 0.000244 MiB 00:04:17.189 element at address: 0x200003e7dcc0 with size: 0.000244 MiB 00:04:17.189 element at address: 0x200003e7ddc0 with size: 0.000244 MiB 00:04:17.189 element at address: 0x200003e7dec0 with size: 0.000244 MiB 00:04:17.189 element at address: 0x200003e7dfc0 with size: 0.000244 MiB 00:04:17.189 element at address: 0x200003e7e0c0 with size: 0.000244 MiB 00:04:17.189 element at 
address: 0x200003e7e1c0 with size: 0.000244 MiB 00:04:17.189 element at address: 0x200003e7e2c0 with size: 0.000244 MiB 00:04:17.189 element at address: 0x200003e7e3c0 with size: 0.000244 MiB 00:04:17.189 element at address: 0x200003e7e4c0 with size: 0.000244 MiB 00:04:17.189 element at address: 0x200003e7e5c0 with size: 0.000244 MiB 00:04:17.189 element at address: 0x200003e7e6c0 with size: 0.000244 MiB 00:04:17.189 element at address: 0x200003e7e7c0 with size: 0.000244 MiB 00:04:17.189 element at address: 0x200003e7e8c0 with size: 0.000244 MiB 00:04:17.189 element at address: 0x200003e7e9c0 with size: 0.000244 MiB 00:04:17.189 element at address: 0x200003e7eac0 with size: 0.000244 MiB 00:04:17.189 element at address: 0x200003e7ebc0 with size: 0.000244 MiB 00:04:17.189 element at address: 0x200003e7ecc0 with size: 0.000244 MiB 00:04:17.189 element at address: 0x200003eff000 with size: 0.000244 MiB 00:04:17.189 element at address: 0x20000d7ff200 with size: 0.000244 MiB 00:04:17.189 element at address: 0x20000d7ff300 with size: 0.000244 MiB 00:04:17.189 element at address: 0x20000d7ff400 with size: 0.000244 MiB 00:04:17.189 element at address: 0x20000d7ff500 with size: 0.000244 MiB 00:04:17.189 element at address: 0x20000d7ff600 with size: 0.000244 MiB 00:04:17.189 element at address: 0x20000d7ff700 with size: 0.000244 MiB 00:04:17.189 element at address: 0x20000d7ff800 with size: 0.000244 MiB 00:04:17.189 element at address: 0x20000d7ff900 with size: 0.000244 MiB 00:04:17.189 element at address: 0x20000d7ffa00 with size: 0.000244 MiB 00:04:17.189 element at address: 0x20000d7ffb00 with size: 0.000244 MiB 00:04:17.189 element at address: 0x20000d7ffc00 with size: 0.000244 MiB 00:04:17.189 element at address: 0x20000d7ffd00 with size: 0.000244 MiB 00:04:17.189 element at address: 0x20000d7ffe00 with size: 0.000244 MiB 00:04:17.189 element at address: 0x20000d7fff00 with size: 0.000244 MiB 00:04:17.189 element at address: 0x200015dff180 with size: 0.000244 MiB 00:04:17.189 element at address: 0x200015dff280 with size: 0.000244 MiB 00:04:17.189 element at address: 0x200015dff380 with size: 0.000244 MiB 00:04:17.189 element at address: 0x200015dff480 with size: 0.000244 MiB 00:04:17.189 element at address: 0x200015dff580 with size: 0.000244 MiB 00:04:17.189 element at address: 0x200015dff680 with size: 0.000244 MiB 00:04:17.189 element at address: 0x200015dff780 with size: 0.000244 MiB 00:04:17.189 element at address: 0x200015dff880 with size: 0.000244 MiB 00:04:17.189 element at address: 0x200015dff980 with size: 0.000244 MiB 00:04:17.189 element at address: 0x200015dffa80 with size: 0.000244 MiB 00:04:17.189 element at address: 0x200015dffb80 with size: 0.000244 MiB 00:04:17.189 element at address: 0x200015dffc80 with size: 0.000244 MiB 00:04:17.189 element at address: 0x200015dfff00 with size: 0.000244 MiB 00:04:17.189 element at address: 0x200015e71780 with size: 0.000244 MiB 00:04:17.189 element at address: 0x200015e71880 with size: 0.000244 MiB 00:04:17.189 element at address: 0x200015e71980 with size: 0.000244 MiB 00:04:17.189 element at address: 0x200015e71a80 with size: 0.000244 MiB 00:04:17.189 element at address: 0x200015e71b80 with size: 0.000244 MiB 00:04:17.189 element at address: 0x200015e71c80 with size: 0.000244 MiB 00:04:17.189 element at address: 0x200015e71d80 with size: 0.000244 MiB 00:04:17.189 element at address: 0x200015e71e80 with size: 0.000244 MiB 00:04:17.189 element at address: 0x200015e71f80 with size: 0.000244 MiB 00:04:17.189 element at address: 0x200015e72080 
with size: 0.000244 MiB 00:04:17.189 element at address: 0x200015e72180 with size: 0.000244 MiB 00:04:17.189 element at address: 0x200015ef24c0 with size: 0.000244 MiB 00:04:17.189 element at address: 0x20001bcfdd00 with size: 0.000244 MiB 00:04:17.189 element at address: 0x20001c07cec0 with size: 0.000244 MiB 00:04:17.189 element at address: 0x20001c07cfc0 with size: 0.000244 MiB 00:04:17.189 element at address: 0x20001c07d0c0 with size: 0.000244 MiB 00:04:17.189 element at address: 0x20001c07d1c0 with size: 0.000244 MiB 00:04:17.189 element at address: 0x20001c07d2c0 with size: 0.000244 MiB 00:04:17.189 element at address: 0x20001c07d3c0 with size: 0.000244 MiB 00:04:17.189 element at address: 0x20001c07d4c0 with size: 0.000244 MiB 00:04:17.189 element at address: 0x20001c07d5c0 with size: 0.000244 MiB 00:04:17.189 element at address: 0x20001c07d6c0 with size: 0.000244 MiB 00:04:17.189 element at address: 0x20001c07d7c0 with size: 0.000244 MiB 00:04:17.189 element at address: 0x20001c07d8c0 with size: 0.000244 MiB 00:04:17.189 element at address: 0x20001c07d9c0 with size: 0.000244 MiB 00:04:17.189 element at address: 0x20001c0fdd00 with size: 0.000244 MiB 00:04:17.189 element at address: 0x20001c4ffc40 with size: 0.000244 MiB 00:04:17.189 element at address: 0x20001c7efbc0 with size: 0.000244 MiB 00:04:17.189 element at address: 0x20001c7efcc0 with size: 0.000244 MiB 00:04:17.189 element at address: 0x20001c8bc680 with size: 0.000244 MiB 00:04:17.189 element at address: 0x20001de8f9c0 with size: 0.000244 MiB 00:04:17.189 element at address: 0x20001de8fac0 with size: 0.000244 MiB 00:04:17.189 element at address: 0x20001de8fbc0 with size: 0.000244 MiB 00:04:17.189 element at address: 0x20001de8fcc0 with size: 0.000244 MiB 00:04:17.189 element at address: 0x20001de8fdc0 with size: 0.000244 MiB 00:04:17.189 element at address: 0x20001de8fec0 with size: 0.000244 MiB 00:04:17.189 element at address: 0x20001de8ffc0 with size: 0.000244 MiB 00:04:17.189 element at address: 0x20001de900c0 with size: 0.000244 MiB 00:04:17.189 element at address: 0x20001de901c0 with size: 0.000244 MiB 00:04:17.189 element at address: 0x20001de902c0 with size: 0.000244 MiB 00:04:17.189 element at address: 0x20001de903c0 with size: 0.000244 MiB 00:04:17.189 element at address: 0x20001de904c0 with size: 0.000244 MiB 00:04:17.189 element at address: 0x20001de905c0 with size: 0.000244 MiB 00:04:17.189 element at address: 0x20001de906c0 with size: 0.000244 MiB 00:04:17.189 element at address: 0x20001de907c0 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20001de908c0 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20001de909c0 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20001de90ac0 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20001de90bc0 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20001de90cc0 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20001de90dc0 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20001de90ec0 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20001de90fc0 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20001de910c0 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20001de911c0 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20001de912c0 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20001de913c0 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20001de914c0 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20001de915c0 with size: 0.000244 MiB 
00:04:17.190 element at address: 0x20001de916c0 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20001de917c0 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20001de918c0 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20001de919c0 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20001de91ac0 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20001de91bc0 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20001de91cc0 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20001de91dc0 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20001de91ec0 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20001de91fc0 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20001de920c0 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20001de921c0 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20001de922c0 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20001de923c0 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20001de924c0 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20001de925c0 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20001de926c0 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20001de927c0 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20001de928c0 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20001de929c0 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20001de92ac0 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20001de92bc0 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20001de92cc0 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20001de92dc0 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20001de92ec0 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20001de92fc0 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20001de930c0 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20001de931c0 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20001de932c0 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20001de933c0 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20001de934c0 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20001de935c0 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20001de936c0 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20001de937c0 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20001de938c0 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20001de939c0 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20001de93ac0 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20001de93bc0 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20001de93cc0 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20001de93dc0 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20001de93ec0 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20001de93fc0 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20001de940c0 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20001de941c0 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20001de942c0 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20001de943c0 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20001de944c0 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20001de945c0 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20001de946c0 with size: 0.000244 MiB 00:04:17.190 element at 
address: 0x20001de947c0 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20001de948c0 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20001de949c0 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20001de94ac0 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20001de94bc0 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20001de94cc0 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20001de94dc0 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20001de94ec0 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20001de94fc0 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20001de950c0 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20001de951c0 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20001de952c0 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20001de953c0 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20002b263f40 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20002b264040 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20002b26ad00 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20002b26af80 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20002b26b080 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20002b26b180 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20002b26b280 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20002b26b380 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20002b26b480 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20002b26b580 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20002b26b680 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20002b26b780 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20002b26b880 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20002b26b980 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20002b26ba80 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20002b26bb80 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20002b26bc80 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20002b26bd80 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20002b26be80 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20002b26bf80 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20002b26c080 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20002b26c180 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20002b26c280 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20002b26c380 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20002b26c480 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20002b26c580 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20002b26c680 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20002b26c780 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20002b26c880 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20002b26c980 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20002b26ca80 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20002b26cb80 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20002b26cc80 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20002b26cd80 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20002b26ce80 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20002b26cf80 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20002b26d080 
with size: 0.000244 MiB 00:04:17.190 element at address: 0x20002b26d180 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20002b26d280 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20002b26d380 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20002b26d480 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20002b26d580 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20002b26d680 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20002b26d780 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20002b26d880 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20002b26d980 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20002b26da80 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20002b26db80 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20002b26dc80 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20002b26dd80 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20002b26de80 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20002b26df80 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20002b26e080 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20002b26e180 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20002b26e280 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20002b26e380 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20002b26e480 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20002b26e580 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20002b26e680 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20002b26e780 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20002b26e880 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20002b26e980 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20002b26ea80 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20002b26eb80 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20002b26ec80 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20002b26ed80 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20002b26ee80 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20002b26ef80 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20002b26f080 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20002b26f180 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20002b26f280 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20002b26f380 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20002b26f480 with size: 0.000244 MiB 00:04:17.190 element at address: 0x20002b26f580 with size: 0.000244 MiB 00:04:17.191 element at address: 0x20002b26f680 with size: 0.000244 MiB 00:04:17.191 element at address: 0x20002b26f780 with size: 0.000244 MiB 00:04:17.191 element at address: 0x20002b26f880 with size: 0.000244 MiB 00:04:17.191 element at address: 0x20002b26f980 with size: 0.000244 MiB 00:04:17.191 element at address: 0x20002b26fa80 with size: 0.000244 MiB 00:04:17.191 element at address: 0x20002b26fb80 with size: 0.000244 MiB 00:04:17.191 element at address: 0x20002b26fc80 with size: 0.000244 MiB 00:04:17.191 element at address: 0x20002b26fd80 with size: 0.000244 MiB 00:04:17.191 element at address: 0x20002b26fe80 with size: 0.000244 MiB 00:04:17.191 list of memzone associated elements. 
size: 646.798706 MiB 00:04:17.191 element at address: 0x20001de954c0 with size: 211.416809 MiB 00:04:17.191 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:17.191 element at address: 0x20002b26ff80 with size: 157.562622 MiB 00:04:17.191 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:17.191 element at address: 0x200015ff4740 with size: 92.045105 MiB 00:04:17.191 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_58430_0 00:04:17.191 element at address: 0x2000009ff340 with size: 48.003113 MiB 00:04:17.191 associated memzone info: size: 48.002930 MiB name: MP_evtpool_58430_0 00:04:17.191 element at address: 0x200003fff340 with size: 48.003113 MiB 00:04:17.191 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58430_0 00:04:17.191 element at address: 0x2000071fdb40 with size: 36.008972 MiB 00:04:17.191 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58430_0 00:04:17.191 element at address: 0x20001c9be900 with size: 20.255615 MiB 00:04:17.191 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:17.191 element at address: 0x2000351feb00 with size: 18.005127 MiB 00:04:17.191 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:17.191 element at address: 0x2000005ffdc0 with size: 2.000549 MiB 00:04:17.191 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_58430 00:04:17.191 element at address: 0x200003bffdc0 with size: 2.000549 MiB 00:04:17.191 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58430 00:04:17.191 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:04:17.191 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58430 00:04:17.191 element at address: 0x20001c0fde00 with size: 1.008179 MiB 00:04:17.191 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:17.191 element at address: 0x20001c8bc780 with size: 1.008179 MiB 00:04:17.191 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:17.191 element at address: 0x20001bcfde00 with size: 1.008179 MiB 00:04:17.191 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:17.191 element at address: 0x200015ef25c0 with size: 1.008179 MiB 00:04:17.191 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:17.191 element at address: 0x200003eff100 with size: 1.000549 MiB 00:04:17.191 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58430 00:04:17.191 element at address: 0x200003affb80 with size: 1.000549 MiB 00:04:17.191 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58430 00:04:17.191 element at address: 0x20001c4ffd40 with size: 1.000549 MiB 00:04:17.191 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58430 00:04:17.191 element at address: 0x2000350fe8c0 with size: 1.000549 MiB 00:04:17.191 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58430 00:04:17.191 element at address: 0x200003a7f4c0 with size: 0.500549 MiB 00:04:17.191 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58430 00:04:17.191 element at address: 0x200003e7edc0 with size: 0.500549 MiB 00:04:17.191 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58430 00:04:17.191 element at address: 0x20001c07dac0 with size: 0.500549 MiB 00:04:17.191 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:17.191 element at address: 0x200015e72280 with size: 0.500549 MiB 00:04:17.191 associated memzone info: size: 0.500366 
MiB name: RG_MP_SCSI_TASK_Pool
00:04:17.191 element at address: 0x20001c87c440 with size: 0.250549 MiB
00:04:17.191 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:04:17.191 element at address: 0x200003a5e780 with size: 0.125549 MiB
00:04:17.191 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58430
00:04:17.191 element at address: 0x20001bcf5ac0 with size: 0.031799 MiB
00:04:17.191 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:04:17.191 element at address: 0x20002b264140 with size: 0.023804 MiB
00:04:17.191 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0
00:04:17.191 element at address: 0x200003a5a540 with size: 0.016174 MiB
00:04:17.191 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58430
00:04:17.191 element at address: 0x20002b26a2c0 with size: 0.002502 MiB
00:04:17.191 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool
00:04:17.191 element at address: 0x2000002d6080 with size: 0.000366 MiB
00:04:17.191 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58430
00:04:17.191 element at address: 0x200003aff800 with size: 0.000366 MiB
00:04:17.191 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58430
00:04:17.191 element at address: 0x200015dffd80 with size: 0.000366 MiB
00:04:17.191 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58430
00:04:17.191 element at address: 0x20002b26ae00 with size: 0.000366 MiB
00:04:17.191 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool
00:04:17.191 01:17:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:04:17.191 01:17:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58430
00:04:17.191 01:17:12 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 58430 ']'
00:04:17.191 01:17:12 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 58430
00:04:17.191 01:17:12 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname
00:04:17.191 01:17:12 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:04:17.191 01:17:12 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58430
00:04:17.191 01:17:12 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:04:17.191 killing process with pid 58430
00:04:17.191 01:17:12 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:04:17.191 01:17:12 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58430'
00:04:17.191 01:17:12 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 58430
00:04:17.191 01:17:12 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 58430
00:04:18.566
00:04:18.566 real 0m2.382s
00:04:18.566 user 0m2.381s
00:04:18.566 sys 0m0.365s
00:04:18.566 01:17:14 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:18.566 01:17:14 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:04:18.566 ************************************
00:04:18.566 END TEST dpdk_mem_utility
00:04:18.566 ************************************
00:04:18.566 01:17:14 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:04:18.566 01:17:14 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:18.566 01:17:14 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:18.566 01:17:14 -- common/autotest_common.sh@10 -- # set +x
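Editor's note: condensed, the dpdk_mem_utility test that just finished comes down to three commands against the running spdk_tgt (paths and RPC names exactly as exercised in this run; rpc_cmd in the trace is the harness wrapper around scripts/rpc.py):

  # Ask the target to dump its DPDK memory state to /tmp/spdk_mem_dump.txt.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
  # Summarize heaps, mempools and memzones from that dump.
  /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py
  # Per-element detail for heap 0 (the long free/busy element lists above).
  /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0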
00:04:18.566 ************************************ 00:04:18.566 START TEST event 00:04:18.566 ************************************ 00:04:18.566 01:17:14 event -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:18.566 * Looking for test storage... 00:04:18.566 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:04:18.566 01:17:14 event -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:18.566 01:17:14 event -- common/autotest_common.sh@1681 -- # lcov --version 00:04:18.566 01:17:14 event -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:18.566 01:17:14 event -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:18.566 01:17:14 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:18.566 01:17:14 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:18.566 01:17:14 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:18.566 01:17:14 event -- scripts/common.sh@336 -- # IFS=.-: 00:04:18.566 01:17:14 event -- scripts/common.sh@336 -- # read -ra ver1 00:04:18.566 01:17:14 event -- scripts/common.sh@337 -- # IFS=.-: 00:04:18.566 01:17:14 event -- scripts/common.sh@337 -- # read -ra ver2 00:04:18.566 01:17:14 event -- scripts/common.sh@338 -- # local 'op=<' 00:04:18.566 01:17:14 event -- scripts/common.sh@340 -- # ver1_l=2 00:04:18.566 01:17:14 event -- scripts/common.sh@341 -- # ver2_l=1 00:04:18.566 01:17:14 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:18.566 01:17:14 event -- scripts/common.sh@344 -- # case "$op" in 00:04:18.566 01:17:14 event -- scripts/common.sh@345 -- # : 1 00:04:18.566 01:17:14 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:18.566 01:17:14 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:18.566 01:17:14 event -- scripts/common.sh@365 -- # decimal 1 00:04:18.566 01:17:14 event -- scripts/common.sh@353 -- # local d=1 00:04:18.566 01:17:14 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:18.566 01:17:14 event -- scripts/common.sh@355 -- # echo 1 00:04:18.566 01:17:14 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:18.566 01:17:14 event -- scripts/common.sh@366 -- # decimal 2 00:04:18.566 01:17:14 event -- scripts/common.sh@353 -- # local d=2 00:04:18.566 01:17:14 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:18.566 01:17:14 event -- scripts/common.sh@355 -- # echo 2 00:04:18.566 01:17:14 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:18.566 01:17:14 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:18.566 01:17:14 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:18.566 01:17:14 event -- scripts/common.sh@368 -- # return 0 00:04:18.566 01:17:14 event -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:18.566 01:17:14 event -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:18.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:18.566 --rc genhtml_branch_coverage=1 00:04:18.566 --rc genhtml_function_coverage=1 00:04:18.566 --rc genhtml_legend=1 00:04:18.566 --rc geninfo_all_blocks=1 00:04:18.566 --rc geninfo_unexecuted_blocks=1 00:04:18.566 00:04:18.566 ' 00:04:18.566 01:17:14 event -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:18.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:18.566 --rc genhtml_branch_coverage=1 00:04:18.566 --rc genhtml_function_coverage=1 00:04:18.566 --rc genhtml_legend=1 00:04:18.566 --rc 
geninfo_all_blocks=1
00:04:18.566 --rc geninfo_unexecuted_blocks=1
00:04:18.566
00:04:18.566 '
00:04:18.566 01:17:14 event -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov
00:04:18.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:18.566 --rc genhtml_branch_coverage=1
00:04:18.566 --rc genhtml_function_coverage=1
00:04:18.566 --rc genhtml_legend=1
00:04:18.566 --rc geninfo_all_blocks=1
00:04:18.566 --rc geninfo_unexecuted_blocks=1
00:04:18.566
00:04:18.566 '
00:04:18.566 01:17:14 event -- common/autotest_common.sh@1695 -- # LCOV='lcov
00:04:18.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:18.566 --rc genhtml_branch_coverage=1
00:04:18.566 --rc genhtml_function_coverage=1
00:04:18.566 --rc genhtml_legend=1
00:04:18.566 --rc geninfo_all_blocks=1
00:04:18.566 --rc geninfo_unexecuted_blocks=1
00:04:18.566
00:04:18.566 '
00:04:18.566 01:17:14 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh
00:04:18.566 01:17:14 event -- bdev/nbd_common.sh@6 -- # set -e
00:04:18.566 01:17:14 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:04:18.566 01:17:14 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']'
00:04:18.566 01:17:14 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:18.566 01:17:14 event -- common/autotest_common.sh@10 -- # set +x
00:04:18.566 ************************************
00:04:18.566 START TEST event_perf
00:04:18.566 ************************************
00:04:18.566 01:17:14 event.event_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:04:18.825 Running I/O for 1 seconds...[2024-09-28 01:17:14.458207] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization...
00:04:18.825 [2024-09-28 01:17:14.458365] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58516 ]
00:04:18.825 [2024-09-28 01:17:14.599692] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4
00:04:18.825 [2024-09-28 01:17:14.747267] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:04:18.825 [2024-09-28 01:17:14.747301] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2
00:04:18.825 [2024-09-28 01:17:14.747372] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:04:18.825 Running I/O for 1 seconds...[2024-09-28 01:17:14.747406] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3
00:04:20.199
00:04:20.199 lcore 0: 195711
00:04:20.199 lcore 1: 195714
00:04:20.199 lcore 2: 195713
00:04:20.199 lcore 3: 195714
00:04:20.199 done.
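Editor's note: the per-lcore counts above are one second's worth of event round-trips on each of the four reactors. To reproduce the measurement outside the harness, the binary can be invoked directly with the same flags used in this run (-m sets the reactor core mask, -t the run time in seconds):

  # Four reactors (cores 0-3), one-second event throughput per lcore.
  /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1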
00:04:20.199
00:04:20.199 real 0m1.519s
00:04:20.199 user 0m4.328s
00:04:20.199 sys 0m0.074s
00:04:20.199 01:17:15 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:20.199 01:17:15 event.event_perf -- common/autotest_common.sh@10 -- # set +x
00:04:20.199 ************************************
00:04:20.199 END TEST event_perf
00:04:20.199 ************************************
00:04:20.199 01:17:15 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1
00:04:20.199 01:17:15 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:04:20.199 01:17:15 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:20.199 01:17:15 event -- common/autotest_common.sh@10 -- # set +x
00:04:20.199 ************************************
00:04:20.199 START TEST event_reactor
00:04:20.199 ************************************
00:04:20.199 01:17:15 event.event_reactor -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1
00:04:20.457 [2024-09-28 01:17:16.018589] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization...
00:04:20.457 [2024-09-28 01:17:16.018670] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58561 ]
00:04:20.457 [2024-09-28 01:17:16.161753] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:20.457 [2024-09-28 01:17:16.301211] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:04:21.864 test_start
00:04:21.864 oneshot
00:04:21.864 tick 100
00:04:21.864 tick 100
00:04:21.864 tick 250
00:04:21.864 tick 100
00:04:21.864 tick 100
00:04:21.864 tick 100
00:04:21.864 tick 250
00:04:21.864 tick 500
00:04:21.864 tick 100
00:04:21.864 tick 100
00:04:21.864 tick 250
00:04:21.864 tick 100
00:04:21.864 tick 100
00:04:21.864 test_end
00:04:21.864 ************************************
00:04:21.864 END TEST event_reactor
00:04:21.864 ************************************
00:04:21.864
00:04:21.864 real 0m1.514s
00:04:21.864 user 0m1.342s
00:04:21.864 sys 0m0.065s
00:04:21.864 01:17:17 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:21.864 01:17:17 event.event_reactor -- common/autotest_common.sh@10 -- # set +x
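Editor's note: the tick trace above is the reactor test exercising timed pollers on a single core; each line appears to name the poller that fired (a one-shot poller plus repeating pollers registered with periods 100, 250 and 500), which matches the relative frequencies in the trace. Both this test and the reactor_perf variant that runs next take the same flag, -t for run time in seconds (invocations as used by this run):

  # Timed-poller exercise, then raw event-throughput measurement, one second each.
  /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1
  /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1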
00:04:21.864 01:17:17 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1
00:04:21.864 01:17:17 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:04:21.864 01:17:17 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:21.864 01:17:17 event -- common/autotest_common.sh@10 -- # set +x
00:04:21.864 ************************************
00:04:21.864 START TEST event_reactor_perf
00:04:21.864 ************************************
00:04:21.864 01:17:17 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1
00:04:21.864 [2024-09-28 01:17:17.580123] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization...
00:04:21.864 [2024-09-28 01:17:17.580300] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58592 ]
00:04:22.123 [2024-09-28 01:17:17.722720] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:22.123 [2024-09-28 01:17:17.863354] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:04:23.497 test_start
00:04:23.497 test_end
00:04:23.497 Performance: 409735 events per second
00:04:23.497
00:04:23.497 real 0m1.512s
00:04:23.497 user 0m1.340s
00:04:23.497 sys 0m0.064s
00:04:23.497 ************************************
00:04:23.497 END TEST event_reactor_perf
00:04:23.497 ************************************
00:04:23.497 01:17:19 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:23.497 01:17:19 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x
00:04:23.497 01:17:19 event -- event/event.sh@49 -- # uname -s
00:04:23.497 01:17:19 event -- event/event.sh@49 -- # '[' Linux = Linux ']'
00:04:23.497 01:17:19 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh
00:04:23.497 01:17:19 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:23.497 01:17:19 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:23.497 01:17:19 event -- common/autotest_common.sh@10 -- # set +x
00:04:23.497 ************************************
00:04:23.497 START TEST event_scheduler
00:04:23.497 ************************************
00:04:23.497 01:17:19 event.event_scheduler -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh
00:04:23.497 * Looking for test storage...
00:04:23.497 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:04:23.497 01:17:19 event.event_scheduler -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:23.497 01:17:19 event.event_scheduler -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:23.497 01:17:19 event.event_scheduler -- common/autotest_common.sh@1681 -- # lcov --version 00:04:23.497 01:17:19 event.event_scheduler -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:23.497 01:17:19 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:23.497 01:17:19 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:23.497 01:17:19 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:23.497 01:17:19 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:04:23.497 01:17:19 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:04:23.497 01:17:19 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:04:23.498 01:17:19 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:04:23.498 01:17:19 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:04:23.498 01:17:19 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:04:23.498 01:17:19 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:04:23.498 01:17:19 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:23.498 01:17:19 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:04:23.498 01:17:19 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:04:23.498 01:17:19 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:23.498 01:17:19 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:23.498 01:17:19 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:04:23.498 01:17:19 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:04:23.498 01:17:19 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:23.498 01:17:19 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:04:23.498 01:17:19 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:04:23.498 01:17:19 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:04:23.498 01:17:19 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:04:23.498 01:17:19 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:23.498 01:17:19 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:04:23.498 01:17:19 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:04:23.498 01:17:19 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:23.498 01:17:19 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:23.498 01:17:19 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:04:23.498 01:17:19 event.event_scheduler -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:23.498 01:17:19 event.event_scheduler -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:23.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.498 --rc genhtml_branch_coverage=1 00:04:23.498 --rc genhtml_function_coverage=1 00:04:23.498 --rc genhtml_legend=1 00:04:23.498 --rc geninfo_all_blocks=1 00:04:23.498 --rc geninfo_unexecuted_blocks=1 00:04:23.498 00:04:23.498 ' 00:04:23.498 01:17:19 event.event_scheduler -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:23.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.498 --rc genhtml_branch_coverage=1 00:04:23.498 --rc genhtml_function_coverage=1 00:04:23.498 --rc genhtml_legend=1 00:04:23.498 --rc geninfo_all_blocks=1 00:04:23.498 --rc geninfo_unexecuted_blocks=1 00:04:23.498 00:04:23.498 ' 00:04:23.498 01:17:19 event.event_scheduler -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:23.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.498 --rc genhtml_branch_coverage=1 00:04:23.498 --rc genhtml_function_coverage=1 00:04:23.498 --rc genhtml_legend=1 00:04:23.498 --rc geninfo_all_blocks=1 00:04:23.498 --rc geninfo_unexecuted_blocks=1 00:04:23.498 00:04:23.498 ' 00:04:23.498 01:17:19 event.event_scheduler -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:23.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.498 --rc genhtml_branch_coverage=1 00:04:23.498 --rc genhtml_function_coverage=1 00:04:23.498 --rc genhtml_legend=1 00:04:23.498 --rc geninfo_all_blocks=1 00:04:23.498 --rc geninfo_unexecuted_blocks=1 00:04:23.498 00:04:23.498 ' 00:04:23.498 01:17:19 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:23.498 01:17:19 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58668 00:04:23.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
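Editor's note: the launch that follows is the interesting part of the scheduler test. The app starts on four cores (-m 0xF) with the main lcore moved to core 2 (-p 0x2) and with --wait-for-rpc, so framework initialization is deferred until a scheduler is selected over RPC. A condensed sketch of that startup, flags exactly as used below:

  # Start the scheduler test app with deferred init, then wait for its RPC socket.
  /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &
  scheduler_pid=$!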
00:04:23.498 01:17:19 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT
00:04:23.498 01:17:19 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f
00:04:23.498 01:17:19 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58668
00:04:23.498 01:17:19 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 58668 ']'
00:04:23.498 01:17:19 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:23.498 01:17:19 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100
00:04:23.498 01:17:19 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:23.498 01:17:19 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable
00:04:23.498 01:17:19 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:04:23.498 [2024-09-28 01:17:19.329765] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization...
00:04:23.498 [2024-09-28 01:17:19.329889] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58668 ]
00:04:23.757 [2024-09-28 01:17:19.481513] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4
00:04:23.757 [2024-09-28 01:17:19.661999] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:04:23.757 [2024-09-28 01:17:19.662297] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:04:23.757 [2024-09-28 01:17:19.662546] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2
00:04:23.757 [2024-09-28 01:17:19.662567] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3
00:04:24.323 01:17:20 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:04:24.323 01:17:20 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0
00:04:24.323 01:17:20 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic
00:04:24.323 01:17:20 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:24.323 01:17:20 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:04:24.323 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:04:24.323 POWER: Cannot set governor of lcore 0 to userspace
00:04:24.323 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:04:24.323 POWER: Cannot set governor of lcore 0 to performance
00:04:24.323 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:04:24.323 POWER: Cannot set governor of lcore 0 to userspace
00:04:24.323 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:04:24.323 POWER: Cannot set governor of lcore 0 to userspace
00:04:24.323 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0
00:04:24.323 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory
00:04:24.323 POWER: Unable to set Power Management Environment for lcore 0
00:04:24.323 [2024-09-28 01:17:20.171917] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0
00:04:24.323 [2024-09-28 01:17:20.171944] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0
00:04:24.323 [2024-09-28 01:17:20.171953] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor
00:04:24.323 [2024-09-28 01:17:20.171969] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20
00:04:24.323 [2024-09-28 01:17:20.171977] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80
00:04:24.323 [2024-09-28 01:17:20.171986] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95
00:04:24.323 01:17:20 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:24.323 01:17:20 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init
00:04:24.323 01:17:20 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:24.323 01:17:20 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:04:24.583 [2024-09-28 01:17:20.391290] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started.
00:04:24.583 01:17:20 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:24.583 01:17:20 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread
00:04:24.583 01:17:20 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:24.583 01:17:20 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:24.583 01:17:20 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:04:24.584 ************************************
00:04:24.584 START TEST scheduler_create_thread
00:04:24.584 ************************************
00:04:24.584 01:17:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread
00:04:24.584 01:17:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
00:04:24.584 01:17:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:24.584 01:17:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:24.584 2
00:04:24.584 01:17:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:24.584 01:17:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100
00:04:24.584 01:17:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:24.584 01:17:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:24.584 3
00:04:24.584 01:17:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:24.584 01:17:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100
00:04:24.584 01:17:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:24.584 01:17:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:24.584 4
00:04:24.584 01:17:20
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:24.584 01:17:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:24.584 01:17:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:24.584 01:17:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:24.584 5 00:04:24.584 01:17:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:24.584 01:17:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:24.584 01:17:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:24.584 01:17:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:24.584 6 00:04:24.584 01:17:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:24.584 01:17:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:24.584 01:17:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:24.584 01:17:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:24.584 7 00:04:24.584 01:17:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:24.584 01:17:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:24.584 01:17:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:24.584 01:17:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:24.584 8 00:04:24.584 01:17:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:24.584 01:17:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:24.584 01:17:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:24.584 01:17:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:24.584 9 00:04:24.584 01:17:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:24.584 01:17:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:24.584 01:17:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:24.584 01:17:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:24.584 10 00:04:24.584 01:17:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:24.584 01:17:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:24.584 01:17:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:24.584 01:17:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:24.584 01:17:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:24.584 01:17:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:24.584 01:17:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:24.584 01:17:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:24.584 01:17:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:25.517 01:17:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:25.517 01:17:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:25.517 01:17:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:25.517 01:17:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:26.890 01:17:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:26.890 01:17:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:26.891 01:17:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:26.891 01:17:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:26.891 01:17:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:28.288 ************************************ 00:04:28.288 END TEST scheduler_create_thread 00:04:28.288 ************************************ 00:04:28.288 01:17:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:28.288 00:04:28.288 real 0m3.378s 00:04:28.288 user 0m0.016s 00:04:28.288 sys 0m0.005s 00:04:28.288 01:17:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:28.288 01:17:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:28.288 01:17:23 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:28.288 01:17:23 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58668 00:04:28.288 01:17:23 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 58668 ']' 00:04:28.288 01:17:23 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 58668 00:04:28.288 01:17:23 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:04:28.288 01:17:23 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:28.288 01:17:23 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58668 00:04:28.288 killing process with pid 58668 00:04:28.288 01:17:23 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:04:28.288 01:17:23 
event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:04:28.288 01:17:23 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58668' 00:04:28.288 01:17:23 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 58668 00:04:28.288 01:17:23 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 58668 00:04:28.288 [2024-09-28 01:17:24.157801] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:04:29.223 00:04:29.223 real 0m5.711s 00:04:29.223 user 0m11.317s 00:04:29.223 sys 0m0.328s 00:04:29.223 01:17:24 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:29.223 ************************************ 00:04:29.223 END TEST event_scheduler 00:04:29.223 ************************************ 00:04:29.223 01:17:24 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:29.223 01:17:24 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:29.223 01:17:24 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:29.223 01:17:24 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:29.223 01:17:24 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:29.223 01:17:24 event -- common/autotest_common.sh@10 -- # set +x 00:04:29.223 ************************************ 00:04:29.223 START TEST app_repeat 00:04:29.223 ************************************ 00:04:29.223 01:17:24 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:04:29.223 01:17:24 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:29.223 01:17:24 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:29.223 01:17:24 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:29.223 01:17:24 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:29.223 01:17:24 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:29.223 01:17:24 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:29.223 01:17:24 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:29.223 Process app_repeat pid: 58779 00:04:29.223 spdk_app_start Round 0 00:04:29.223 01:17:24 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58779 00:04:29.223 01:17:24 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:29.223 01:17:24 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58779' 00:04:29.223 01:17:24 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:29.223 01:17:24 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:29.223 01:17:24 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58779 /var/tmp/spdk-nbd.sock 00:04:29.223 01:17:24 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58779 ']' 00:04:29.223 01:17:24 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:29.223 01:17:24 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:29.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
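The scheduler test that just finished drives everything over SPDK's JSON-RPC interface: set_opts tuned the dynamic scheduler (load limit 20, core limit 80, core busy 95), framework_start_init completed init, and a test-only RPC plugin created, retuned, and deleted threads. A minimal sketch of that same sequence issued by hand is below; it assumes a running SPDK app with the dynamic scheduler selected and that the scheduler_plugin module from the test tree is importable by rpc.py, and the id-capture mirrors the thread_id=11 / thread_id=12 substitutions in the trace above.

# Sketch only, not the harness: replay the scheduler test's RPC sequence.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$rpc framework_start_init    # finish init; the dynamic scheduler starts rebalancing

# Four busy threads pinned one-per-core (-m core mask, -a active percentage),
# then four idle pinned threads, matching tests 2-9 in the trace.
for mask in 0x1 0x2 0x4 0x8; do
  $rpc --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m "$mask" -a 100
  $rpc --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m "$mask" -a 0
done

# Unpinned threads: create one and retune it to 50% active, create another and
# delete it. The create RPC prints the new thread id on stdout, which is how
# the harness captured thread_id=11 and thread_id=12 above.
tid=$($rpc --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)
$rpc --plugin scheduler_plugin scheduler_thread_set_active "$tid" 50
tid=$($rpc --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100)
$rpc --plugin scheduler_plugin scheduler_thread_delete "$tid"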
00:04:29.223 01:17:24 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:29.223 01:17:24 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:29.223 01:17:24 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:29.223 01:17:24 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:29.223 [2024-09-28 01:17:24.944682] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:04:29.223 [2024-09-28 01:17:24.944791] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58779 ] 00:04:29.223 [2024-09-28 01:17:25.093874] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:29.484 [2024-09-28 01:17:25.277291] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:29.484 [2024-09-28 01:17:25.277301] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:04:30.053 01:17:25 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:30.053 01:17:25 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:04:30.053 01:17:25 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:30.313 Malloc0 00:04:30.313 01:17:26 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:30.573 Malloc1 00:04:30.573 01:17:26 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:30.573 01:17:26 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:30.573 01:17:26 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:30.573 01:17:26 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:30.573 01:17:26 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:30.573 01:17:26 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:30.573 01:17:26 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:30.573 01:17:26 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:30.573 01:17:26 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:30.573 01:17:26 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:30.573 01:17:26 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:30.573 01:17:26 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:30.573 01:17:26 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:30.573 01:17:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:30.573 01:17:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:30.573 01:17:26 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:30.834 /dev/nbd0 00:04:30.834 01:17:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:30.834 01:17:26 
event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:30.834 01:17:26 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:04:30.834 01:17:26 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:04:30.834 01:17:26 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:04:30.834 01:17:26 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:04:30.834 01:17:26 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:04:30.834 01:17:26 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:04:30.834 01:17:26 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:04:30.834 01:17:26 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:04:30.834 01:17:26 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:30.834 1+0 records in 00:04:30.834 1+0 records out 00:04:30.834 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000587854 s, 7.0 MB/s 00:04:30.834 01:17:26 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:30.834 01:17:26 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:04:30.834 01:17:26 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:30.834 01:17:26 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:04:30.834 01:17:26 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:04:30.834 01:17:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:30.834 01:17:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:30.834 01:17:26 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:31.094 /dev/nbd1 00:04:31.094 01:17:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:31.094 01:17:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:31.094 01:17:26 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:04:31.094 01:17:26 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:04:31.094 01:17:26 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:04:31.094 01:17:26 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:04:31.094 01:17:26 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:04:31.094 01:17:26 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:04:31.094 01:17:26 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:04:31.094 01:17:26 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:04:31.094 01:17:26 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:31.094 1+0 records in 00:04:31.094 1+0 records out 00:04:31.094 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000293621 s, 13.9 MB/s 00:04:31.094 01:17:26 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:31.094 01:17:26 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:04:31.094 01:17:26 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:31.094 01:17:26 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:04:31.094 01:17:26 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:04:31.094 01:17:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:31.094 01:17:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:31.094 01:17:26 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:31.094 01:17:26 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:31.094 01:17:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:31.094 01:17:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:31.094 { 00:04:31.094 "nbd_device": "/dev/nbd0", 00:04:31.094 "bdev_name": "Malloc0" 00:04:31.094 }, 00:04:31.094 { 00:04:31.094 "nbd_device": "/dev/nbd1", 00:04:31.094 "bdev_name": "Malloc1" 00:04:31.094 } 00:04:31.094 ]' 00:04:31.094 01:17:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:31.094 { 00:04:31.094 "nbd_device": "/dev/nbd0", 00:04:31.094 "bdev_name": "Malloc0" 00:04:31.094 }, 00:04:31.094 { 00:04:31.094 "nbd_device": "/dev/nbd1", 00:04:31.094 "bdev_name": "Malloc1" 00:04:31.094 } 00:04:31.094 ]' 00:04:31.094 01:17:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:31.353 01:17:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:31.353 /dev/nbd1' 00:04:31.353 01:17:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:31.353 01:17:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:31.353 /dev/nbd1' 00:04:31.353 01:17:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:31.353 01:17:27 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:31.353 01:17:27 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:31.353 01:17:27 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:31.353 01:17:27 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:31.353 01:17:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:31.353 01:17:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:31.353 01:17:27 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:31.353 01:17:27 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:31.353 01:17:27 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:31.353 01:17:27 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:31.353 256+0 records in 00:04:31.353 256+0 records out 00:04:31.353 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00676613 s, 155 MB/s 00:04:31.353 01:17:27 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:31.353 01:17:27 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:31.353 256+0 records in 00:04:31.353 256+0 records out 00:04:31.353 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0201136 s, 52.1 MB/s 00:04:31.353 01:17:27 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:31.353 01:17:27 
event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:31.353 256+0 records in 00:04:31.353 256+0 records out 00:04:31.353 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0194532 s, 53.9 MB/s 00:04:31.353 01:17:27 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:31.353 01:17:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:31.353 01:17:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:31.353 01:17:27 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:31.353 01:17:27 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:31.353 01:17:27 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:31.353 01:17:27 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:31.353 01:17:27 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:31.353 01:17:27 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:31.353 01:17:27 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:31.353 01:17:27 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:31.353 01:17:27 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:31.353 01:17:27 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:31.353 01:17:27 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:31.353 01:17:27 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:31.353 01:17:27 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:31.353 01:17:27 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:31.353 01:17:27 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:31.353 01:17:27 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:31.612 01:17:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:31.612 01:17:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:31.612 01:17:27 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:31.612 01:17:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:31.612 01:17:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:31.612 01:17:27 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:31.612 01:17:27 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:31.612 01:17:27 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:31.612 01:17:27 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:31.612 01:17:27 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:31.870 01:17:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:31.870 01:17:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:31.870 01:17:27 event.app_repeat -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd1 00:04:31.870 01:17:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:31.870 01:17:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:31.870 01:17:27 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:31.870 01:17:27 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:31.870 01:17:27 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:31.870 01:17:27 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:31.870 01:17:27 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:31.870 01:17:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:31.870 01:17:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:31.870 01:17:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:31.870 01:17:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:31.870 01:17:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:32.128 01:17:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:32.128 01:17:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:32.128 01:17:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:32.128 01:17:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:32.128 01:17:27 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:32.128 01:17:27 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:32.129 01:17:27 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:32.129 01:17:27 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:32.129 01:17:27 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:32.387 01:17:28 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:32.952 [2024-09-28 01:17:28.804117] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:33.211 [2024-09-28 01:17:28.937707] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:04:33.211 [2024-09-28 01:17:28.937916] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:33.211 [2024-09-28 01:17:29.040561] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:33.211 [2024-09-28 01:17:29.040709] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:35.736 spdk_app_start Round 1 00:04:35.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:35.736 01:17:31 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:35.737 01:17:31 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:35.737 01:17:31 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58779 /var/tmp/spdk-nbd.sock 00:04:35.737 01:17:31 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58779 ']' 00:04:35.737 01:17:31 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:35.737 01:17:31 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:35.737 01:17:31 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
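Round 0 above is one pass of the nbd round-trip that app_repeat exercises four times: create a malloc bdev, export it as a kernel block device, write random data through the export, and verify it byte-for-byte. A condensed sketch of that round follows, assuming app_repeat is already serving RPC on /var/tmp/spdk-nbd.sock and the nbd kernel module is loaded (the harness checked with modprobe -n nbd beforehand); the temp-file path is illustrative.

# Sketch of one app_repeat round, condensed from the nbd_common.sh calls traced above.
rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock "$@"; }
tmp=/tmp/nbdrandtest    # the harness keeps this under test/event/; path here is an assumption

bdev0=$(rpc bdev_malloc_create 64 4096)      # 64 MiB bdev, 4 KiB blocks -> prints "Malloc0"
rpc nbd_start_disk "$bdev0" /dev/nbd0        # export the bdev as a kernel block device

dd if=/dev/urandom of="$tmp" bs=4096 count=256            # 1 MiB of random data
dd if="$tmp" of=/dev/nbd0 bs=4096 count=256 oflag=direct  # write it through the export
cmp -b -n 1M "$tmp" /dev/nbd0                # verify the device reads back identically

rpc nbd_stop_disk /dev/nbd0
rm -f "$tmp"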
00:04:35.737 01:17:31 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:35.737 01:17:31 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:35.737 01:17:31 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:35.737 01:17:31 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:04:35.737 01:17:31 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:35.737 Malloc0 00:04:35.737 01:17:31 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:35.995 Malloc1 00:04:35.995 01:17:31 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:35.995 01:17:31 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:35.995 01:17:31 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:35.995 01:17:31 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:35.995 01:17:31 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:35.995 01:17:31 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:35.995 01:17:31 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:35.995 01:17:31 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:35.995 01:17:31 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:35.995 01:17:31 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:35.995 01:17:31 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:35.995 01:17:31 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:35.995 01:17:31 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:35.995 01:17:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:35.995 01:17:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:35.995 01:17:31 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:35.995 /dev/nbd0 00:04:36.253 01:17:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:36.253 01:17:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:36.253 01:17:31 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:04:36.253 01:17:31 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:04:36.253 01:17:31 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:04:36.253 01:17:31 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:04:36.253 01:17:31 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:04:36.253 01:17:31 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:04:36.253 01:17:31 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:04:36.253 01:17:31 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:04:36.253 01:17:31 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:36.253 1+0 records in 00:04:36.253 1+0 records out 
00:04:36.253 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000281839 s, 14.5 MB/s 00:04:36.253 01:17:31 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:36.253 01:17:31 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:04:36.253 01:17:31 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:36.253 01:17:31 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:04:36.253 01:17:31 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:04:36.253 01:17:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:36.253 01:17:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:36.254 01:17:31 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:36.254 /dev/nbd1 00:04:36.254 01:17:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:36.254 01:17:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:36.254 01:17:32 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:04:36.254 01:17:32 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:04:36.254 01:17:32 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:04:36.254 01:17:32 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:04:36.254 01:17:32 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:04:36.254 01:17:32 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:04:36.254 01:17:32 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:04:36.254 01:17:32 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:04:36.254 01:17:32 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:36.254 1+0 records in 00:04:36.254 1+0 records out 00:04:36.254 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000182576 s, 22.4 MB/s 00:04:36.254 01:17:32 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:36.254 01:17:32 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:04:36.254 01:17:32 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:36.511 01:17:32 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:04:36.511 01:17:32 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:04:36.511 01:17:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:36.511 01:17:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:36.511 01:17:32 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:36.511 01:17:32 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:36.511 01:17:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:36.511 01:17:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:36.511 { 00:04:36.511 "nbd_device": "/dev/nbd0", 00:04:36.511 "bdev_name": "Malloc0" 00:04:36.511 }, 00:04:36.511 { 00:04:36.511 "nbd_device": "/dev/nbd1", 00:04:36.511 "bdev_name": "Malloc1" 00:04:36.511 } 
00:04:36.512 ]' 00:04:36.512 01:17:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:36.512 { 00:04:36.512 "nbd_device": "/dev/nbd0", 00:04:36.512 "bdev_name": "Malloc0" 00:04:36.512 }, 00:04:36.512 { 00:04:36.512 "nbd_device": "/dev/nbd1", 00:04:36.512 "bdev_name": "Malloc1" 00:04:36.512 } 00:04:36.512 ]' 00:04:36.512 01:17:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:36.512 01:17:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:36.512 /dev/nbd1' 00:04:36.512 01:17:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:36.512 /dev/nbd1' 00:04:36.512 01:17:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:36.512 01:17:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:36.512 01:17:32 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:36.512 01:17:32 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:36.512 01:17:32 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:36.512 01:17:32 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:36.512 01:17:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:36.512 01:17:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:36.512 01:17:32 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:36.512 01:17:32 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:36.512 01:17:32 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:36.512 01:17:32 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:36.512 256+0 records in 00:04:36.512 256+0 records out 00:04:36.512 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00665449 s, 158 MB/s 00:04:36.512 01:17:32 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:36.512 01:17:32 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:36.770 256+0 records in 00:04:36.770 256+0 records out 00:04:36.770 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0181518 s, 57.8 MB/s 00:04:36.770 01:17:32 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:36.770 01:17:32 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:36.770 256+0 records in 00:04:36.770 256+0 records out 00:04:36.770 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0181803 s, 57.7 MB/s 00:04:36.770 01:17:32 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:36.770 01:17:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:36.770 01:17:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:36.770 01:17:32 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:36.770 01:17:32 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:36.770 01:17:32 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:36.770 01:17:32 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:36.770 01:17:32 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:36.770 01:17:32 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:36.770 01:17:32 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:36.770 01:17:32 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:36.770 01:17:32 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:36.770 01:17:32 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:36.770 01:17:32 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:36.770 01:17:32 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:36.770 01:17:32 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:36.770 01:17:32 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:36.770 01:17:32 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:36.770 01:17:32 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:36.770 01:17:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:37.028 01:17:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:37.028 01:17:32 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:37.028 01:17:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:37.028 01:17:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:37.028 01:17:32 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:37.028 01:17:32 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:37.028 01:17:32 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:37.028 01:17:32 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:37.028 01:17:32 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:37.028 01:17:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:37.028 01:17:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:37.028 01:17:32 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:37.028 01:17:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:37.028 01:17:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:37.028 01:17:32 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:37.028 01:17:32 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:37.028 01:17:32 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:37.028 01:17:32 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:37.028 01:17:32 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:37.028 01:17:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:37.286 01:17:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:37.286 01:17:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:37.286 01:17:33 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:04:37.286 01:17:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:37.286 01:17:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:37.286 01:17:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:37.286 01:17:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:37.286 01:17:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:37.286 01:17:33 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:37.286 01:17:33 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:37.286 01:17:33 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:37.286 01:17:33 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:37.286 01:17:33 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:37.556 01:17:33 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:38.170 [2024-09-28 01:17:34.064768] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:38.428 [2024-09-28 01:17:34.198995] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:04:38.428 [2024-09-28 01:17:34.199146] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:38.428 [2024-09-28 01:17:34.301978] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:38.428 [2024-09-28 01:17:34.302118] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:40.956 spdk_app_start Round 2 00:04:40.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:40.956 01:17:36 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:40.956 01:17:36 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:40.956 01:17:36 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58779 /var/tmp/spdk-nbd.sock 00:04:40.956 01:17:36 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58779 ']' 00:04:40.956 01:17:36 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:40.956 01:17:36 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:40.956 01:17:36 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
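Between rounds the harness proves every export is torn down by counting the nbd_get_disks output, which is why the trace flips from a two-entry JSON array to '[]' and count=0. A sketch of that count logic is below, reusing the rpc helper from the previous sketch; note grep -c exits non-zero when it matches nothing, hence the true fallback that shows up as the '-- # true' line in the trace.

# Sketch of the nbd_get_count logic traced above.
disks_json=$(rpc nbd_get_disks)                       # '[]' once both disks are stopped
names=$(echo "$disks_json" | jq -r '.[] | .nbd_device')
count=$(echo "$names" | grep -c /dev/nbd || true)
if [ "$count" -ne 0 ]; then
  echo "expected no exports after teardown, found $count" >&2
fi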
00:04:40.956 01:17:36 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:40.956 01:17:36 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:40.956 01:17:36 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:40.956 01:17:36 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:04:40.956 01:17:36 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:40.956 Malloc0 00:04:40.956 01:17:36 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:41.215 Malloc1 00:04:41.215 01:17:37 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:41.215 01:17:37 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:41.215 01:17:37 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:41.215 01:17:37 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:41.215 01:17:37 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:41.215 01:17:37 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:41.215 01:17:37 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:41.215 01:17:37 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:41.215 01:17:37 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:41.215 01:17:37 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:41.215 01:17:37 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:41.215 01:17:37 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:41.215 01:17:37 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:41.215 01:17:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:41.215 01:17:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:41.215 01:17:37 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:41.473 /dev/nbd0 00:04:41.473 01:17:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:41.473 01:17:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:41.473 01:17:37 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:04:41.473 01:17:37 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:04:41.473 01:17:37 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:04:41.473 01:17:37 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:04:41.473 01:17:37 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:04:41.473 01:17:37 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:04:41.473 01:17:37 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:04:41.473 01:17:37 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:04:41.473 01:17:37 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:41.473 1+0 records in 00:04:41.473 1+0 records out 
00:04:41.473 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000561411 s, 7.3 MB/s 00:04:41.473 01:17:37 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:41.473 01:17:37 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:04:41.473 01:17:37 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:41.473 01:17:37 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:04:41.473 01:17:37 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:04:41.473 01:17:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:41.473 01:17:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:41.473 01:17:37 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:41.730 /dev/nbd1 00:04:41.730 01:17:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:41.730 01:17:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:41.730 01:17:37 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:04:41.730 01:17:37 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:04:41.730 01:17:37 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:04:41.730 01:17:37 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:04:41.730 01:17:37 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:04:41.730 01:17:37 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:04:41.730 01:17:37 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:04:41.730 01:17:37 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:04:41.730 01:17:37 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:41.730 1+0 records in 00:04:41.730 1+0 records out 00:04:41.730 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000191344 s, 21.4 MB/s 00:04:41.730 01:17:37 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:41.730 01:17:37 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:04:41.731 01:17:37 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:41.731 01:17:37 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:04:41.731 01:17:37 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:04:41.731 01:17:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:41.731 01:17:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:41.731 01:17:37 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:41.731 01:17:37 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:41.731 01:17:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:41.988 01:17:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:41.988 { 00:04:41.988 "nbd_device": "/dev/nbd0", 00:04:41.988 "bdev_name": "Malloc0" 00:04:41.988 }, 00:04:41.988 { 00:04:41.988 "nbd_device": "/dev/nbd1", 00:04:41.988 "bdev_name": "Malloc1" 00:04:41.988 } 
00:04:41.988 ]' 00:04:41.988 01:17:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:41.988 { 00:04:41.988 "nbd_device": "/dev/nbd0", 00:04:41.988 "bdev_name": "Malloc0" 00:04:41.988 }, 00:04:41.988 { 00:04:41.988 "nbd_device": "/dev/nbd1", 00:04:41.988 "bdev_name": "Malloc1" 00:04:41.988 } 00:04:41.988 ]' 00:04:41.988 01:17:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:41.988 01:17:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:41.988 /dev/nbd1' 00:04:41.988 01:17:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:41.988 /dev/nbd1' 00:04:41.988 01:17:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:41.988 01:17:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:41.988 01:17:37 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:41.988 01:17:37 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:41.988 01:17:37 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:41.988 01:17:37 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:41.988 01:17:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:41.988 01:17:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:41.988 01:17:37 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:41.989 01:17:37 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:41.989 01:17:37 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:41.989 01:17:37 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:41.989 256+0 records in 00:04:41.989 256+0 records out 00:04:41.989 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00489344 s, 214 MB/s 00:04:41.989 01:17:37 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:41.989 01:17:37 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:41.989 256+0 records in 00:04:41.989 256+0 records out 00:04:41.989 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.015969 s, 65.7 MB/s 00:04:41.989 01:17:37 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:41.989 01:17:37 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:41.989 256+0 records in 00:04:41.989 256+0 records out 00:04:41.989 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0171602 s, 61.1 MB/s 00:04:41.989 01:17:37 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:41.989 01:17:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:41.989 01:17:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:41.989 01:17:37 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:41.989 01:17:37 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:41.989 01:17:37 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:41.989 01:17:37 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:41.989 01:17:37 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:04:41.989 01:17:37 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:41.989 01:17:37 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:41.989 01:17:37 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:41.989 01:17:37 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:41.989 01:17:37 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:41.989 01:17:37 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:41.989 01:17:37 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:41.989 01:17:37 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:41.989 01:17:37 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:41.989 01:17:37 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:41.989 01:17:37 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:42.247 01:17:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:42.247 01:17:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:42.247 01:17:38 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:42.247 01:17:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:42.247 01:17:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:42.247 01:17:38 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:42.247 01:17:38 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:42.247 01:17:38 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:42.247 01:17:38 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:42.247 01:17:38 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:42.522 01:17:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:42.522 01:17:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:42.522 01:17:38 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:42.522 01:17:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:42.522 01:17:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:42.522 01:17:38 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:42.522 01:17:38 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:42.522 01:17:38 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:42.522 01:17:38 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:42.522 01:17:38 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:42.522 01:17:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:42.781 01:17:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:42.781 01:17:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:42.781 01:17:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # 
echo '[]' 00:04:42.781 01:17:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:42.781 01:17:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:42.781 01:17:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:42.781 01:17:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:42.781 01:17:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:42.781 01:17:38 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:42.781 01:17:38 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:42.781 01:17:38 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:42.781 01:17:38 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:42.781 01:17:38 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:43.040 01:17:38 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:43.606 [2024-09-28 01:17:39.423204] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:43.865 [2024-09-28 01:17:39.557392] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:04:43.865 [2024-09-28 01:17:39.557560] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.865 [2024-09-28 01:17:39.657672] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:43.865 [2024-09-28 01:17:39.657729] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:46.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:46.397 01:17:41 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58779 /var/tmp/spdk-nbd.sock 00:04:46.397 01:17:41 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58779 ']' 00:04:46.397 01:17:41 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:46.397 01:17:41 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:46.397 01:17:41 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
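The teardown at the end of each round is two-layered: spdk_kill_instance SIGTERM asks the app to exit over RPC, and the killprocess helper signals and reaps the pid. A simplified sketch of that pattern is below; the real helper in autotest_common.sh also special-cases a sudo wrapper (the reactor_0 = sudo check in the trace), which this sketch assumes away.

# Sketch of the shutdown path traced above.
rpc spdk_kill_instance SIGTERM               # graceful stop between rounds

killprocess() {
  local pid=$1
  kill -0 "$pid" 2>/dev/null || return 0     # nothing left to kill
  # Assumes pid is the app itself, not a sudo wrapper around it.
  kill "$pid"
  wait "$pid" 2>/dev/null || true            # reap it if it is our child
}
killprocess 58779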
00:04:46.397 01:17:41 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:46.397 01:17:41 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:46.397 01:17:42 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:46.397 01:17:42 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:04:46.397 01:17:42 event.app_repeat -- event/event.sh@39 -- # killprocess 58779 00:04:46.397 01:17:42 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 58779 ']' 00:04:46.397 01:17:42 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 58779 00:04:46.397 01:17:42 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:04:46.397 01:17:42 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:46.397 01:17:42 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58779 00:04:46.397 killing process with pid 58779 00:04:46.397 01:17:42 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:46.397 01:17:42 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:46.397 01:17:42 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58779' 00:04:46.397 01:17:42 event.app_repeat -- common/autotest_common.sh@969 -- # kill 58779 00:04:46.397 01:17:42 event.app_repeat -- common/autotest_common.sh@974 -- # wait 58779 00:04:46.723 spdk_app_start is called in Round 0. 00:04:46.723 Shutdown signal received, stop current app iteration 00:04:46.723 Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 reinitialization... 00:04:46.723 spdk_app_start is called in Round 1. 00:04:46.723 Shutdown signal received, stop current app iteration 00:04:46.723 Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 reinitialization... 00:04:46.723 spdk_app_start is called in Round 2. 00:04:46.723 Shutdown signal received, stop current app iteration 00:04:46.723 Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 reinitialization... 00:04:46.723 spdk_app_start is called in Round 3. 00:04:46.723 Shutdown signal received, stop current app iteration 00:04:46.723 01:17:42 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:04:46.723 01:17:42 event.app_repeat -- event/event.sh@42 -- # return 0 00:04:46.723 00:04:46.723 real 0m17.716s 00:04:46.723 user 0m38.199s 00:04:46.723 sys 0m2.083s 00:04:46.723 ************************************ 00:04:46.723 END TEST app_repeat 00:04:46.723 ************************************ 00:04:46.723 01:17:42 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:46.723 01:17:42 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:46.993 01:17:42 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:04:46.993 01:17:42 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:04:46.993 01:17:42 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:46.993 01:17:42 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:46.993 01:17:42 event -- common/autotest_common.sh@10 -- # set +x 00:04:46.993 ************************************ 00:04:46.993 START TEST cpu_locks 00:04:46.993 ************************************ 00:04:46.993 01:17:42 event.cpu_locks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:04:46.993 * Looking for test storage... 
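
killprocess, traced here for the app_repeat pid 58779 and again for every spdk_tgt later in the section, has a fixed shape: validate the pid argument, confirm the process is alive, resolve its command name, make sure it is not a sudo wrapper, then kill and reap it. A sketch assembled from the traced checks in autotest_common.sh; the sudo branch and the non-Linux ps variant are not exercised in this run, so their bodies are assumptions:

killprocess() {
    [ -z "$1" ] && return 1              # traced as '[' -z 58779 ']'
    kill -0 "$1" || return 0             # nothing to do if the pid is gone
    local process_name
    if [ "$(uname)" = Linux ]; then
        process_name=$(ps --no-headers -o comm= "$1")
    fi
    if [ "$process_name" = sudo ]; then
        # assumption: kill the wrapped child rather than the sudo parent
        kill "$(pgrep -P "$1")"
    else
        echo "killing process with pid $1"
        kill "$1"
    fi
    wait "$1" || true                    # reap; matches the traced wait step
}

In this log process_name resolves to reactor_0 every time, so the plain kill/wait path runs and each test ends once the reactor exits.
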
00:04:46.993 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:04:46.993 01:17:42 event.cpu_locks -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:46.993 01:17:42 event.cpu_locks -- common/autotest_common.sh@1681 -- # lcov --version 00:04:46.993 01:17:42 event.cpu_locks -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:46.993 01:17:42 event.cpu_locks -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:46.993 01:17:42 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:46.993 01:17:42 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:46.993 01:17:42 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:46.993 01:17:42 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:04:46.993 01:17:42 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:04:46.993 01:17:42 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:04:46.993 01:17:42 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:04:46.993 01:17:42 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:04:46.993 01:17:42 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:04:46.993 01:17:42 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:04:46.993 01:17:42 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:46.993 01:17:42 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:04:46.993 01:17:42 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:04:46.993 01:17:42 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:46.993 01:17:42 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:46.993 01:17:42 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:04:46.993 01:17:42 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:04:46.993 01:17:42 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:46.993 01:17:42 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:04:46.993 01:17:42 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:04:46.993 01:17:42 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:04:46.993 01:17:42 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:04:46.993 01:17:42 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:46.993 01:17:42 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:04:46.993 01:17:42 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:04:46.993 01:17:42 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:46.993 01:17:42 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:46.993 01:17:42 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:04:46.993 01:17:42 event.cpu_locks -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:46.993 01:17:42 event.cpu_locks -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:46.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.993 --rc genhtml_branch_coverage=1 00:04:46.993 --rc genhtml_function_coverage=1 00:04:46.993 --rc genhtml_legend=1 00:04:46.993 --rc geninfo_all_blocks=1 00:04:46.993 --rc geninfo_unexecuted_blocks=1 00:04:46.993 00:04:46.993 ' 00:04:46.993 01:17:42 event.cpu_locks -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:46.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.993 --rc genhtml_branch_coverage=1 00:04:46.993 --rc genhtml_function_coverage=1 
00:04:46.993 --rc genhtml_legend=1 00:04:46.993 --rc geninfo_all_blocks=1 00:04:46.993 --rc geninfo_unexecuted_blocks=1 00:04:46.993 00:04:46.993 ' 00:04:46.993 01:17:42 event.cpu_locks -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:46.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.993 --rc genhtml_branch_coverage=1 00:04:46.993 --rc genhtml_function_coverage=1 00:04:46.993 --rc genhtml_legend=1 00:04:46.993 --rc geninfo_all_blocks=1 00:04:46.993 --rc geninfo_unexecuted_blocks=1 00:04:46.993 00:04:46.993 ' 00:04:46.993 01:17:42 event.cpu_locks -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:46.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.993 --rc genhtml_branch_coverage=1 00:04:46.993 --rc genhtml_function_coverage=1 00:04:46.993 --rc genhtml_legend=1 00:04:46.993 --rc geninfo_all_blocks=1 00:04:46.993 --rc geninfo_unexecuted_blocks=1 00:04:46.993 00:04:46.993 ' 00:04:46.993 01:17:42 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:04:46.993 01:17:42 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:04:46.993 01:17:42 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:04:46.993 01:17:42 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:04:46.993 01:17:42 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:46.993 01:17:42 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:46.993 01:17:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:46.993 ************************************ 00:04:46.993 START TEST default_locks 00:04:46.993 ************************************ 00:04:46.993 01:17:42 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:04:46.993 01:17:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=59211 00:04:46.993 01:17:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 59211 00:04:46.993 01:17:42 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 59211 ']' 00:04:46.993 01:17:42 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:46.993 01:17:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:46.993 01:17:42 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:46.993 01:17:42 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:46.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:46.993 01:17:42 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:46.993 01:17:42 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:46.993 [2024-09-28 01:17:42.891323] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
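
Before default_locks starts, autotest_common.sh probes the installed lcov and picks coverage flags: lt 1.15 2 delegates to cmp_versions in scripts/common.sh, which splits both version strings on '.', '-' and ':' and compares them field by field (the traced decimal helper validates each field as a number). A condensed sketch of that comparison; the real helper also supports the '>=', '<=' and '==' operators and normalizes non-numeric fields, which is elided here:

lt() { cmp_versions "$1" '<' "$2"; }

cmp_versions() {
    local ver1 ver2 v
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$3"
    # walk the longer of the two arrays, treating missing fields as 0
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        if ((${ver1[v]:-0} > ${ver2[v]:-0})); then [[ $2 == '>' ]]; return; fi
        if ((${ver1[v]:-0} < ${ver2[v]:-0})); then [[ $2 == '<' ]]; return; fi
    done
    [[ $2 == '==' ]]
}

Here 1 < 2 decides the result on the first field, so lt succeeds and the pre-2.0 style LCOV_OPTS with the --rc lcov_branch_coverage/lcov_function_coverage flags get exported, exactly as the trace shows.
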
00:04:46.994 [2024-09-28 01:17:42.891962] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59211 ] 00:04:47.253 [2024-09-28 01:17:43.039347] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:47.253 [2024-09-28 01:17:43.180138] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.819 01:17:43 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:47.819 01:17:43 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:04:47.819 01:17:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 59211 00:04:47.819 01:17:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 59211 00:04:47.819 01:17:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:48.077 01:17:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 59211 00:04:48.077 01:17:43 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 59211 ']' 00:04:48.077 01:17:43 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 59211 00:04:48.077 01:17:43 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:04:48.077 01:17:43 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:48.077 01:17:43 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59211 00:04:48.077 killing process with pid 59211 00:04:48.077 01:17:43 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:48.077 01:17:43 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:48.077 01:17:43 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59211' 00:04:48.077 01:17:43 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 59211 00:04:48.077 01:17:43 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 59211 00:04:49.451 01:17:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 59211 00:04:49.451 01:17:45 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:04:49.451 01:17:45 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59211 00:04:49.451 01:17:45 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:04:49.451 01:17:45 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:49.451 01:17:45 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:04:49.451 01:17:45 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:49.451 01:17:45 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 59211 00:04:49.451 01:17:45 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 59211 ']' 00:04:49.451 01:17:45 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:49.451 01:17:45 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:49.451 01:17:45 
event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:49.451 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:49.451 01:17:45 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:49.451 01:17:45 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:49.451 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (59211) - No such process 00:04:49.451 ERROR: process (pid: 59211) is no longer running 00:04:49.452 01:17:45 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:49.452 01:17:45 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:04:49.452 01:17:45 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:04:49.452 01:17:45 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:49.452 01:17:45 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:49.452 01:17:45 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:49.452 01:17:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:04:49.452 01:17:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:49.452 01:17:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:04:49.452 01:17:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:49.452 00:04:49.452 real 0m2.413s 00:04:49.452 user 0m2.415s 00:04:49.452 sys 0m0.450s 00:04:49.452 01:17:45 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:49.452 01:17:45 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:49.452 ************************************ 00:04:49.452 END TEST default_locks 00:04:49.452 ************************************ 00:04:49.452 01:17:45 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:04:49.452 01:17:45 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:49.452 01:17:45 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:49.452 01:17:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:49.452 ************************************ 00:04:49.452 START TEST default_locks_via_rpc 00:04:49.452 ************************************ 00:04:49.452 01:17:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:04:49.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
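
The negative check in default_locks above rests on the NOT wrapper from autotest_common.sh: waitforlisten against the already-killed pid 59211 must fail, and NOT inverts that failure into a pass for the test. A sketch reconstructed from the traced variables (es, the valid_exec_arg type check, and the closing (( !es == 0 ))); the real signal-decoding behind (( es > 128 )) is reduced to an assumption here:

NOT() {
    local es=0
    "$@" || es=$?
    if ((es > 128)); then
        # assumption: exit codes above 128 (death by signal) are remapped
        # before the final check in the real helper
        es=1
    fi
    ((!es == 0))   # succeed only when the wrapped command failed
}

So NOT waitforlisten 59211 returns 0 precisely because waitforlisten returned 1 after 'No such process', and the test proceeds to no_locks, which asserts that the glob of leftover lock files expands to an empty list.
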
00:04:49.452 01:17:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=59264 00:04:49.452 01:17:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 59264 00:04:49.452 01:17:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59264 ']' 00:04:49.452 01:17:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:49.452 01:17:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:49.452 01:17:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:49.452 01:17:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:49.452 01:17:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:49.452 01:17:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:49.452 [2024-09-28 01:17:45.332067] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:04:49.452 [2024-09-28 01:17:45.332162] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59264 ] 00:04:49.709 [2024-09-28 01:17:45.468726] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:49.709 [2024-09-28 01:17:45.613709] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.275 01:17:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:50.275 01:17:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:04:50.275 01:17:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:04:50.275 01:17:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:50.275 01:17:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:50.275 01:17:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:50.275 01:17:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:04:50.275 01:17:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:50.275 01:17:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:04:50.275 01:17:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:50.275 01:17:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:04:50.275 01:17:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:50.275 01:17:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:50.275 01:17:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:50.275 01:17:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 59264 00:04:50.275 01:17:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 59264 00:04:50.275 
01:17:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:50.533 01:17:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 59264 00:04:50.533 01:17:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 59264 ']' 00:04:50.533 01:17:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 59264 00:04:50.533 01:17:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:04:50.533 01:17:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:50.533 01:17:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59264 00:04:50.533 killing process with pid 59264 00:04:50.533 01:17:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:50.534 01:17:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:50.534 01:17:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59264' 00:04:50.534 01:17:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 59264 00:04:50.534 01:17:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 59264 00:04:51.909 00:04:51.909 real 0m2.385s 00:04:51.909 user 0m2.403s 00:04:51.909 sys 0m0.431s 00:04:51.909 01:17:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:51.909 ************************************ 00:04:51.909 END TEST default_locks_via_rpc 00:04:51.909 ************************************ 00:04:51.909 01:17:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:51.909 01:17:47 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:04:51.909 01:17:47 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:51.909 01:17:47 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:51.909 01:17:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:51.909 ************************************ 00:04:51.909 START TEST non_locking_app_on_locked_coremask 00:04:51.909 ************************************ 00:04:51.909 01:17:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:04:51.909 01:17:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=59327 00:04:51.909 01:17:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 59327 /var/tmp/spdk.sock 00:04:51.909 01:17:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59327 ']' 00:04:51.909 01:17:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:51.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
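
default_locks_via_rpc, just completed above, exercises the same core-lock state machine but through runtime RPCs rather than process lifetime. The traced sequence reduces to four steps (rpc_cmd is the suite's wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock):

rpc_cmd framework_disable_cpumask_locks              # target releases its core lock
no_locks                                             # assert no spdk_cpu_lock files are held
rpc_cmd framework_enable_cpumask_locks               # target re-claims core 0
lslocks -p "$spdk_tgt_pid" | grep -q spdk_cpu_lock   # locks_exist: the flock is visible again

The lslocks/grep pair is the locks_exist helper seen throughout this section: a claimed core shows up as an flock held by the target's pid on a /var/tmp/spdk_cpu_lock_* file.
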
00:04:51.909 01:17:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:51.909 01:17:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:51.910 01:17:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:51.910 01:17:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:51.910 01:17:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:51.910 [2024-09-28 01:17:47.777786] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:04:51.910 [2024-09-28 01:17:47.777883] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59327 ] 00:04:52.169 [2024-09-28 01:17:47.914795] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:52.169 [2024-09-28 01:17:48.060840] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.733 01:17:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:52.733 01:17:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:04:52.733 01:17:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=59342 00:04:52.733 01:17:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 59342 /var/tmp/spdk2.sock 00:04:52.733 01:17:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59342 ']' 00:04:52.733 01:17:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:52.733 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:52.733 01:17:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:52.733 01:17:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:52.733 01:17:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:52.733 01:17:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:52.733 01:17:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:04:52.991 [2024-09-28 01:17:48.721383] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
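
non_locking_app_on_locked_coremask runs two targets side by side: the first (pid 59327) claims the core-0 lock at startup, while the second (pid 59342) is launched with --disable-cpumask-locks and a separate RPC socket so it can share the core without contending for the flock. A sketch of the traced launch pattern, with the paths as they appear in the log:

bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

$bin -m 0x1 &                                                # claims /var/tmp/spdk_cpu_lock_000
spdk_tgt_pid=$!
waitforlisten "$spdk_tgt_pid"

$bin -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock & # same core, no lock taken
spdk_tgt_pid2=$!
waitforlisten "$spdk_tgt_pid2" /var/tmp/spdk2.sock

Both reactors start on core 0 without complaint, which is the point of the test: disabling the lock opts the second instance out of the conflict check entirely.
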
00:04:52.991 [2024-09-28 01:17:48.721546] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59342 ] 00:04:52.991 [2024-09-28 01:17:48.888161] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:04:52.991 [2024-09-28 01:17:48.888208] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:53.248 [2024-09-28 01:17:49.176945] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.181 01:17:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:54.181 01:17:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:04:54.181 01:17:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 59327 00:04:54.181 01:17:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59327 00:04:54.181 01:17:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:54.749 01:17:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 59327 00:04:54.749 01:17:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 59327 ']' 00:04:54.749 01:17:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 59327 00:04:54.749 01:17:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:04:54.749 01:17:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:54.749 01:17:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59327 00:04:54.749 01:17:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:54.749 01:17:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:54.749 killing process with pid 59327 00:04:54.749 01:17:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59327' 00:04:54.749 01:17:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 59327 00:04:54.749 01:17:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 59327 00:04:57.280 01:17:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 59342 00:04:57.280 01:17:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 59342 ']' 00:04:57.280 01:17:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 59342 00:04:57.280 01:17:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:04:57.280 01:17:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:57.280 01:17:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59342 00:04:57.280 killing process with pid 59342 00:04:57.280 01:17:53 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:57.280 01:17:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:57.280 01:17:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59342' 00:04:57.280 01:17:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 59342 00:04:57.280 01:17:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 59342 00:04:58.657 ************************************ 00:04:58.657 END TEST non_locking_app_on_locked_coremask 00:04:58.657 ************************************ 00:04:58.657 00:04:58.657 real 0m6.733s 00:04:58.657 user 0m7.070s 00:04:58.657 sys 0m0.947s 00:04:58.657 01:17:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:58.657 01:17:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:58.657 01:17:54 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:04:58.657 01:17:54 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:58.657 01:17:54 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:58.657 01:17:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:58.657 ************************************ 00:04:58.657 START TEST locking_app_on_unlocked_coremask 00:04:58.657 ************************************ 00:04:58.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:58.657 01:17:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:04:58.657 01:17:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59434 00:04:58.657 01:17:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59434 /var/tmp/spdk.sock 00:04:58.657 01:17:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59434 ']' 00:04:58.657 01:17:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:58.657 01:17:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:58.657 01:17:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:58.657 01:17:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:04:58.657 01:17:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:58.657 01:17:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:58.657 [2024-09-28 01:17:54.585675] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:04:58.657 [2024-09-28 01:17:54.585798] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59434 ] 00:04:58.916 [2024-09-28 01:17:54.730479] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:04:58.916 [2024-09-28 01:17:54.730519] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:59.175 [2024-09-28 01:17:54.877132] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:59.741 01:17:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:59.741 01:17:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:04:59.741 01:17:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:59.741 01:17:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59450 00:04:59.741 01:17:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59450 /var/tmp/spdk2.sock 00:04:59.741 01:17:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59450 ']' 00:04:59.741 01:17:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:59.741 01:17:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:59.741 01:17:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:59.741 01:17:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:59.741 01:17:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:59.741 [2024-09-28 01:17:55.481212] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
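
locking_app_on_unlocked_coremask flips that arrangement: this time the first target (pid 59434) starts with --disable-cpumask-locks and the second (pid 59450) boots with locking enabled on the same core. Both come up because the unlocked instance never touched /var/tmp/spdk_cpu_lock_000, leaving it free for the second to claim, which locks_exist 59450 then verifies. As a two-line sketch (same binary path as in the previous sketch):

$bin -m 0x1 --disable-cpumask-locks &        # pid 59434: takes no flock
$bin -m 0x1 -r /var/tmp/spdk2.sock &         # pid 59450: claims core 0's lock
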
00:04:59.741 [2024-09-28 01:17:55.481530] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59450 ] 00:04:59.741 [2024-09-28 01:17:55.627513] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:59.999 [2024-09-28 01:17:55.920962] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.933 01:17:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:00.933 01:17:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:00.933 01:17:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59450 00:05:00.933 01:17:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:00.933 01:17:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59450 00:05:01.500 01:17:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59434 00:05:01.500 01:17:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 59434 ']' 00:05:01.500 01:17:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 59434 00:05:01.500 01:17:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:01.501 01:17:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:01.501 01:17:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59434 00:05:01.501 killing process with pid 59434 00:05:01.501 01:17:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:01.501 01:17:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:01.501 01:17:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59434' 00:05:01.501 01:17:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 59434 00:05:01.501 01:17:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 59434 00:05:04.031 01:17:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59450 00:05:04.031 01:17:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 59450 ']' 00:05:04.031 01:17:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 59450 00:05:04.031 01:17:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:04.031 01:17:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:04.031 01:17:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59450 00:05:04.031 killing process with pid 59450 00:05:04.031 01:17:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:04.031 01:17:59 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:04.031 01:17:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59450' 00:05:04.031 01:17:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 59450 00:05:04.031 01:17:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 59450 00:05:05.407 00:05:05.407 real 0m6.504s 00:05:05.407 user 0m6.760s 00:05:05.407 sys 0m0.863s 00:05:05.407 01:18:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:05.407 ************************************ 00:05:05.407 END TEST locking_app_on_unlocked_coremask 00:05:05.407 01:18:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:05.407 ************************************ 00:05:05.407 01:18:01 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:05.407 01:18:01 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:05.407 01:18:01 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:05.407 01:18:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:05.407 ************************************ 00:05:05.407 START TEST locking_app_on_locked_coremask 00:05:05.407 ************************************ 00:05:05.407 01:18:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:05:05.407 01:18:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59551 00:05:05.407 01:18:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59551 /var/tmp/spdk.sock 00:05:05.407 01:18:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59551 ']' 00:05:05.407 01:18:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:05.407 01:18:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:05.408 01:18:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:05.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:05.408 01:18:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:05.408 01:18:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:05.408 01:18:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:05.408 [2024-09-28 01:18:01.129314] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:05:05.408 [2024-09-28 01:18:01.129434] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59551 ] 00:05:05.408 [2024-09-28 01:18:01.278619] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:05.667 [2024-09-28 01:18:01.420078] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.233 01:18:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:06.234 01:18:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:06.234 01:18:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59563 00:05:06.234 01:18:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59563 /var/tmp/spdk2.sock 00:05:06.234 01:18:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:05:06.234 01:18:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:06.234 01:18:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59563 /var/tmp/spdk2.sock 00:05:06.234 01:18:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:06.234 01:18:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:06.234 01:18:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:06.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:06.234 01:18:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:06.234 01:18:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 59563 /var/tmp/spdk2.sock 00:05:06.234 01:18:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59563 ']' 00:05:06.234 01:18:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:06.234 01:18:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:06.234 01:18:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:06.234 01:18:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:06.234 01:18:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:06.234 [2024-09-28 01:18:01.994832] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
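
locking_app_on_locked_coremask is the direct conflict case: pid 59551 holds the core-0 lock, and the NOT-wrapped second launch on the identical mask is expected to die during startup, which it does in the trace that follows, with claim_cpu_cores reporting 'Cannot create lock on core 0, probably process 59551 has claimed it'. The lock is an advisory per-core lock file that can be probed by hand; the flock(1) sketch below is an inference from the lock-file naming and the lslocks output, not a line from app.c:

# while pid 59551 is alive, core 0's lock file is held:
flock -n /var/tmp/spdk_cpu_lock_000 true \
    || echo 'core 0 already claimed by another SPDK app'
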
00:05:06.234 [2024-09-28 01:18:01.994951] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59563 ] 00:05:06.234 [2024-09-28 01:18:02.143007] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59551 has claimed it. 00:05:06.234 [2024-09-28 01:18:02.143055] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:06.872 ERROR: process (pid: 59563) is no longer running 00:05:06.872 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (59563) - No such process 00:05:06.872 01:18:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:06.872 01:18:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:05:06.872 01:18:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:05:06.872 01:18:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:06.872 01:18:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:06.872 01:18:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:06.872 01:18:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59551 00:05:06.872 01:18:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59551 00:05:06.872 01:18:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:07.131 01:18:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59551 00:05:07.131 01:18:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 59551 ']' 00:05:07.131 01:18:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 59551 00:05:07.131 01:18:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:07.131 01:18:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:07.131 01:18:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59551 00:05:07.131 killing process with pid 59551 00:05:07.131 01:18:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:07.131 01:18:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:07.131 01:18:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59551' 00:05:07.131 01:18:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 59551 00:05:07.131 01:18:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 59551 00:05:08.508 00:05:08.508 real 0m3.100s 00:05:08.508 user 0m3.296s 00:05:08.508 sys 0m0.552s 00:05:08.508 01:18:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:08.508 01:18:04 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 00:05:08.508 ************************************ 00:05:08.508 END TEST locking_app_on_locked_coremask 00:05:08.508 ************************************ 00:05:08.508 01:18:04 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:08.508 01:18:04 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:08.508 01:18:04 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:08.508 01:18:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:08.508 ************************************ 00:05:08.508 START TEST locking_overlapped_coremask 00:05:08.508 ************************************ 00:05:08.508 01:18:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:05:08.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:08.508 01:18:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59621 00:05:08.508 01:18:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59621 /var/tmp/spdk.sock 00:05:08.508 01:18:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 59621 ']' 00:05:08.508 01:18:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:08.509 01:18:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:05:08.509 01:18:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:08.509 01:18:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:08.509 01:18:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:08.509 01:18:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:08.509 [2024-09-28 01:18:04.284753] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:05:08.509 [2024-09-28 01:18:04.284876] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59621 ] 00:05:08.509 [2024-09-28 01:18:04.431954] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:08.767 [2024-09-28 01:18:04.575975] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:08.767 [2024-09-28 01:18:04.576286] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:05:08.767 [2024-09-28 01:18:04.576352] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.332 01:18:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:09.332 01:18:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:09.332 01:18:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59634 00:05:09.332 01:18:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59634 /var/tmp/spdk2.sock 00:05:09.332 01:18:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:05:09.332 01:18:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59634 /var/tmp/spdk2.sock 00:05:09.332 01:18:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:09.332 01:18:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:09.332 01:18:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:09.332 01:18:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:09.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:09.332 01:18:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:09.332 01:18:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 59634 /var/tmp/spdk2.sock 00:05:09.332 01:18:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 59634 ']' 00:05:09.332 01:18:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:09.332 01:18:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:09.332 01:18:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:09.332 01:18:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:09.332 01:18:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:09.332 [2024-09-28 01:18:05.182607] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
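
locking_overlapped_coremask widens the conflict to a partial overlap: the surviving target runs with -m 0x7 (cores 0-2, hence the three reactors above) and the NOT-wrapped second launch uses -m 0x1c (cores 2-4). The two masks intersect on a single core, which is exactly the one the claim error below names:

printf 'overlap: 0x%x\n' $((0x7 & 0x1c))   # -> 0x4, i.e. core 2 only
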
00:05:09.332 [2024-09-28 01:18:05.182721] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59634 ] 00:05:09.590 [2024-09-28 01:18:05.337145] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59621 has claimed it. 00:05:09.590 [2024-09-28 01:18:05.337207] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:10.176 ERROR: process (pid: 59634) is no longer running 00:05:10.176 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (59634) - No such process 00:05:10.176 01:18:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:10.176 01:18:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:05:10.176 01:18:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:05:10.176 01:18:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:10.176 01:18:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:10.176 01:18:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:10.176 01:18:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:10.176 01:18:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:10.176 01:18:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:10.176 01:18:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:10.176 01:18:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59621 00:05:10.176 01:18:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 59621 ']' 00:05:10.176 01:18:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 59621 00:05:10.176 01:18:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:05:10.176 01:18:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:10.176 01:18:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59621 00:05:10.176 01:18:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:10.176 01:18:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:10.176 01:18:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59621' 00:05:10.176 killing process with pid 59621 00:05:10.176 01:18:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 59621 00:05:10.176 01:18:05 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 59621 00:05:11.554 00:05:11.554 real 0m2.880s 00:05:11.554 user 0m7.591s 00:05:11.554 sys 0m0.423s 00:05:11.554 01:18:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:11.555 ************************************ 00:05:11.555 END TEST locking_overlapped_coremask 00:05:11.555 ************************************ 00:05:11.555 01:18:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:11.555 01:18:07 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:11.555 01:18:07 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:11.555 01:18:07 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:11.555 01:18:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:11.555 ************************************ 00:05:11.555 START TEST locking_overlapped_coremask_via_rpc 00:05:11.555 ************************************ 00:05:11.555 01:18:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:05:11.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:11.555 01:18:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59687 00:05:11.555 01:18:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59687 /var/tmp/spdk.sock 00:05:11.555 01:18:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59687 ']' 00:05:11.555 01:18:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:11.555 01:18:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:11.555 01:18:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:11.555 01:18:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:11.555 01:18:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:11.555 01:18:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:11.555 [2024-09-28 01:18:07.217701] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:05:11.555 [2024-09-28 01:18:07.217971] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59687 ] 00:05:11.555 [2024-09-28 01:18:07.366841] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
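A minimal sketch of the scenario this test drives, assuming the binary and socket paths shown in the surrounding trace: both targets start with cpumask locks disabled, the locks are only taken afterwards over JSON-RPC, and the second claim on the shared core is expected to fail (rpc.py's default socket, /var/tmp/spdk.sock, is assumed for the first target).

    build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &                           # first target, cores 0-2, no locks taken yet
    build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &   # second target, cores 2-4, overlaps on core 2
    scripts/rpc.py framework_enable_cpumask_locks                                 # first target claims cores 0-2
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks          # expected to fail: core 2 is already claimed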
00:05:11.555 [2024-09-28 01:18:07.366875] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:11.812 [2024-09-28 01:18:07.513578] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:11.812 [2024-09-28 01:18:07.513872] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.812 [2024-09-28 01:18:07.513891] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:05:12.376 01:18:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:12.376 01:18:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:12.376 01:18:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59705 00:05:12.376 01:18:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59705 /var/tmp/spdk2.sock 00:05:12.376 01:18:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:12.376 01:18:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59705 ']' 00:05:12.377 01:18:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:12.377 01:18:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:12.377 01:18:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:12.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:12.377 01:18:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:12.377 01:18:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.377 [2024-09-28 01:18:08.121366] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:05:12.377 [2024-09-28 01:18:08.121637] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59705 ] 00:05:12.377 [2024-09-28 01:18:08.277184] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
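Once framework_enable_cpumask_locks succeeds, the target holds one lock file per claimed core under /var/tmp/spdk_cpu_lock_NNN. The check_remaining_locks helper traced at cpu_locks.sh@36-38 verifies that exactly the expected cores are held; a condensed sketch of that check for the 0x7 mask:

    locks=(/var/tmp/spdk_cpu_lock_*)                     # lock files currently present
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})   # cores 0-2 of coremask 0x7
    [[ "${locks[*]}" == "${locks_expected[*]}" ]]        # passes only if exactly cores 000-002 hold locks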
00:05:12.377 [2024-09-28 01:18:08.277233] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:12.943 [2024-09-28 01:18:08.638011] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:05:12.943 [2024-09-28 01:18:08.638142] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:05:12.943 [2024-09-28 01:18:08.638160] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:05:13.876 01:18:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:13.876 01:18:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:13.876 01:18:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:13.876 01:18:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.876 01:18:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.876 01:18:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.876 01:18:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:13.876 01:18:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:13.876 01:18:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:13.876 01:18:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:13.876 01:18:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:13.876 01:18:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:13.876 01:18:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:13.876 01:18:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:13.876 01:18:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.876 01:18:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.876 [2024-09-28 01:18:09.640330] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59687 has claimed it. 
00:05:13.876 request: 00:05:13.876 { 00:05:13.876 "method": "framework_enable_cpumask_locks", 00:05:13.876 "req_id": 1 00:05:13.876 } 00:05:13.876 Got JSON-RPC error response 00:05:13.876 response: 00:05:13.876 { 00:05:13.876 "code": -32603, 00:05:13.876 "message": "Failed to claim CPU core: 2" 00:05:13.876 } 00:05:13.876 01:18:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:13.876 01:18:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:13.876 01:18:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:13.876 01:18:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:13.876 01:18:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:13.876 01:18:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59687 /var/tmp/spdk.sock 00:05:13.876 01:18:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59687 ']' 00:05:13.876 01:18:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:13.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:13.876 01:18:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:13.876 01:18:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:13.876 01:18:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:13.876 01:18:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:14.134 01:18:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:14.134 01:18:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:14.134 01:18:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59705 /var/tmp/spdk2.sock 00:05:14.134 01:18:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59705 ']' 00:05:14.134 01:18:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:14.134 01:18:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:14.134 01:18:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:05:14.134 01:18:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:14.134 01:18:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.134 ************************************ 00:05:14.134 END TEST locking_overlapped_coremask_via_rpc 00:05:14.134 ************************************ 00:05:14.134 01:18:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:14.134 01:18:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:14.134 01:18:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:14.134 01:18:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:14.134 01:18:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:14.134 01:18:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:14.134 00:05:14.134 real 0m2.912s 00:05:14.134 user 0m1.043s 00:05:14.134 sys 0m0.133s 00:05:14.134 01:18:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:14.134 01:18:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.392 01:18:10 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:14.392 01:18:10 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59687 ]] 00:05:14.392 01:18:10 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59687 00:05:14.392 01:18:10 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59687 ']' 00:05:14.392 01:18:10 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59687 00:05:14.392 01:18:10 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:05:14.392 01:18:10 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:14.392 01:18:10 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59687 00:05:14.392 killing process with pid 59687 00:05:14.392 01:18:10 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:14.392 01:18:10 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:14.392 01:18:10 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59687' 00:05:14.392 01:18:10 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 59687 00:05:14.392 01:18:10 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 59687 00:05:15.790 01:18:11 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59705 ]] 00:05:15.790 01:18:11 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59705 00:05:15.790 01:18:11 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59705 ']' 00:05:15.790 01:18:11 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59705 00:05:15.790 01:18:11 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:05:15.790 01:18:11 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:15.790 
01:18:11 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59705 00:05:15.790 killing process with pid 59705 00:05:15.790 01:18:11 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:05:15.790 01:18:11 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:05:15.790 01:18:11 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59705' 00:05:15.790 01:18:11 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 59705 00:05:15.790 01:18:11 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 59705 00:05:16.726 01:18:12 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:16.726 Process with pid 59687 is not found 00:05:16.726 Process with pid 59705 is not found 00:05:16.726 01:18:12 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:16.726 01:18:12 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59687 ]] 00:05:16.726 01:18:12 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59687 00:05:16.726 01:18:12 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59687 ']' 00:05:16.726 01:18:12 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59687 00:05:16.726 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (59687) - No such process 00:05:16.726 01:18:12 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 59687 is not found' 00:05:16.726 01:18:12 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59705 ]] 00:05:16.727 01:18:12 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59705 00:05:16.727 01:18:12 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59705 ']' 00:05:16.727 01:18:12 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59705 00:05:16.727 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (59705) - No such process 00:05:16.727 01:18:12 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 59705 is not found' 00:05:16.727 01:18:12 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:16.727 ************************************ 00:05:16.727 END TEST cpu_locks 00:05:16.727 ************************************ 00:05:16.727 00:05:16.727 real 0m29.984s 00:05:16.727 user 0m50.707s 00:05:16.727 sys 0m4.585s 00:05:16.727 01:18:12 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:16.727 01:18:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:16.985 ************************************ 00:05:16.985 END TEST event 00:05:16.985 ************************************ 00:05:16.985 00:05:16.985 real 0m58.414s 00:05:16.985 user 1m47.404s 00:05:16.985 sys 0m7.424s 00:05:16.985 01:18:12 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:16.985 01:18:12 event -- common/autotest_common.sh@10 -- # set +x 00:05:16.986 01:18:12 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:16.986 01:18:12 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:16.986 01:18:12 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:16.986 01:18:12 -- common/autotest_common.sh@10 -- # set +x 00:05:16.986 ************************************ 00:05:16.986 START TEST thread 00:05:16.986 ************************************ 00:05:16.986 01:18:12 thread -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:16.986 * Looking for test storage... 
00:05:16.986 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:05:16.986 01:18:12 thread -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:16.986 01:18:12 thread -- common/autotest_common.sh@1681 -- # lcov --version 00:05:16.986 01:18:12 thread -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:16.986 01:18:12 thread -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:16.986 01:18:12 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:16.986 01:18:12 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:16.986 01:18:12 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:16.986 01:18:12 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:16.986 01:18:12 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:16.986 01:18:12 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:16.986 01:18:12 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:16.986 01:18:12 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:16.986 01:18:12 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:16.986 01:18:12 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:16.986 01:18:12 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:16.986 01:18:12 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:16.986 01:18:12 thread -- scripts/common.sh@345 -- # : 1 00:05:16.986 01:18:12 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:16.986 01:18:12 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:16.986 01:18:12 thread -- scripts/common.sh@365 -- # decimal 1 00:05:16.986 01:18:12 thread -- scripts/common.sh@353 -- # local d=1 00:05:16.986 01:18:12 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:16.986 01:18:12 thread -- scripts/common.sh@355 -- # echo 1 00:05:16.986 01:18:12 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:16.986 01:18:12 thread -- scripts/common.sh@366 -- # decimal 2 00:05:16.986 01:18:12 thread -- scripts/common.sh@353 -- # local d=2 00:05:16.986 01:18:12 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:16.986 01:18:12 thread -- scripts/common.sh@355 -- # echo 2 00:05:16.986 01:18:12 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:16.986 01:18:12 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:16.986 01:18:12 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:16.986 01:18:12 thread -- scripts/common.sh@368 -- # return 0 00:05:16.986 01:18:12 thread -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:16.986 01:18:12 thread -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:16.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.986 --rc genhtml_branch_coverage=1 00:05:16.986 --rc genhtml_function_coverage=1 00:05:16.986 --rc genhtml_legend=1 00:05:16.986 --rc geninfo_all_blocks=1 00:05:16.986 --rc geninfo_unexecuted_blocks=1 00:05:16.986 00:05:16.986 ' 00:05:16.986 01:18:12 thread -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:16.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.986 --rc genhtml_branch_coverage=1 00:05:16.986 --rc genhtml_function_coverage=1 00:05:16.986 --rc genhtml_legend=1 00:05:16.986 --rc geninfo_all_blocks=1 00:05:16.986 --rc geninfo_unexecuted_blocks=1 00:05:16.986 00:05:16.986 ' 00:05:16.986 01:18:12 thread -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:16.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:05:16.986 --rc genhtml_branch_coverage=1 00:05:16.986 --rc genhtml_function_coverage=1 00:05:16.986 --rc genhtml_legend=1 00:05:16.986 --rc geninfo_all_blocks=1 00:05:16.986 --rc geninfo_unexecuted_blocks=1 00:05:16.986 00:05:16.986 ' 00:05:16.986 01:18:12 thread -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:16.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.986 --rc genhtml_branch_coverage=1 00:05:16.986 --rc genhtml_function_coverage=1 00:05:16.986 --rc genhtml_legend=1 00:05:16.986 --rc geninfo_all_blocks=1 00:05:16.986 --rc geninfo_unexecuted_blocks=1 00:05:16.986 00:05:16.986 ' 00:05:16.986 01:18:12 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:16.986 01:18:12 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:05:16.986 01:18:12 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:16.986 01:18:12 thread -- common/autotest_common.sh@10 -- # set +x 00:05:16.986 ************************************ 00:05:16.986 START TEST thread_poller_perf 00:05:16.986 ************************************ 00:05:16.986 01:18:12 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:16.986 [2024-09-28 01:18:12.902386] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:05:16.986 [2024-09-28 01:18:12.902561] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59859 ] 00:05:17.244 [2024-09-28 01:18:13.045450] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.503 [2024-09-28 01:18:13.184956] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.503 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:05:18.880 ====================================== 00:05:18.880 busy:2612504412 (cyc) 00:05:18.880 total_run_count: 407000 00:05:18.880 tsc_hz: 2600000000 (cyc) 00:05:18.880 ====================================== 00:05:18.880 poller_cost: 6418 (cyc), 2468 (nsec) 00:05:18.880 00:05:18.880 ************************************ 00:05:18.880 END TEST thread_poller_perf 00:05:18.880 ************************************ 00:05:18.880 real 0m1.520s 00:05:18.880 user 0m1.341s 00:05:18.880 sys 0m0.073s 00:05:18.880 01:18:14 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:18.880 01:18:14 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:18.880 01:18:14 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:18.880 01:18:14 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:05:18.880 01:18:14 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:18.880 01:18:14 thread -- common/autotest_common.sh@10 -- # set +x 00:05:18.880 ************************************ 00:05:18.880 START TEST thread_poller_perf 00:05:18.880 ************************************ 00:05:18.880 01:18:14 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:18.880 [2024-09-28 01:18:14.468011] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:05:18.880 [2024-09-28 01:18:14.468259] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59896 ] 00:05:18.880 [2024-09-28 01:18:14.609381] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.880 [2024-09-28 01:18:14.792269] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.880 Running 1000 pollers for 1 seconds with 0 microseconds period. 
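How the banner above relates its numbers, as a quick check with this run's values: poller_cost in cycles is busy cycles divided by total_run_count, and the nanosecond figure rescales that by tsc_hz.

    echo $(( 2612504412 / 407000 ))             # -> 6418 cycles per poller invocation
    echo $(( 6418 * 1000000000 / 2600000000 ))  # -> 2468 ns at a 2.6 GHz TSC

The same arithmetic reproduces the 0-microsecond-period run below, which makes roughly 10x more calls in the same second: 2603614790 / 3974000 = 655 cycles, i.e. 251 ns per invocation.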
00:05:20.260 ====================================== 00:05:20.260 busy:2603614790 (cyc) 00:05:20.260 total_run_count: 3974000 00:05:20.260 tsc_hz: 2600000000 (cyc) 00:05:20.260 ====================================== 00:05:20.260 poller_cost: 655 (cyc), 251 (nsec) 00:05:20.260 00:05:20.260 real 0m1.620s 00:05:20.260 user 0m1.430s 00:05:20.260 sys 0m0.082s 00:05:20.260 01:18:16 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:20.260 01:18:16 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:20.260 ************************************ 00:05:20.260 END TEST thread_poller_perf 00:05:20.260 ************************************ 00:05:20.260 01:18:16 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:20.260 ************************************ 00:05:20.260 END TEST thread 00:05:20.260 ************************************ 00:05:20.260 00:05:20.260 real 0m3.373s 00:05:20.260 user 0m2.898s 00:05:20.260 sys 0m0.255s 00:05:20.260 01:18:16 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:20.260 01:18:16 thread -- common/autotest_common.sh@10 -- # set +x 00:05:20.260 01:18:16 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:05:20.260 01:18:16 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:20.260 01:18:16 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:20.260 01:18:16 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:20.260 01:18:16 -- common/autotest_common.sh@10 -- # set +x 00:05:20.260 ************************************ 00:05:20.260 START TEST app_cmdline 00:05:20.260 ************************************ 00:05:20.260 01:18:16 app_cmdline -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:20.521 * Looking for test storage... 00:05:20.521 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:05:20.521 01:18:16 app_cmdline -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:20.521 01:18:16 app_cmdline -- common/autotest_common.sh@1681 -- # lcov --version 00:05:20.521 01:18:16 app_cmdline -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:20.521 01:18:16 app_cmdline -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:20.521 01:18:16 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:20.521 01:18:16 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:20.521 01:18:16 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:20.521 01:18:16 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:05:20.521 01:18:16 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:05:20.521 01:18:16 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:05:20.521 01:18:16 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:05:20.521 01:18:16 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:05:20.521 01:18:16 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:05:20.521 01:18:16 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:05:20.521 01:18:16 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:20.521 01:18:16 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:05:20.521 01:18:16 app_cmdline -- scripts/common.sh@345 -- # : 1 00:05:20.521 01:18:16 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:20.521 01:18:16 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:20.521 01:18:16 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:05:20.521 01:18:16 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:05:20.521 01:18:16 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:20.521 01:18:16 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:05:20.521 01:18:16 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:05:20.521 01:18:16 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:05:20.521 01:18:16 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:05:20.521 01:18:16 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:20.521 01:18:16 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:05:20.521 01:18:16 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:05:20.521 01:18:16 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:20.521 01:18:16 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:20.521 01:18:16 app_cmdline -- scripts/common.sh@368 -- # return 0 00:05:20.521 01:18:16 app_cmdline -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:20.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:20.521 01:18:16 app_cmdline -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:20.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.521 --rc genhtml_branch_coverage=1 00:05:20.521 --rc genhtml_function_coverage=1 00:05:20.521 --rc genhtml_legend=1 00:05:20.521 --rc geninfo_all_blocks=1 00:05:20.521 --rc geninfo_unexecuted_blocks=1 00:05:20.521 00:05:20.521 ' 00:05:20.521 01:18:16 app_cmdline -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:20.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.521 --rc genhtml_branch_coverage=1 00:05:20.521 --rc genhtml_function_coverage=1 00:05:20.521 --rc genhtml_legend=1 00:05:20.521 --rc geninfo_all_blocks=1 00:05:20.521 --rc geninfo_unexecuted_blocks=1 00:05:20.521 00:05:20.521 ' 00:05:20.521 01:18:16 app_cmdline -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:20.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.521 --rc genhtml_branch_coverage=1 00:05:20.521 --rc genhtml_function_coverage=1 00:05:20.521 --rc genhtml_legend=1 00:05:20.521 --rc geninfo_all_blocks=1 00:05:20.521 --rc geninfo_unexecuted_blocks=1 00:05:20.521 00:05:20.521 ' 00:05:20.521 01:18:16 app_cmdline -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:20.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.521 --rc genhtml_branch_coverage=1 00:05:20.521 --rc genhtml_function_coverage=1 00:05:20.521 --rc genhtml_legend=1 00:05:20.521 --rc geninfo_all_blocks=1 00:05:20.521 --rc geninfo_unexecuted_blocks=1 00:05:20.521 00:05:20.521 ' 00:05:20.521 01:18:16 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:20.521 01:18:16 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59985 00:05:20.521 01:18:16 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59985 00:05:20.521 01:18:16 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 59985 ']' 00:05:20.521 01:18:16 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:20.521 01:18:16 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:20.521 01:18:16 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:05:20.521 01:18:16 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:20.521 01:18:16 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:20.521 01:18:16 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:20.521 [2024-09-28 01:18:16.360471] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:05:20.521 [2024-09-28 01:18:16.360590] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59985 ] 00:05:20.781 [2024-09-28 01:18:16.511903] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.781 [2024-09-28 01:18:16.691055] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.352 01:18:17 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:21.352 01:18:17 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:05:21.352 01:18:17 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:05:21.610 { 00:05:21.610 "version": "SPDK v25.01-pre git sha1 09cc66129", 00:05:21.610 "fields": { 00:05:21.610 "major": 25, 00:05:21.610 "minor": 1, 00:05:21.610 "patch": 0, 00:05:21.610 "suffix": "-pre", 00:05:21.610 "commit": "09cc66129" 00:05:21.610 } 00:05:21.610 } 00:05:21.610 01:18:17 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:21.610 01:18:17 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:21.610 01:18:17 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:21.610 01:18:17 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:21.610 01:18:17 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:21.610 01:18:17 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:21.610 01:18:17 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:21.610 01:18:17 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:21.610 01:18:17 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:21.610 01:18:17 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:21.610 01:18:17 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:21.610 01:18:17 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:21.610 01:18:17 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:21.610 01:18:17 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:05:21.610 01:18:17 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:21.610 01:18:17 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:21.610 01:18:17 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:21.610 01:18:17 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:21.610 01:18:17 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:21.610 01:18:17 app_cmdline -- common/autotest_common.sh@644 -- # type -P 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:21.610 01:18:17 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:21.610 01:18:17 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:21.610 01:18:17 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:05:21.610 01:18:17 app_cmdline -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:21.871 request: 00:05:21.871 { 00:05:21.871 "method": "env_dpdk_get_mem_stats", 00:05:21.871 "req_id": 1 00:05:21.871 } 00:05:21.871 Got JSON-RPC error response 00:05:21.871 response: 00:05:21.871 { 00:05:21.871 "code": -32601, 00:05:21.871 "message": "Method not found" 00:05:21.871 } 00:05:21.871 01:18:17 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:05:21.871 01:18:17 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:21.871 01:18:17 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:21.871 01:18:17 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:21.871 01:18:17 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59985 00:05:21.871 01:18:17 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 59985 ']' 00:05:21.871 01:18:17 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 59985 00:05:21.871 01:18:17 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:05:21.871 01:18:17 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:21.871 01:18:17 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59985 00:05:21.871 killing process with pid 59985 00:05:21.871 01:18:17 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:21.871 01:18:17 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:21.871 01:18:17 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59985' 00:05:21.871 01:18:17 app_cmdline -- common/autotest_common.sh@969 -- # kill 59985 00:05:21.871 01:18:17 app_cmdline -- common/autotest_common.sh@974 -- # wait 59985 00:05:23.281 00:05:23.281 real 0m3.031s 00:05:23.281 user 0m3.258s 00:05:23.281 sys 0m0.411s 00:05:23.281 01:18:19 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:23.281 ************************************ 00:05:23.281 01:18:19 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:23.281 END TEST app_cmdline 00:05:23.281 ************************************ 00:05:23.540 01:18:19 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:05:23.540 01:18:19 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:23.540 01:18:19 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:23.540 01:18:19 -- common/autotest_common.sh@10 -- # set +x 00:05:23.540 ************************************ 00:05:23.540 START TEST version 00:05:23.540 ************************************ 00:05:23.540 01:18:19 version -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:05:23.540 * Looking for test storage... 
00:05:23.540 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:05:23.540 01:18:19 version -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:23.540 01:18:19 version -- common/autotest_common.sh@1681 -- # lcov --version 00:05:23.540 01:18:19 version -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:23.540 01:18:19 version -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:23.540 01:18:19 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:23.540 01:18:19 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:23.540 01:18:19 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:23.540 01:18:19 version -- scripts/common.sh@336 -- # IFS=.-: 00:05:23.540 01:18:19 version -- scripts/common.sh@336 -- # read -ra ver1 00:05:23.540 01:18:19 version -- scripts/common.sh@337 -- # IFS=.-: 00:05:23.540 01:18:19 version -- scripts/common.sh@337 -- # read -ra ver2 00:05:23.540 01:18:19 version -- scripts/common.sh@338 -- # local 'op=<' 00:05:23.540 01:18:19 version -- scripts/common.sh@340 -- # ver1_l=2 00:05:23.540 01:18:19 version -- scripts/common.sh@341 -- # ver2_l=1 00:05:23.540 01:18:19 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:23.540 01:18:19 version -- scripts/common.sh@344 -- # case "$op" in 00:05:23.540 01:18:19 version -- scripts/common.sh@345 -- # : 1 00:05:23.540 01:18:19 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:23.540 01:18:19 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:23.540 01:18:19 version -- scripts/common.sh@365 -- # decimal 1 00:05:23.540 01:18:19 version -- scripts/common.sh@353 -- # local d=1 00:05:23.540 01:18:19 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:23.540 01:18:19 version -- scripts/common.sh@355 -- # echo 1 00:05:23.540 01:18:19 version -- scripts/common.sh@365 -- # ver1[v]=1 00:05:23.540 01:18:19 version -- scripts/common.sh@366 -- # decimal 2 00:05:23.540 01:18:19 version -- scripts/common.sh@353 -- # local d=2 00:05:23.540 01:18:19 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:23.540 01:18:19 version -- scripts/common.sh@355 -- # echo 2 00:05:23.540 01:18:19 version -- scripts/common.sh@366 -- # ver2[v]=2 00:05:23.540 01:18:19 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:23.540 01:18:19 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:23.540 01:18:19 version -- scripts/common.sh@368 -- # return 0 00:05:23.540 01:18:19 version -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:23.540 01:18:19 version -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:23.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.540 --rc genhtml_branch_coverage=1 00:05:23.540 --rc genhtml_function_coverage=1 00:05:23.540 --rc genhtml_legend=1 00:05:23.540 --rc geninfo_all_blocks=1 00:05:23.540 --rc geninfo_unexecuted_blocks=1 00:05:23.540 00:05:23.540 ' 00:05:23.540 01:18:19 version -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:23.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.540 --rc genhtml_branch_coverage=1 00:05:23.540 --rc genhtml_function_coverage=1 00:05:23.540 --rc genhtml_legend=1 00:05:23.540 --rc geninfo_all_blocks=1 00:05:23.540 --rc geninfo_unexecuted_blocks=1 00:05:23.540 00:05:23.540 ' 00:05:23.540 01:18:19 version -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:23.540 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:05:23.540 --rc genhtml_branch_coverage=1 00:05:23.540 --rc genhtml_function_coverage=1 00:05:23.540 --rc genhtml_legend=1 00:05:23.540 --rc geninfo_all_blocks=1 00:05:23.540 --rc geninfo_unexecuted_blocks=1 00:05:23.540 00:05:23.540 ' 00:05:23.540 01:18:19 version -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:23.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.540 --rc genhtml_branch_coverage=1 00:05:23.540 --rc genhtml_function_coverage=1 00:05:23.540 --rc genhtml_legend=1 00:05:23.540 --rc geninfo_all_blocks=1 00:05:23.540 --rc geninfo_unexecuted_blocks=1 00:05:23.540 00:05:23.540 ' 00:05:23.540 01:18:19 version -- app/version.sh@17 -- # get_header_version major 00:05:23.540 01:18:19 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:23.540 01:18:19 version -- app/version.sh@14 -- # tr -d '"' 00:05:23.540 01:18:19 version -- app/version.sh@14 -- # cut -f2 00:05:23.540 01:18:19 version -- app/version.sh@17 -- # major=25 00:05:23.540 01:18:19 version -- app/version.sh@18 -- # get_header_version minor 00:05:23.540 01:18:19 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:23.540 01:18:19 version -- app/version.sh@14 -- # cut -f2 00:05:23.540 01:18:19 version -- app/version.sh@14 -- # tr -d '"' 00:05:23.540 01:18:19 version -- app/version.sh@18 -- # minor=1 00:05:23.540 01:18:19 version -- app/version.sh@19 -- # get_header_version patch 00:05:23.540 01:18:19 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:23.540 01:18:19 version -- app/version.sh@14 -- # tr -d '"' 00:05:23.540 01:18:19 version -- app/version.sh@14 -- # cut -f2 00:05:23.540 01:18:19 version -- app/version.sh@19 -- # patch=0 00:05:23.540 01:18:19 version -- app/version.sh@20 -- # get_header_version suffix 00:05:23.540 01:18:19 version -- app/version.sh@14 -- # cut -f2 00:05:23.540 01:18:19 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:23.540 01:18:19 version -- app/version.sh@14 -- # tr -d '"' 00:05:23.540 01:18:19 version -- app/version.sh@20 -- # suffix=-pre 00:05:23.540 01:18:19 version -- app/version.sh@22 -- # version=25.1 00:05:23.540 01:18:19 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:23.540 01:18:19 version -- app/version.sh@28 -- # version=25.1rc0 00:05:23.540 01:18:19 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:05:23.540 01:18:19 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:23.540 01:18:19 version -- app/version.sh@30 -- # py_version=25.1rc0 00:05:23.540 01:18:19 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:05:23.540 ************************************ 00:05:23.540 END TEST version 00:05:23.540 ************************************ 00:05:23.540 00:05:23.540 real 0m0.195s 00:05:23.540 user 0m0.115s 00:05:23.540 sys 0m0.102s 00:05:23.540 01:18:19 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:23.540 01:18:19 version -- common/autotest_common.sh@10 -- # set +x 00:05:23.540 01:18:19 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:05:23.540 01:18:19 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:05:23.540 01:18:19 -- spdk/autotest.sh@194 -- # uname -s 00:05:23.540 01:18:19 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:05:23.540 01:18:19 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:23.540 01:18:19 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:23.540 01:18:19 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:05:23.540 01:18:19 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:05:23.540 01:18:19 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:05:23.540 01:18:19 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:23.540 01:18:19 -- common/autotest_common.sh@10 -- # set +x 00:05:23.541 ************************************ 00:05:23.541 START TEST blockdev_nvme 00:05:23.541 ************************************ 00:05:23.541 01:18:19 blockdev_nvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:05:23.802 * Looking for test storage... 00:05:23.802 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:05:23.802 01:18:19 blockdev_nvme -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:23.802 01:18:19 blockdev_nvme -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:23.802 01:18:19 blockdev_nvme -- common/autotest_common.sh@1681 -- # lcov --version 00:05:23.802 01:18:19 blockdev_nvme -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:23.802 01:18:19 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:23.802 01:18:19 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:23.802 01:18:19 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:23.802 01:18:19 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:05:23.802 01:18:19 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:05:23.802 01:18:19 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:05:23.802 01:18:19 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:05:23.802 01:18:19 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:05:23.802 01:18:19 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:05:23.802 01:18:19 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:05:23.802 01:18:19 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:23.802 01:18:19 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:05:23.802 01:18:19 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:05:23.802 01:18:19 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:23.802 01:18:19 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:23.802 01:18:19 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:05:23.802 01:18:19 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:05:23.802 01:18:19 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:23.802 01:18:19 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:05:23.802 01:18:19 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:05:23.802 01:18:19 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:05:23.803 01:18:19 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:05:23.803 01:18:19 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:23.803 01:18:19 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:05:23.803 01:18:19 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:05:23.803 01:18:19 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:23.803 01:18:19 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:23.803 01:18:19 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:05:23.803 01:18:19 blockdev_nvme -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:23.803 01:18:19 blockdev_nvme -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:23.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.803 --rc genhtml_branch_coverage=1 00:05:23.803 --rc genhtml_function_coverage=1 00:05:23.803 --rc genhtml_legend=1 00:05:23.803 --rc geninfo_all_blocks=1 00:05:23.803 --rc geninfo_unexecuted_blocks=1 00:05:23.803 00:05:23.803 ' 00:05:23.803 01:18:19 blockdev_nvme -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:23.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.803 --rc genhtml_branch_coverage=1 00:05:23.803 --rc genhtml_function_coverage=1 00:05:23.803 --rc genhtml_legend=1 00:05:23.803 --rc geninfo_all_blocks=1 00:05:23.803 --rc geninfo_unexecuted_blocks=1 00:05:23.803 00:05:23.803 ' 00:05:23.803 01:18:19 blockdev_nvme -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:23.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.803 --rc genhtml_branch_coverage=1 00:05:23.803 --rc genhtml_function_coverage=1 00:05:23.803 --rc genhtml_legend=1 00:05:23.803 --rc geninfo_all_blocks=1 00:05:23.803 --rc geninfo_unexecuted_blocks=1 00:05:23.803 00:05:23.803 ' 00:05:23.803 01:18:19 blockdev_nvme -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:23.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.803 --rc genhtml_branch_coverage=1 00:05:23.803 --rc genhtml_function_coverage=1 00:05:23.803 --rc genhtml_legend=1 00:05:23.803 --rc geninfo_all_blocks=1 00:05:23.803 --rc geninfo_unexecuted_blocks=1 00:05:23.803 00:05:23.803 ' 00:05:23.803 01:18:19 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:23.803 01:18:19 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:05:23.803 01:18:19 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:05:23.803 01:18:19 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:05:23.803 01:18:19 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:05:23.803 01:18:19 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:05:23.803 01:18:19 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:05:23.803 01:18:19 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:05:23.803 01:18:19 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:05:23.803 01:18:19 blockdev_nvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:05:23.803 01:18:19 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:05:23.803 01:18:19 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:05:23.803 01:18:19 blockdev_nvme -- bdev/blockdev.sh@673 -- # uname -s 00:05:23.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:23.803 01:18:19 blockdev_nvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:05:23.803 01:18:19 blockdev_nvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:05:23.803 01:18:19 blockdev_nvme -- bdev/blockdev.sh@681 -- # test_type=nvme 00:05:23.803 01:18:19 blockdev_nvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:05:23.803 01:18:19 blockdev_nvme -- bdev/blockdev.sh@683 -- # dek= 00:05:23.803 01:18:19 blockdev_nvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:05:23.803 01:18:19 blockdev_nvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:05:23.803 01:18:19 blockdev_nvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:05:23.803 01:18:19 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == bdev ]] 00:05:23.803 01:18:19 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == crypto_* ]] 00:05:23.803 01:18:19 blockdev_nvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:05:23.803 01:18:19 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=60157 00:05:23.803 01:18:19 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:05:23.803 01:18:19 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 60157 00:05:23.803 01:18:19 blockdev_nvme -- common/autotest_common.sh@831 -- # '[' -z 60157 ']' 00:05:23.803 01:18:19 blockdev_nvme -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:23.803 01:18:19 blockdev_nvme -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:23.803 01:18:19 blockdev_nvme -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:23.803 01:18:19 blockdev_nvme -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:23.803 01:18:19 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:23.803 01:18:19 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:05:23.803 [2024-09-28 01:18:19.725382] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:05:23.803 [2024-09-28 01:18:19.725543] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60157 ] 00:05:24.064 [2024-09-28 01:18:19.885243] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.325 [2024-09-28 01:18:20.065186] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.896 01:18:20 blockdev_nvme -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:24.896 01:18:20 blockdev_nvme -- common/autotest_common.sh@864 -- # return 0 00:05:24.896 01:18:20 blockdev_nvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:05:24.896 01:18:20 blockdev_nvme -- bdev/blockdev.sh@698 -- # setup_nvme_conf 00:05:24.896 01:18:20 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:05:24.896 01:18:20 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:05:24.896 01:18:20 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:24.896 01:18:20 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:05:24.896 01:18:20 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:24.896 01:18:20 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:25.157 01:18:20 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:25.157 01:18:20 blockdev_nvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:05:25.157 01:18:20 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:25.157 01:18:20 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:25.157 01:18:20 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:25.157 01:18:20 blockdev_nvme -- bdev/blockdev.sh@739 -- # cat 00:05:25.157 01:18:20 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:05:25.157 01:18:20 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:25.157 01:18:20 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:25.157 01:18:21 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:25.157 01:18:21 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:05:25.157 01:18:21 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:25.157 01:18:21 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:25.157 01:18:21 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:25.157 01:18:21 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:05:25.157 01:18:21 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:25.157 01:18:21 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:25.157 01:18:21 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:25.157 01:18:21 blockdev_nvme -- 
bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:05:25.157 01:18:21 blockdev_nvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:05:25.157 01:18:21 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:25.157 01:18:21 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:25.157 01:18:21 blockdev_nvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:05:25.418 01:18:21 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:25.418 01:18:21 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:05:25.418 01:18:21 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:05:25.418 01:18:21 blockdev_nvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "94f5ad9b-7db9-4008-bf88-aa8666318902"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "94f5ad9b-7db9-4008-bf88-aa8666318902",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "b6886e43-2097-4f42-bacb-1d37370caa32"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "b6886e43-2097-4f42-bacb-1d37370caa32",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "5f2b21ce-2d1a-4a28-85cc-045be21de1b2"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "5f2b21ce-2d1a-4a28-85cc-045be21de1b2",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "36d65f29-786a-4c25-8a87-236dcd468a92"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "36d65f29-786a-4c25-8a87-236dcd468a92",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "51340480-f2a7-4fae-97fb-bf70f94394b0"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "51340480-f2a7-4fae-97fb-bf70f94394b0",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "39f25b2d-6fc9-4bce-9f30-03192c47daf7"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "39f25b2d-6fc9-4bce-9f30-03192c47daf7",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:05:25.418 01:18:21 blockdev_nvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:05:25.418 01:18:21 blockdev_nvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:05:25.418 01:18:21 blockdev_nvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:05:25.418 01:18:21 blockdev_nvme -- bdev/blockdev.sh@753 -- # killprocess 60157 00:05:25.418 01:18:21 blockdev_nvme -- common/autotest_common.sh@950 -- # '[' -z 60157 ']' 00:05:25.418 01:18:21 blockdev_nvme -- common/autotest_common.sh@954 -- # kill -0 60157 00:05:25.418 01:18:21 blockdev_nvme -- common/autotest_common.sh@955 -- # uname 00:05:25.418 01:18:21 
blockdev_nvme -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:25.418 01:18:21 blockdev_nvme -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60157 00:05:25.418 killing process with pid 60157 00:05:25.418 01:18:21 blockdev_nvme -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:25.418 01:18:21 blockdev_nvme -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:25.418 01:18:21 blockdev_nvme -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60157' 00:05:25.418 01:18:21 blockdev_nvme -- common/autotest_common.sh@969 -- # kill 60157 00:05:25.418 01:18:21 blockdev_nvme -- common/autotest_common.sh@974 -- # wait 60157 00:05:26.804 01:18:22 blockdev_nvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:05:26.804 01:18:22 blockdev_nvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:05:26.804 01:18:22 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:05:26.804 01:18:22 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:26.804 01:18:22 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:26.804 ************************************ 00:05:26.804 START TEST bdev_hello_world 00:05:26.804 ************************************ 00:05:26.804 01:18:22 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:05:26.804 [2024-09-28 01:18:22.730286] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:05:26.804 [2024-09-28 01:18:22.730545] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60241 ] 00:05:27.065 [2024-09-28 01:18:22.877232] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.326 [2024-09-28 01:18:23.055237] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.897 [2024-09-28 01:18:23.591083] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:05:27.897 [2024-09-28 01:18:23.591127] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:05:27.897 [2024-09-28 01:18:23.591147] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:05:27.897 [2024-09-28 01:18:23.593623] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:05:27.897 [2024-09-28 01:18:23.594445] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:05:27.897 [2024-09-28 01:18:23.594472] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:05:27.897 [2024-09-28 01:18:23.594944] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
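Two flows completed above: setup_nvme_conf pushed the four QEMU controllers to the running target in one load_subsystem_config call, and hello_bdev then ran as a standalone app, loading the same JSON config, opening Nvme0n1, writing a buffer, and reading the string back. Hand-run equivalents, with paths and PCIe addresses taken from the log; the rpc.py flags are the standard one-controller form of the JSON batch used above:

    cd /home/vagrant/spdk_repo/spdk
    # against a running spdk_tgt: attach one controller instead of the JSON batch
    scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0
    # standalone hello example, exactly as the test invoked it
    build/examples/hello_bdev --json test/bdev/bdev.json -b Nvme0n1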
00:05:27.897 00:05:27.897 [2024-09-28 01:18:23.594967] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:05:28.465 00:05:28.465 real 0m1.615s 00:05:28.465 user 0m1.337s 00:05:28.465 sys 0m0.170s 00:05:28.465 ************************************ 00:05:28.465 END TEST bdev_hello_world 00:05:28.465 ************************************ 00:05:28.465 01:18:24 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:28.465 01:18:24 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:05:28.465 01:18:24 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:05:28.465 01:18:24 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:05:28.465 01:18:24 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:28.465 01:18:24 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:28.465 ************************************ 00:05:28.465 START TEST bdev_bounds 00:05:28.465 ************************************ 00:05:28.465 01:18:24 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:05:28.465 Process bdevio pid: 60283 00:05:28.465 01:18:24 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=60283 00:05:28.465 01:18:24 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:05:28.465 01:18:24 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 60283' 00:05:28.465 01:18:24 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 60283 00:05:28.465 01:18:24 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 60283 ']' 00:05:28.465 01:18:24 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:28.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:28.465 01:18:24 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:28.465 01:18:24 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:05:28.465 01:18:24 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:28.465 01:18:24 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:28.465 01:18:24 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:05:28.465 [2024-09-28 01:18:24.393310] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
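bdev_bounds drives the bdevio CUnit binary in wait mode: -w makes it start and sit on its RPC socket after initialization, and tests.py then triggers perform_tests against every registered bdev. A by-hand sketch of the same two steps (assuming the default /var/tmp/spdk.sock socket; the waitforlisten step between them is omitted):

    test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json &
    # once the socket answers:
    test/bdev/bdevio/tests.py perform_tests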
00:05:28.465 [2024-09-28 01:18:24.393846] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60283 ] 00:05:28.723 [2024-09-28 01:18:24.543380] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:28.980 [2024-09-28 01:18:24.686720] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:28.980 [2024-09-28 01:18:24.687017] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.980 [2024-09-28 01:18:24.687040] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:05:29.546 01:18:25 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:29.546 01:18:25 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:05:29.546 01:18:25 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:05:29.546 I/O targets: 00:05:29.546 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:05:29.546 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:05:29.546 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:05:29.546 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:05:29.546 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:05:29.546 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:05:29.546 00:05:29.546 00:05:29.546 CUnit - A unit testing framework for C - Version 2.1-3 00:05:29.546 http://cunit.sourceforge.net/ 00:05:29.546 00:05:29.546 00:05:29.546 Suite: bdevio tests on: Nvme3n1 00:05:29.546 Test: blockdev write read block ...passed 00:05:29.546 Test: blockdev write zeroes read block ...passed 00:05:29.546 Test: blockdev write zeroes read no split ...passed 00:05:29.546 Test: blockdev write zeroes read split ...passed 00:05:29.546 Test: blockdev write zeroes read split partial ...passed 00:05:29.546 Test: blockdev reset ...[2024-09-28 01:18:25.352290] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:05:29.546 [2024-09-28 01:18:25.355102] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:05:29.546 passed 00:05:29.546 Test: blockdev write read 8 blocks ...passed 00:05:29.546 Test: blockdev write read size > 128k ...passed 00:05:29.546 Test: blockdev write read invalid size ...passed 00:05:29.546 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:29.546 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:29.546 Test: blockdev write read max offset ...passed 00:05:29.546 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:29.546 Test: blockdev writev readv 8 blocks ...passed 00:05:29.546 Test: blockdev writev readv 30 x 1block ...passed 00:05:29.546 Test: blockdev writev readv block ...passed 00:05:29.546 Test: blockdev writev readv size > 128k ...passed 00:05:29.546 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:29.546 Test: blockdev comparev and writev ...[2024-09-28 01:18:25.363255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b4c0a000 len:0x1000 00:05:29.546 [2024-09-28 01:18:25.363308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:05:29.546 passed 00:05:29.546 Test: blockdev nvme passthru rw ...passed 00:05:29.546 Test: blockdev nvme passthru vendor specific ...passed 00:05:29.546 Test: blockdev nvme admin passthru ...[2024-09-28 01:18:25.364307] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:05:29.546 [2024-09-28 01:18:25.364344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:05:29.546 passed 00:05:29.546 Test: blockdev copy ...passed 00:05:29.546 Suite: bdevio tests on: Nvme2n3 00:05:29.546 Test: blockdev write read block ...passed 00:05:29.546 Test: blockdev write zeroes read block ...passed 00:05:29.546 Test: blockdev write zeroes read no split ...passed 00:05:29.546 Test: blockdev write zeroes read split ...passed 00:05:29.546 Test: blockdev write zeroes read split partial ...passed 00:05:29.546 Test: blockdev reset ...[2024-09-28 01:18:25.418524] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:05:29.546 [2024-09-28 01:18:25.421571] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:05:29.546 passed 00:05:29.546 Test: blockdev write read 8 blocks ...passed 00:05:29.546 Test: blockdev write read size > 128k ...passed 00:05:29.546 Test: blockdev write read invalid size ...passed 00:05:29.546 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:29.546 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:29.546 Test: blockdev write read max offset ...passed 00:05:29.546 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:29.546 Test: blockdev writev readv 8 blocks ...passed 00:05:29.546 Test: blockdev writev readv 30 x 1block ...passed 00:05:29.546 Test: blockdev writev readv block ...passed 00:05:29.546 Test: blockdev writev readv size > 128k ...passed 00:05:29.546 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:29.546 Test: blockdev comparev and writev ...[2024-09-28 01:18:25.428126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2aaa04000 len:0x1000 00:05:29.546 [2024-09-28 01:18:25.428171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:05:29.546 passed 00:05:29.546 Test: blockdev nvme passthru rw ...passed 00:05:29.546 Test: blockdev nvme passthru vendor specific ...[2024-09-28 01:18:25.428771] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:05:29.546 [2024-09-28 01:18:25.428795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:05:29.546 passed 00:05:29.546 Test: blockdev nvme admin passthru ...passed 00:05:29.546 Test: blockdev copy ...passed 00:05:29.546 Suite: bdevio tests on: Nvme2n2 00:05:29.546 Test: blockdev write read block ...passed 00:05:29.546 Test: blockdev write zeroes read block ...passed 00:05:29.546 Test: blockdev write zeroes read no split ...passed 00:05:29.546 Test: blockdev write zeroes read split ...passed 00:05:29.804 Test: blockdev write zeroes read split partial ...passed 00:05:29.804 Test: blockdev reset ...[2024-09-28 01:18:25.486703] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:05:29.804 [2024-09-28 01:18:25.489480] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
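The reset test in each suite is the disconnect/reconnect pair logged above: the controller behind the namespace's PCIe address is detached and brought back before I/O continues. The same operation can be requested over RPC against a running target; a sketch assuming the bdev_nvme_reset_controller method of current SPDK:

    # reset the controller backing Nvme2n2 (0000:00:12.0 in the log above)
    scripts/rpc.py bdev_nvme_reset_controller Nvme2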
00:05:29.804 passed 00:05:29.804 Test: blockdev write read 8 blocks ...passed 00:05:29.804 Test: blockdev write read size > 128k ...passed 00:05:29.804 Test: blockdev write read invalid size ...passed 00:05:29.804 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:29.804 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:29.804 Test: blockdev write read max offset ...passed 00:05:29.804 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:29.804 Test: blockdev writev readv 8 blocks ...passed 00:05:29.804 Test: blockdev writev readv 30 x 1block ...passed 00:05:29.804 Test: blockdev writev readv block ...passed 00:05:29.804 Test: blockdev writev readv size > 128k ...passed 00:05:29.804 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:29.804 Test: blockdev comparev and writev ...[2024-09-28 01:18:25.496311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c263a000 len:0x1000 00:05:29.804 [2024-09-28 01:18:25.496444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:05:29.804 passed 00:05:29.804 Test: blockdev nvme passthru rw ...passed 00:05:29.804 Test: blockdev nvme passthru vendor specific ...[2024-09-28 01:18:25.497096] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:05:29.804 [2024-09-28 01:18:25.497185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0passed 00:05:29.804 Test: blockdev nvme admin passthru ... sqhd:001c p:1 m:0 dnr:1 00:05:29.804 passed 00:05:29.804 Test: blockdev copy ...passed 00:05:29.804 Suite: bdevio tests on: Nvme2n1 00:05:29.804 Test: blockdev write read block ...passed 00:05:29.804 Test: blockdev write zeroes read block ...passed 00:05:29.804 Test: blockdev write zeroes read no split ...passed 00:05:29.804 Test: blockdev write zeroes read split ...passed 00:05:29.804 Test: blockdev write zeroes read split partial ...passed 00:05:29.804 Test: blockdev reset ...[2024-09-28 01:18:25.555064] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:05:29.804 [2024-09-28 01:18:25.557756] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:05:29.804 passed 00:05:29.804 Test: blockdev write read 8 blocks ...passed 00:05:29.804 Test: blockdev write read size > 128k ...passed 00:05:29.804 Test: blockdev write read invalid size ...passed 00:05:29.804 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:29.804 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:29.804 Test: blockdev write read max offset ...passed 00:05:29.804 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:29.804 Test: blockdev writev readv 8 blocks ...passed 00:05:29.804 Test: blockdev writev readv 30 x 1block ...passed 00:05:29.804 Test: blockdev writev readv block ...passed 00:05:29.804 Test: blockdev writev readv size > 128k ...passed 00:05:29.804 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:29.804 Test: blockdev comparev and writev ...[2024-09-28 01:18:25.564147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c2634000 len:0x1000 00:05:29.804 [2024-09-28 01:18:25.564183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:05:29.804 passed 00:05:29.804 Test: blockdev nvme passthru rw ...passed 00:05:29.804 Test: blockdev nvme passthru vendor specific ...passed 00:05:29.804 Test: blockdev nvme admin passthru ...[2024-09-28 01:18:25.564788] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:05:29.804 [2024-09-28 01:18:25.564815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:05:29.804 passed 00:05:29.804 Test: blockdev copy ...passed 00:05:29.804 Suite: bdevio tests on: Nvme1n1 00:05:29.804 Test: blockdev write read block ...passed 00:05:29.804 Test: blockdev write zeroes read block ...passed 00:05:29.804 Test: blockdev write zeroes read no split ...passed 00:05:29.804 Test: blockdev write zeroes read split ...passed 00:05:29.804 Test: blockdev write zeroes read split partial ...passed 00:05:29.804 Test: blockdev reset ...[2024-09-28 01:18:25.606393] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:05:29.804 [2024-09-28 01:18:25.608846] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:05:29.804 passed 00:05:29.804 Test: blockdev write read 8 blocks ...passed 00:05:29.804 Test: blockdev write read size > 128k ...passed 00:05:29.804 Test: blockdev write read invalid size ...passed 00:05:29.804 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:29.804 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:29.804 Test: blockdev write read max offset ...passed 00:05:29.804 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:29.804 Test: blockdev writev readv 8 blocks ...passed 00:05:29.804 Test: blockdev writev readv 30 x 1block ...passed 00:05:29.804 Test: blockdev writev readv block ...passed 00:05:29.804 Test: blockdev writev readv size > 128k ...passed 00:05:29.804 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:29.804 Test: blockdev comparev and writev ...[2024-09-28 01:18:25.615647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c2630000 len:0x1000 00:05:29.804 [2024-09-28 01:18:25.615761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:05:29.805 passed 00:05:29.805 Test: blockdev nvme passthru rw ...passed 00:05:29.805 Test: blockdev nvme passthru vendor specific ...[2024-09-28 01:18:25.616469] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:05:29.805 [2024-09-28 01:18:25.616580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0passed sqhd:001c p:1 m:0 dnr:1 00:05:29.805 00:05:29.805 Test: blockdev nvme admin passthru ...passed 00:05:29.805 Test: blockdev copy ...passed 00:05:29.805 Suite: bdevio tests on: Nvme0n1 00:05:29.805 Test: blockdev write read block ...passed 00:05:29.805 Test: blockdev write zeroes read block ...passed 00:05:29.805 Test: blockdev write zeroes read no split ...passed 00:05:29.805 Test: blockdev write zeroes read split ...passed 00:05:29.805 Test: blockdev write zeroes read split partial ...passed 00:05:29.805 Test: blockdev reset ...[2024-09-28 01:18:25.673593] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:05:29.805 [2024-09-28 01:18:25.676118] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:05:29.805 passed 00:05:29.805 Test: blockdev write read 8 blocks ...passed 00:05:29.805 Test: blockdev write read size > 128k ...passed 00:05:29.805 Test: blockdev write read invalid size ...passed 00:05:29.805 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:29.805 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:29.805 Test: blockdev write read max offset ...passed 00:05:29.805 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:29.805 Test: blockdev writev readv 8 blocks ...passed 00:05:29.805 Test: blockdev writev readv 30 x 1block ...passed 00:05:29.805 Test: blockdev writev readv block ...passed 00:05:29.805 Test: blockdev writev readv size > 128k ...passed 00:05:29.805 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:29.805 Test: blockdev comparev and writev ...[2024-09-28 01:18:25.681768] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:05:29.805 separate metadata which is not supported yet. 
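The skip above matches the earlier bdev_get_bdevs dump, where Nvme0n1 alone reports "md_size": 64 with "md_interleave": false, i.e. a separate-metadata format that comparev_and_writev cannot drive yet. A sketch of listing such bdevs with the same RPC (the jq filter is illustrative; md_size may be absent on bdevs without metadata, hence the // 0 default):

    scripts/rpc.py bdev_get_bdevs \
        | jq -r '.[] | select((.md_size // 0) > 0 and .md_interleave == false) | .name'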
00:05:29.805 passed 00:05:29.805 Test: blockdev nvme passthru rw ...passed 00:05:29.805 Test: blockdev nvme passthru vendor specific ...[2024-09-28 01:18:25.682207] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:05:29.805 [2024-09-28 01:18:25.682238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:05:29.805 passed 00:05:29.805 Test: blockdev nvme admin passthru ...passed 00:05:29.805 Test: blockdev copy ...passed 00:05:29.805 00:05:29.805 Run Summary: Type Total Ran Passed Failed Inactive 00:05:29.805 suites 6 6 n/a 0 0 00:05:29.805 tests 138 138 138 0 0 00:05:29.805 asserts 893 893 893 0 n/a 00:05:29.805 00:05:29.805 Elapsed time = 0.990 seconds 00:05:29.805 0 00:05:29.805 01:18:25 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 60283 00:05:29.805 01:18:25 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 60283 ']' 00:05:29.805 01:18:25 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 60283 00:05:29.805 01:18:25 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:05:29.805 01:18:25 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:29.805 01:18:25 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60283 00:05:29.805 01:18:25 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:29.805 01:18:25 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:29.805 01:18:25 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60283' 00:05:29.805 killing process with pid 60283 00:05:29.805 01:18:25 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@969 -- # kill 60283 00:05:29.805 01:18:25 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@974 -- # wait 60283 00:05:30.372 01:18:26 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:05:30.372 00:05:30.372 real 0m1.932s 00:05:30.372 user 0m4.771s 00:05:30.372 sys 0m0.247s 00:05:30.372 01:18:26 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:30.372 01:18:26 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:05:30.372 ************************************ 00:05:30.372 END TEST bdev_bounds 00:05:30.372 ************************************ 00:05:30.652 01:18:26 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:05:30.652 01:18:26 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:05:30.652 01:18:26 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:30.652 01:18:26 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:30.652 ************************************ 00:05:30.652 START TEST bdev_nbd 00:05:30.652 ************************************ 00:05:30.652 01:18:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:05:30.652 01:18:26 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:05:30.652 01:18:26 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:05:30.652 01:18:26 
blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:30.652 01:18:26 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:05:30.652 01:18:26 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:05:30.652 01:18:26 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:05:30.652 01:18:26 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:05:30.652 01:18:26 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:05:30.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:30.652 01:18:26 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:05:30.652 01:18:26 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:05:30.652 01:18:26 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:05:30.652 01:18:26 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:05:30.652 01:18:26 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:05:30.652 01:18:26 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:05:30.652 01:18:26 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:05:30.652 01:18:26 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=60337 00:05:30.652 01:18:26 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:05:30.652 01:18:26 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 60337 /var/tmp/spdk-nbd.sock 00:05:30.652 01:18:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 60337 ']' 00:05:30.652 01:18:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:30.652 01:18:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:30.652 01:18:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:30.652 01:18:26 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:05:30.652 01:18:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:30.652 01:18:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:05:30.652 [2024-09-28 01:18:26.393529] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
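The nbd test maps each bdev to a kernel /dev/nbdX node through a bdev_svc app listening on /var/tmp/spdk-nbd.sock, as set up above. A sketch of exporting and tearing down a single bdev the same way, assuming the nbd kernel module is loaded (the harness checks /sys/module/nbd for exactly that):

    sudo modprobe nbd    # only needed if /sys/module/nbd is absent
    scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0
    # /dev/nbd0 now behaves like any block device (dd, mkfs, ...)
    scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0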
00:05:30.652 [2024-09-28 01:18:26.393825] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:30.652 [2024-09-28 01:18:26.548782] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.914 [2024-09-28 01:18:26.731682] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.485 01:18:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:31.485 01:18:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:05:31.485 01:18:27 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:05:31.486 01:18:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:31.486 01:18:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:05:31.486 01:18:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:05:31.486 01:18:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:05:31.486 01:18:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:31.486 01:18:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:05:31.486 01:18:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:05:31.486 01:18:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:05:31.486 01:18:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:05:31.486 01:18:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:05:31.486 01:18:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:05:31.486 01:18:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:05:31.746 01:18:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:05:31.746 01:18:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:05:31.746 01:18:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:05:31.746 01:18:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:31.746 01:18:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:05:31.746 01:18:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:31.746 01:18:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:31.746 01:18:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:31.746 01:18:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:05:31.746 01:18:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:31.746 01:18:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:31.746 01:18:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:31.746 1+0 records in 
00:05:31.746 1+0 records out 00:05:31.746 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00113666 s, 3.6 MB/s 00:05:31.746 01:18:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:31.746 01:18:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:05:31.746 01:18:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:31.746 01:18:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:31.746 01:18:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:05:31.746 01:18:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:05:31.746 01:18:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:05:31.746 01:18:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:05:32.007 01:18:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:05:32.007 01:18:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:05:32.007 01:18:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:05:32.007 01:18:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:32.007 01:18:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:05:32.007 01:18:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:32.007 01:18:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:32.007 01:18:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:32.007 01:18:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:05:32.007 01:18:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:32.007 01:18:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:32.007 01:18:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:32.007 1+0 records in 00:05:32.007 1+0 records out 00:05:32.007 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000950114 s, 4.3 MB/s 00:05:32.007 01:18:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:32.007 01:18:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:05:32.007 01:18:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:32.007 01:18:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:32.007 01:18:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:05:32.007 01:18:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:05:32.007 01:18:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:05:32.007 01:18:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:05:32.267 01:18:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:05:32.267 01:18:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:05:32.267 01:18:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # 
waitfornbd nbd2 00:05:32.267 01:18:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:05:32.267 01:18:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:05:32.267 01:18:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:32.267 01:18:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:32.267 01:18:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:05:32.267 01:18:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:05:32.267 01:18:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:32.267 01:18:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:32.267 01:18:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:32.267 1+0 records in 00:05:32.267 1+0 records out 00:05:32.267 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00127041 s, 3.2 MB/s 00:05:32.267 01:18:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:32.267 01:18:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:05:32.267 01:18:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:32.267 01:18:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:32.267 01:18:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:05:32.267 01:18:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:05:32.267 01:18:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:05:32.267 01:18:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:05:32.527 01:18:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:05:32.527 01:18:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:05:32.527 01:18:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:05:32.528 01:18:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:05:32.528 01:18:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:05:32.528 01:18:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:32.528 01:18:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:32.528 01:18:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:05:32.528 01:18:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:05:32.528 01:18:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:32.528 01:18:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:32.528 01:18:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:32.528 1+0 records in 00:05:32.528 1+0 records out 00:05:32.528 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00139632 s, 2.9 MB/s 00:05:32.528 01:18:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:32.528 01:18:28 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:05:32.528 01:18:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:32.528 01:18:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:32.528 01:18:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:05:32.528 01:18:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:05:32.528 01:18:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:05:32.528 01:18:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:05:32.789 01:18:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:05:32.789 01:18:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:05:32.789 01:18:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:05:32.789 01:18:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:05:32.789 01:18:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:05:32.789 01:18:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:32.789 01:18:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:32.789 01:18:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:05:32.789 01:18:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:05:32.789 01:18:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:32.789 01:18:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:32.789 01:18:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:32.789 1+0 records in 00:05:32.789 1+0 records out 00:05:32.789 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000904632 s, 4.5 MB/s 00:05:32.789 01:18:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:32.789 01:18:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:05:32.789 01:18:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:32.789 01:18:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:32.789 01:18:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:05:32.789 01:18:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:05:32.789 01:18:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:05:32.789 01:18:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:05:32.789 01:18:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:05:32.789 01:18:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:05:32.789 01:18:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:05:32.789 01:18:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:05:32.789 01:18:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:05:32.789 01:18:28 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:32.789 01:18:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:32.789 01:18:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:05:32.789 01:18:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:05:32.789 01:18:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:32.789 01:18:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:32.790 01:18:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:33.049 1+0 records in 00:05:33.049 1+0 records out 00:05:33.049 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00315815 s, 1.3 MB/s 00:05:33.049 01:18:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:33.049 01:18:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:05:33.049 01:18:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:33.049 01:18:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:33.049 01:18:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:05:33.049 01:18:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:05:33.049 01:18:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:05:33.049 01:18:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:33.049 01:18:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:05:33.049 { 00:05:33.049 "nbd_device": "/dev/nbd0", 00:05:33.050 "bdev_name": "Nvme0n1" 00:05:33.050 }, 00:05:33.050 { 00:05:33.050 "nbd_device": "/dev/nbd1", 00:05:33.050 "bdev_name": "Nvme1n1" 00:05:33.050 }, 00:05:33.050 { 00:05:33.050 "nbd_device": "/dev/nbd2", 00:05:33.050 "bdev_name": "Nvme2n1" 00:05:33.050 }, 00:05:33.050 { 00:05:33.050 "nbd_device": "/dev/nbd3", 00:05:33.050 "bdev_name": "Nvme2n2" 00:05:33.050 }, 00:05:33.050 { 00:05:33.050 "nbd_device": "/dev/nbd4", 00:05:33.050 "bdev_name": "Nvme2n3" 00:05:33.050 }, 00:05:33.050 { 00:05:33.050 "nbd_device": "/dev/nbd5", 00:05:33.050 "bdev_name": "Nvme3n1" 00:05:33.050 } 00:05:33.050 ]' 00:05:33.050 01:18:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:05:33.050 01:18:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:05:33.050 { 00:05:33.050 "nbd_device": "/dev/nbd0", 00:05:33.050 "bdev_name": "Nvme0n1" 00:05:33.050 }, 00:05:33.050 { 00:05:33.050 "nbd_device": "/dev/nbd1", 00:05:33.050 "bdev_name": "Nvme1n1" 00:05:33.050 }, 00:05:33.050 { 00:05:33.050 "nbd_device": "/dev/nbd2", 00:05:33.050 "bdev_name": "Nvme2n1" 00:05:33.050 }, 00:05:33.050 { 00:05:33.050 "nbd_device": "/dev/nbd3", 00:05:33.050 "bdev_name": "Nvme2n2" 00:05:33.050 }, 00:05:33.050 { 00:05:33.050 "nbd_device": "/dev/nbd4", 00:05:33.050 "bdev_name": "Nvme2n3" 00:05:33.050 }, 00:05:33.050 { 00:05:33.050 "nbd_device": "/dev/nbd5", 00:05:33.050 "bdev_name": "Nvme3n1" 00:05:33.050 } 00:05:33.050 ]' 00:05:33.050 01:18:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:05:33.050 01:18:28 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:05:33.050 01:18:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:33.050 01:18:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:05:33.050 01:18:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:33.050 01:18:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:05:33.050 01:18:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:33.050 01:18:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:33.311 01:18:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:33.311 01:18:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:33.311 01:18:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:33.311 01:18:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:33.311 01:18:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:33.311 01:18:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:33.311 01:18:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:33.311 01:18:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:33.311 01:18:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:33.311 01:18:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:33.572 01:18:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:33.572 01:18:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:33.572 01:18:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:33.572 01:18:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:33.572 01:18:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:33.572 01:18:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:33.572 01:18:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:33.572 01:18:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:33.572 01:18:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:33.572 01:18:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:05:33.833 01:18:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:05:33.833 01:18:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:05:33.833 01:18:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:05:33.833 01:18:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:33.833 01:18:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:33.833 01:18:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:05:33.833 01:18:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:33.833 01:18:29 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:05:33.833 01:18:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:33.833 01:18:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:05:34.095 01:18:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:05:34.095 01:18:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:05:34.095 01:18:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:05:34.095 01:18:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:34.095 01:18:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:34.095 01:18:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:05:34.095 01:18:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:34.095 01:18:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:34.095 01:18:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:34.095 01:18:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:05:34.095 01:18:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:05:34.095 01:18:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:05:34.095 01:18:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:05:34.095 01:18:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:34.095 01:18:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:34.095 01:18:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:05:34.095 01:18:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:34.095 01:18:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:34.095 01:18:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:34.095 01:18:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:05:34.355 01:18:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:05:34.355 01:18:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:05:34.355 01:18:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:05:34.355 01:18:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:34.355 01:18:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:34.355 01:18:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:05:34.355 01:18:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:34.355 01:18:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:34.355 01:18:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:34.355 01:18:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:34.355 01:18:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:34.616 01:18:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:34.616 01:18:30 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:34.616 01:18:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:34.616 01:18:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:34.616 01:18:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:34.616 01:18:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:05:34.616 01:18:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:05:34.616 01:18:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:05:34.616 01:18:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:05:34.616 01:18:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:05:34.616 01:18:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:05:34.616 01:18:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:05:34.616 01:18:30 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:05:34.616 01:18:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:34.616 01:18:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:05:34.616 01:18:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:34.616 01:18:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:05:34.616 01:18:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:34.616 01:18:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:05:34.616 01:18:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:34.616 01:18:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:05:34.616 01:18:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:34.616 01:18:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:05:34.616 01:18:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:34.616 01:18:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:05:34.616 01:18:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:34.616 01:18:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:05:34.616 01:18:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:05:34.876 /dev/nbd0 00:05:34.876 01:18:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:34.876 01:18:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:34.876 01:18:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:34.876 01:18:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:05:34.876 01:18:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:34.876 
01:18:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:34.876 01:18:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:34.876 01:18:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:05:34.876 01:18:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:34.876 01:18:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:34.876 01:18:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:34.876 1+0 records in 00:05:34.876 1+0 records out 00:05:34.876 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00112182 s, 3.7 MB/s 00:05:34.876 01:18:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:34.876 01:18:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:05:34.876 01:18:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:34.876 01:18:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:34.876 01:18:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:05:34.876 01:18:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:34.876 01:18:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:05:34.876 01:18:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:05:35.137 /dev/nbd1 00:05:35.137 01:18:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:35.137 01:18:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:35.137 01:18:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:35.137 01:18:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:05:35.137 01:18:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:35.137 01:18:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:35.137 01:18:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:35.137 01:18:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:05:35.137 01:18:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:35.137 01:18:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:35.137 01:18:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:35.137 1+0 records in 00:05:35.137 1+0 records out 00:05:35.137 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000762573 s, 5.4 MB/s 00:05:35.137 01:18:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:35.137 01:18:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:05:35.137 01:18:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:35.138 01:18:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:35.138 01:18:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 
-- # return 0 00:05:35.138 01:18:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:35.138 01:18:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:05:35.138 01:18:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:05:35.399 /dev/nbd10 00:05:35.399 01:18:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:05:35.399 01:18:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:05:35.399 01:18:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:05:35.399 01:18:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:05:35.399 01:18:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:35.399 01:18:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:35.399 01:18:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:05:35.399 01:18:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:05:35.399 01:18:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:35.399 01:18:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:35.399 01:18:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:35.399 1+0 records in 00:05:35.399 1+0 records out 00:05:35.399 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00038683 s, 10.6 MB/s 00:05:35.399 01:18:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:35.399 01:18:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:05:35.399 01:18:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:35.399 01:18:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:35.399 01:18:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:05:35.399 01:18:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:35.399 01:18:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:05:35.399 01:18:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:05:35.660 /dev/nbd11 00:05:35.660 01:18:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:05:35.660 01:18:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:05:35.660 01:18:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:05:35.660 01:18:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:05:35.660 01:18:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:35.660 01:18:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:35.660 01:18:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:05:35.660 01:18:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:05:35.660 01:18:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:35.660 01:18:31 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:35.660 01:18:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:35.660 1+0 records in 00:05:35.660 1+0 records out 00:05:35.660 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00107788 s, 3.8 MB/s 00:05:35.660 01:18:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:35.660 01:18:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:05:35.660 01:18:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:35.660 01:18:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:35.660 01:18:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:05:35.660 01:18:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:35.660 01:18:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:05:35.660 01:18:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:05:35.920 /dev/nbd12 00:05:35.920 01:18:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:05:35.920 01:18:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:05:35.920 01:18:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:05:35.920 01:18:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:05:35.920 01:18:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:35.920 01:18:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:35.920 01:18:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 00:05:35.920 01:18:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:05:35.920 01:18:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:35.920 01:18:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:35.921 01:18:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:35.921 1+0 records in 00:05:35.921 1+0 records out 00:05:35.921 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00124014 s, 3.3 MB/s 00:05:35.921 01:18:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:35.921 01:18:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:05:35.921 01:18:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:35.921 01:18:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:35.921 01:18:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:05:35.921 01:18:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:35.921 01:18:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:05:35.921 01:18:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:05:35.921 /dev/nbd13 00:05:36.181 01:18:31 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:05:36.181 01:18:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:05:36.181 01:18:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:05:36.181 01:18:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:05:36.181 01:18:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:36.181 01:18:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:36.181 01:18:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:05:36.181 01:18:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:05:36.181 01:18:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:36.181 01:18:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:36.181 01:18:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:36.181 1+0 records in 00:05:36.181 1+0 records out 00:05:36.181 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00100416 s, 4.1 MB/s 00:05:36.181 01:18:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:36.181 01:18:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:05:36.181 01:18:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:36.181 01:18:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:36.181 01:18:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:05:36.181 01:18:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:36.181 01:18:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:05:36.181 01:18:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:36.181 01:18:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.181 01:18:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:36.181 01:18:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:36.181 { 00:05:36.181 "nbd_device": "/dev/nbd0", 00:05:36.181 "bdev_name": "Nvme0n1" 00:05:36.181 }, 00:05:36.181 { 00:05:36.181 "nbd_device": "/dev/nbd1", 00:05:36.181 "bdev_name": "Nvme1n1" 00:05:36.181 }, 00:05:36.181 { 00:05:36.181 "nbd_device": "/dev/nbd10", 00:05:36.181 "bdev_name": "Nvme2n1" 00:05:36.181 }, 00:05:36.181 { 00:05:36.181 "nbd_device": "/dev/nbd11", 00:05:36.181 "bdev_name": "Nvme2n2" 00:05:36.181 }, 00:05:36.181 { 00:05:36.181 "nbd_device": "/dev/nbd12", 00:05:36.181 "bdev_name": "Nvme2n3" 00:05:36.181 }, 00:05:36.181 { 00:05:36.181 "nbd_device": "/dev/nbd13", 00:05:36.181 "bdev_name": "Nvme3n1" 00:05:36.181 } 00:05:36.181 ]' 00:05:36.181 01:18:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:36.181 { 00:05:36.181 "nbd_device": "/dev/nbd0", 00:05:36.181 "bdev_name": "Nvme0n1" 00:05:36.181 }, 00:05:36.181 { 00:05:36.181 "nbd_device": "/dev/nbd1", 00:05:36.181 "bdev_name": "Nvme1n1" 00:05:36.181 }, 00:05:36.181 { 00:05:36.181 "nbd_device": "/dev/nbd10", 00:05:36.181 "bdev_name": "Nvme2n1" 00:05:36.181 }, 00:05:36.181 
{ 00:05:36.181 "nbd_device": "/dev/nbd11", 00:05:36.181 "bdev_name": "Nvme2n2" 00:05:36.181 }, 00:05:36.181 { 00:05:36.181 "nbd_device": "/dev/nbd12", 00:05:36.181 "bdev_name": "Nvme2n3" 00:05:36.181 }, 00:05:36.181 { 00:05:36.181 "nbd_device": "/dev/nbd13", 00:05:36.181 "bdev_name": "Nvme3n1" 00:05:36.181 } 00:05:36.181 ]' 00:05:36.181 01:18:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:36.442 01:18:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:36.442 /dev/nbd1 00:05:36.442 /dev/nbd10 00:05:36.442 /dev/nbd11 00:05:36.442 /dev/nbd12 00:05:36.442 /dev/nbd13' 00:05:36.442 01:18:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:36.442 /dev/nbd1 00:05:36.442 /dev/nbd10 00:05:36.442 /dev/nbd11 00:05:36.442 /dev/nbd12 00:05:36.442 /dev/nbd13' 00:05:36.442 01:18:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:36.442 01:18:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:05:36.442 01:18:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:05:36.442 01:18:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:05:36.442 01:18:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:05:36.442 01:18:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:05:36.442 01:18:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:05:36.442 01:18:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:36.442 01:18:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:36.442 01:18:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:05:36.442 01:18:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:36.442 01:18:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:05:36.442 256+0 records in 00:05:36.442 256+0 records out 00:05:36.442 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00472416 s, 222 MB/s 00:05:36.442 01:18:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:36.442 01:18:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:36.442 256+0 records in 00:05:36.442 256+0 records out 00:05:36.442 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.186415 s, 5.6 MB/s 00:05:36.442 01:18:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:36.442 01:18:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:36.703 256+0 records in 00:05:36.703 256+0 records out 00:05:36.703 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.176831 s, 5.9 MB/s 00:05:36.703 01:18:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:36.703 01:18:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:05:36.965 256+0 records in 00:05:36.965 256+0 records out 00:05:36.965 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.230952 s, 4.5 MB/s 00:05:36.965 01:18:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:36.965 01:18:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:05:37.225 256+0 records in 00:05:37.225 256+0 records out 00:05:37.225 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.233153 s, 4.5 MB/s 00:05:37.225 01:18:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:37.225 01:18:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:05:37.486 256+0 records in 00:05:37.486 256+0 records out 00:05:37.486 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.22729 s, 4.6 MB/s 00:05:37.486 01:18:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:37.486 01:18:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:05:37.747 256+0 records in 00:05:37.747 256+0 records out 00:05:37.747 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.233444 s, 4.5 MB/s 00:05:37.747 01:18:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:05:37.747 01:18:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:05:37.747 01:18:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:37.747 01:18:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:37.748 01:18:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:05:37.748 01:18:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:37.748 01:18:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:37.748 01:18:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:37.748 01:18:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:05:37.748 01:18:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:37.748 01:18:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:05:37.748 01:18:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:37.748 01:18:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:05:37.748 01:18:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:37.748 01:18:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:05:37.748 01:18:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:37.748 01:18:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:05:37.748 01:18:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:37.748 01:18:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # 
cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:05:37.748 01:18:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:05:37.748 01:18:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:05:37.748 01:18:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:37.748 01:18:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:05:37.748 01:18:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:37.748 01:18:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:05:37.748 01:18:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:37.748 01:18:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:38.008 01:18:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:38.008 01:18:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:38.008 01:18:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:38.008 01:18:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:38.008 01:18:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:38.008 01:18:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:38.008 01:18:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:38.008 01:18:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:38.008 01:18:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:38.008 01:18:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:38.008 01:18:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:38.008 01:18:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:38.008 01:18:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:38.008 01:18:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:38.008 01:18:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:38.008 01:18:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:38.008 01:18:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:38.008 01:18:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:38.008 01:18:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:38.008 01:18:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:05:38.269 01:18:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:05:38.269 01:18:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:05:38.269 01:18:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:05:38.269 01:18:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:38.269 01:18:34 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:38.269 01:18:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:05:38.269 01:18:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:38.269 01:18:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:38.269 01:18:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:38.269 01:18:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:05:38.527 01:18:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:05:38.527 01:18:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:05:38.527 01:18:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:05:38.527 01:18:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:38.527 01:18:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:38.527 01:18:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:05:38.527 01:18:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:38.527 01:18:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:38.527 01:18:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:38.527 01:18:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:05:38.785 01:18:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:05:38.785 01:18:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:05:38.785 01:18:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:05:38.785 01:18:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:38.785 01:18:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:38.785 01:18:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:05:38.785 01:18:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:38.785 01:18:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:38.785 01:18:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:38.785 01:18:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:05:39.052 01:18:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:05:39.052 01:18:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:05:39.052 01:18:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:05:39.052 01:18:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:39.052 01:18:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:39.052 01:18:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:05:39.052 01:18:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:39.052 01:18:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:39.052 01:18:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:39.052 01:18:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:39.052 01:18:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:39.052 01:18:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:39.052 01:18:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:39.052 01:18:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:39.052 01:18:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:39.052 01:18:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:39.052 01:18:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:05:39.052 01:18:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:05:39.052 01:18:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:05:39.052 01:18:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:05:39.052 01:18:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:05:39.052 01:18:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:39.052 01:18:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:05:39.052 01:18:34 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:05:39.052 01:18:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:39.052 01:18:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:05:39.052 01:18:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:05:39.309 malloc_lvol_verify 00:05:39.309 01:18:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:05:39.568 58127b2e-73cd-4e7e-955d-6dba3507dbaf 00:05:39.568 01:18:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:05:39.826 a5737f48-8202-4cfe-a277-b735a3248fc3 00:05:39.826 01:18:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:05:40.090 /dev/nbd0 00:05:40.090 01:18:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:05:40.090 01:18:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:05:40.090 01:18:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:05:40.090 01:18:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:05:40.091 01:18:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:05:40.091 mke2fs 1.47.0 (5-Feb-2023) 00:05:40.091 Discarding device blocks: 0/4096 done 00:05:40.091 Creating filesystem with 4096 1k blocks and 1024 inodes 00:05:40.091 00:05:40.091 Allocating group tables: 0/1 done 00:05:40.091 Writing inode tables: 0/1 done 00:05:40.091 Creating journal (1024 blocks): done 00:05:40.091 Writing superblocks and filesystem accounting information: 0/1 done 00:05:40.091 00:05:40.091 01:18:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:05:40.091 01:18:35 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.091 01:18:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:05:40.091 01:18:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:40.091 01:18:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:05:40.091 01:18:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:40.091 01:18:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:40.091 01:18:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:40.091 01:18:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:40.091 01:18:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:40.091 01:18:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:40.091 01:18:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:40.091 01:18:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:40.091 01:18:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:40.091 01:18:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:40.091 01:18:36 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 60337 00:05:40.091 01:18:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 60337 ']' 00:05:40.091 01:18:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 60337 00:05:40.091 01:18:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:05:40.091 01:18:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:40.091 01:18:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60337 00:05:40.351 01:18:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:40.351 01:18:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:40.351 killing process with pid 60337 00:05:40.351 01:18:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60337' 00:05:40.351 01:18:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@969 -- # kill 60337 00:05:40.351 01:18:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@974 -- # wait 60337 00:05:40.918 01:18:36 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:05:40.918 00:05:40.918 real 0m10.396s 00:05:40.918 user 0m14.201s 00:05:40.918 sys 0m3.348s 00:05:40.918 ************************************ 00:05:40.918 END TEST bdev_nbd 00:05:40.918 ************************************ 00:05:40.918 01:18:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:40.918 01:18:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:05:40.918 01:18:36 blockdev_nvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:05:40.918 01:18:36 blockdev_nvme -- bdev/blockdev.sh@763 -- # '[' nvme = nvme ']' 00:05:40.918 skipping fio tests on NVMe due to multi-ns failures. 00:05:40.918 01:18:36 blockdev_nvme -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
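[Annotation] The bdev_nbd test that ends above exercises SPDK's NBD data path end to end: each bdev is exported as /dev/nbdN, a single 4 KiB direct-I/O read proves the device is live, then 1 MiB of random data is written through every device and compared back before the disks are stopped over the RPC socket. A minimal standalone sketch of that write/verify cycle, assuming an SPDK target is already serving /var/tmp/spdk-nbd.sock with a bdev exported as /dev/nbd0 (the device name and repo paths follow this log; adjust for your setup):

    #!/usr/bin/env bash
    # Sketch of the nbd_dd_data_verify flow traced above (not the script itself).
    set -e
    tmp=$(mktemp)
    dd if=/dev/urandom of="$tmp" bs=4096 count=256            # 1 MiB of random data
    dd if="$tmp" of=/dev/nbd0 bs=4096 count=256 oflag=direct  # write through the NBD device
    cmp -b -n 1M "$tmp" /dev/nbd0                             # byte-for-byte read-back check
    rm -f "$tmp"
    # detach the device the same way the trace does
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
    echo 'nbd0 verify OK'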
00:05:40.918 01:18:36 blockdev_nvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:05:40.918 01:18:36 blockdev_nvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:05:40.918 01:18:36 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:05:40.918 01:18:36 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:40.918 01:18:36 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:40.918 ************************************ 00:05:40.918 START TEST bdev_verify 00:05:40.918 ************************************ 00:05:40.918 01:18:36 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:05:41.177 [2024-09-28 01:18:36.850585] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:05:41.177 [2024-09-28 01:18:36.850701] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60715 ] 00:05:41.177 [2024-09-28 01:18:36.998606] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:41.435 [2024-09-28 01:18:37.144482] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:41.435 [2024-09-28 01:18:37.144582] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.006 Running I/O for 5 seconds... 00:05:47.148 20096.00 IOPS, 78.50 MiB/s 20128.00 IOPS, 78.62 MiB/s 21269.33 IOPS, 83.08 MiB/s 20848.00 IOPS, 81.44 MiB/s 20928.00 IOPS, 81.75 MiB/s 00:05:47.148 Latency(us) 00:05:47.148 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:05:47.148 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:05:47.148 Verification LBA range: start 0x0 length 0xbd0bd 00:05:47.148 Nvme0n1 : 5.05 1699.77 6.64 0.00 0.00 75047.75 11947.72 81062.99 00:05:47.148 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:05:47.148 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:05:47.148 Nvme0n1 : 5.07 1766.60 6.90 0.00 0.00 72262.00 10536.17 79046.50 00:05:47.148 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:05:47.148 Verification LBA range: start 0x0 length 0xa0000 00:05:47.148 Nvme1n1 : 5.07 1704.05 6.66 0.00 0.00 74583.68 8822.15 75820.11 00:05:47.149 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:05:47.149 Verification LBA range: start 0xa0000 length 0xa0000 00:05:47.149 Nvme1n1 : 5.08 1765.04 6.89 0.00 0.00 72193.52 11796.48 69770.63 00:05:47.149 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:05:47.149 Verification LBA range: start 0x0 length 0x80000 00:05:47.149 Nvme2n1 : 5.07 1703.53 6.65 0.00 0.00 74449.96 6956.90 73803.62 00:05:47.149 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:05:47.149 Verification LBA range: start 0x80000 length 0x80000 00:05:47.149 Nvme2n1 : 5.08 1764.49 6.89 0.00 0.00 71981.35 13308.85 63721.16 00:05:47.149 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:05:47.149 Verification LBA range: start 0x0 length 0x80000 00:05:47.149 
Nvme2n2 : 5.09 1710.44 6.68 0.00 0.00 74089.22 9931.22 70577.23 00:05:47.149 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:05:47.149 Verification LBA range: start 0x80000 length 0x80000 00:05:47.149 Nvme2n2 : 5.08 1763.32 6.89 0.00 0.00 71871.36 14216.27 66140.95 00:05:47.149 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:05:47.149 Verification LBA range: start 0x0 length 0x80000 00:05:47.149 Nvme2n3 : 5.09 1710.00 6.68 0.00 0.00 73953.79 10284.11 73803.62 00:05:47.149 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:05:47.149 Verification LBA range: start 0x80000 length 0x80000 00:05:47.149 Nvme2n3 : 5.08 1762.81 6.89 0.00 0.00 71744.57 13611.32 67754.14 00:05:47.149 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:05:47.149 Verification LBA range: start 0x0 length 0x20000 00:05:47.149 Nvme3n1 : 5.09 1708.98 6.68 0.00 0.00 73844.95 11090.71 78239.90 00:05:47.149 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:05:47.149 Verification LBA range: start 0x20000 length 0x20000 00:05:47.149 Nvme3n1 : 5.08 1762.24 6.88 0.00 0.00 71624.50 10737.82 70980.53 00:05:47.149 =================================================================================================================== 00:05:47.149 Total : 20821.28 81.33 0.00 0.00 73115.94 6956.90 81062.99 00:05:48.535 00:05:48.535 real 0m7.257s 00:05:48.535 user 0m13.461s 00:05:48.535 sys 0m0.204s 00:05:48.535 01:18:44 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:48.535 01:18:44 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:05:48.535 ************************************ 00:05:48.535 END TEST bdev_verify 00:05:48.535 ************************************ 00:05:48.535 01:18:44 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:05:48.535 01:18:44 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:05:48.535 01:18:44 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:48.535 01:18:44 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:48.535 ************************************ 00:05:48.535 START TEST bdev_verify_big_io 00:05:48.535 ************************************ 00:05:48.535 01:18:44 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:05:48.535 [2024-09-28 01:18:44.170919] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:05:48.535 [2024-09-28 01:18:44.171484] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60813 ] 00:05:48.535 [2024-09-28 01:18:44.321841] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:48.797 [2024-09-28 01:18:44.503228] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.797 [2024-09-28 01:18:44.503266] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:49.369 Running I/O for 5 seconds... 
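[Annotation] The bdev_verify pass summarized above is driven by the bdevperf example app against the bdevs described in bdev.json; the big-I/O variant now running repeats it with 64 KiB blocks. To reproduce the first pass by hand with the exact parameters from this run (queue depth 128, 4 KiB I/O, verify workload, 5 seconds, core mask 0x3), the invocation below is copied verbatim from the trace:

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3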
00:05:55.499 525.00 IOPS, 32.81 MiB/s 2249.00 IOPS, 140.56 MiB/s 2843.33 IOPS, 177.71 MiB/s 00:05:55.499 Latency(us) 00:05:55.499 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:05:55.499 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:05:55.499 Verification LBA range: start 0x0 length 0xbd0b 00:05:55.499 Nvme0n1 : 5.54 130.25 8.14 0.00 0.00 942845.52 10536.17 1155046.79 00:05:55.499 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:05:55.499 Verification LBA range: start 0xbd0b length 0xbd0b 00:05:55.499 Nvme0n1 : 5.69 113.81 7.11 0.00 0.00 1071819.65 21173.17 1142141.24 00:05:55.499 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:05:55.499 Verification LBA range: start 0x0 length 0xa000 00:05:55.499 Nvme1n1 : 5.68 125.35 7.83 0.00 0.00 942048.00 75013.51 1690627.15 00:05:55.499 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:05:55.499 Verification LBA range: start 0xa000 length 0xa000 00:05:55.499 Nvme1n1 : 5.70 116.39 7.27 0.00 0.00 1021029.74 89128.96 987274.63 00:05:55.499 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:05:55.499 Verification LBA range: start 0x0 length 0x8000 00:05:55.499 Nvme2n1 : 5.79 128.45 8.03 0.00 0.00 887323.56 98808.12 1716438.25 00:05:55.499 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:05:55.499 Verification LBA range: start 0x8000 length 0x8000 00:05:55.499 Nvme2n1 : 5.76 122.19 7.64 0.00 0.00 947701.83 62511.26 1019538.51 00:05:55.499 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:05:55.499 Verification LBA range: start 0x0 length 0x8000 00:05:55.499 Nvme2n2 : 5.93 138.70 8.67 0.00 0.00 800287.24 49000.76 1742249.35 00:05:55.499 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:05:55.499 Verification LBA range: start 0x8000 length 0x8000 00:05:55.499 Nvme2n2 : 5.84 127.39 7.96 0.00 0.00 879773.64 49807.36 1058255.16 00:05:55.499 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:05:55.499 Verification LBA range: start 0x0 length 0x8000 00:05:55.499 Nvme2n3 : 5.97 146.08 9.13 0.00 0.00 736050.91 32868.82 1768060.46 00:05:55.499 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:05:55.499 Verification LBA range: start 0x8000 length 0x8000 00:05:55.499 Nvme2n3 : 5.88 135.33 8.46 0.00 0.00 802371.79 39321.60 1129235.69 00:05:55.499 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:05:55.499 Verification LBA range: start 0x0 length 0x2000 00:05:55.499 Nvme3n1 : 6.04 178.78 11.17 0.00 0.00 584357.24 341.86 1780966.01 00:05:55.499 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:05:55.499 Verification LBA range: start 0x2000 length 0x2000 00:05:55.499 Nvme3n1 : 6.00 166.02 10.38 0.00 0.00 635511.02 296.17 1096971.82 00:05:55.499 =================================================================================================================== 00:05:55.499 Total : 1628.74 101.80 0.00 0.00 832068.78 296.17 1780966.01 00:05:57.397 00:05:57.397 real 0m8.764s 00:05:57.397 user 0m16.002s 00:05:57.397 sys 0m0.241s 00:05:57.397 01:18:52 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:57.397 01:18:52 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:05:57.397 ************************************ 00:05:57.397 END TEST 
bdev_verify_big_io 00:05:57.397 ************************************ 00:05:57.397 01:18:52 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:05:57.397 01:18:52 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:05:57.397 01:18:52 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:57.397 01:18:52 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:57.397 ************************************ 00:05:57.397 START TEST bdev_write_zeroes 00:05:57.397 ************************************ 00:05:57.397 01:18:52 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:05:57.397 [2024-09-28 01:18:52.977966] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:05:57.397 [2024-09-28 01:18:52.978493] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60922 ] 00:05:57.397 [2024-09-28 01:18:53.124892] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.397 [2024-09-28 01:18:53.299468] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.961 Running I/O for 1 seconds... 00:05:59.333 76800.00 IOPS, 300.00 MiB/s 00:05:59.333 Latency(us) 00:05:59.333 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:05:59.333 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:05:59.333 Nvme0n1 : 1.02 12759.95 49.84 0.00 0.00 10012.91 7158.55 19559.98 00:05:59.333 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:05:59.333 Nvme1n1 : 1.02 12745.35 49.79 0.00 0.00 10013.29 7208.96 19156.68 00:05:59.333 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:05:59.333 Nvme2n1 : 1.02 12730.85 49.73 0.00 0.00 9993.15 7259.37 18551.73 00:05:59.333 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:05:59.333 Nvme2n2 : 1.02 12716.51 49.67 0.00 0.00 9971.43 7259.37 18047.61 00:05:59.333 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:05:59.333 Nvme2n3 : 1.02 12702.17 49.62 0.00 0.00 9954.19 6906.49 18148.43 00:05:59.333 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:05:59.333 Nvme3n1 : 1.02 12687.87 49.56 0.00 0.00 9937.69 5116.85 19660.80 00:05:59.333 =================================================================================================================== 00:05:59.333 Total : 76342.71 298.21 0.00 0.00 9980.44 5116.85 19660.80 00:05:59.897 00:05:59.897 real 0m2.781s 00:05:59.897 user 0m2.483s 00:05:59.897 sys 0m0.184s 00:05:59.897 01:18:55 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:59.897 01:18:55 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:05:59.897 ************************************ 00:05:59.897 END TEST bdev_write_zeroes 00:05:59.897 ************************************ 00:05:59.897 01:18:55 blockdev_nvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:05:59.897 01:18:55 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:05:59.897 01:18:55 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:59.897 01:18:55 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:59.897 ************************************ 00:05:59.897 START TEST bdev_json_nonenclosed 00:05:59.897 ************************************ 00:05:59.897 01:18:55 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:05:59.897 [2024-09-28 01:18:55.800975] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:05:59.897 [2024-09-28 01:18:55.801090] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60977 ] 00:06:00.161 [2024-09-28 01:18:55.952051] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.421 [2024-09-28 01:18:56.126205] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.421 [2024-09-28 01:18:56.126277] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:06:00.421 [2024-09-28 01:18:56.126293] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:00.421 [2024-09-28 01:18:56.126301] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:00.681 00:06:00.681 real 0m0.669s 00:06:00.681 user 0m0.471s 00:06:00.681 sys 0m0.094s 00:06:00.681 01:18:56 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:00.681 ************************************ 00:06:00.681 END TEST bdev_json_nonenclosed 00:06:00.681 ************************************ 00:06:00.681 01:18:56 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:06:00.681 01:18:56 blockdev_nvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:00.681 01:18:56 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:06:00.681 01:18:56 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:00.681 01:18:56 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:00.681 ************************************ 00:06:00.681 START TEST bdev_json_nonarray 00:06:00.681 ************************************ 00:06:00.681 01:18:56 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:00.681 [2024-09-28 01:18:56.521326] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
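[Annotation] The bdev_json_nonenclosed check above, and the bdev_json_nonarray check whose startup follows, both feed bdevperf a deliberately malformed JSON config and expect a clean error plus app shutdown rather than a crash. Either can be re-run individually; the paths come from this log and both commands are expected to fail with the errors shown in the trace:

    # expected: Invalid JSON configuration: not enclosed in {}.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json \
        -q 128 -o 4096 -w write_zeroes -t 1
    # expected: Invalid JSON configuration: 'subsystems' should be an array.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json \
        -q 128 -o 4096 -w write_zeroes -t 1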
00:06:00.681 [2024-09-28 01:18:56.521437] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61008 ] 00:06:00.942 [2024-09-28 01:18:56.672320] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.942 [2024-09-28 01:18:56.854544] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.942 [2024-09-28 01:18:56.854632] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:06:00.942 [2024-09-28 01:18:56.854648] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:00.942 [2024-09-28 01:18:56.854658] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:01.511 00:06:01.511 real 0m0.682s 00:06:01.511 user 0m0.490s 00:06:01.511 sys 0m0.087s 00:06:01.511 01:18:57 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:01.511 ************************************ 00:06:01.511 END TEST bdev_json_nonarray 00:06:01.511 ************************************ 00:06:01.511 01:18:57 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:06:01.511 01:18:57 blockdev_nvme -- bdev/blockdev.sh@786 -- # [[ nvme == bdev ]] 00:06:01.511 01:18:57 blockdev_nvme -- bdev/blockdev.sh@793 -- # [[ nvme == gpt ]] 00:06:01.511 01:18:57 blockdev_nvme -- bdev/blockdev.sh@797 -- # [[ nvme == crypto_sw ]] 00:06:01.511 01:18:57 blockdev_nvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:06:01.511 01:18:57 blockdev_nvme -- bdev/blockdev.sh@810 -- # cleanup 00:06:01.511 01:18:57 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:06:01.511 01:18:57 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:01.511 01:18:57 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:06:01.511 01:18:57 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:06:01.511 01:18:57 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:06:01.511 01:18:57 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:06:01.511 00:06:01.511 real 0m37.727s 00:06:01.511 user 0m56.496s 00:06:01.511 sys 0m5.311s 00:06:01.511 01:18:57 blockdev_nvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:01.511 ************************************ 00:06:01.511 END TEST blockdev_nvme 00:06:01.511 ************************************ 00:06:01.511 01:18:57 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:01.511 01:18:57 -- spdk/autotest.sh@209 -- # uname -s 00:06:01.511 01:18:57 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:06:01.511 01:18:57 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:06:01.511 01:18:57 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:01.511 01:18:57 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:01.511 01:18:57 -- common/autotest_common.sh@10 -- # set +x 00:06:01.511 ************************************ 00:06:01.511 START TEST blockdev_nvme_gpt 00:06:01.511 ************************************ 00:06:01.511 01:18:57 blockdev_nvme_gpt -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:06:01.511 * Looking for test storage... 
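Its companion bdev_json_nonarray, run above, exercises the next validation branch in json_config.c: the document is a well-formed object, but "subsystems" is not an array. A config along these lines would trip it (assumed contents; the real file is test/bdev/nonarray.json):

# Hypothetical config: "subsystems" is an object rather than an array,
# so the loader reports "'subsystems' should be an array" and the app
# again stops with a non-zero status.
cat > /tmp/nonarray.json <<'EOF'
{
  "subsystems": { "subsystem": "bdev", "config": [] }
}
EOF
build/examples/bdevperf --json /tmp/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 || echo "rejected as expected"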
00:06:01.511 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:01.511 01:18:57 blockdev_nvme_gpt -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:01.511 01:18:57 blockdev_nvme_gpt -- common/autotest_common.sh@1681 -- # lcov --version 00:06:01.511 01:18:57 blockdev_nvme_gpt -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:01.511 01:18:57 blockdev_nvme_gpt -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:01.511 01:18:57 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:01.511 01:18:57 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:01.511 01:18:57 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:01.511 01:18:57 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:06:01.511 01:18:57 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:06:01.511 01:18:57 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:06:01.511 01:18:57 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:06:01.511 01:18:57 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:06:01.511 01:18:57 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:06:01.511 01:18:57 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:06:01.511 01:18:57 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:01.511 01:18:57 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:06:01.511 01:18:57 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:06:01.511 01:18:57 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:01.511 01:18:57 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:01.511 01:18:57 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:06:01.511 01:18:57 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:06:01.511 01:18:57 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:01.511 01:18:57 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:06:01.511 01:18:57 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:06:01.511 01:18:57 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:06:01.511 01:18:57 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:06:01.511 01:18:57 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:01.511 01:18:57 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:06:01.511 01:18:57 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:06:01.511 01:18:57 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:01.511 01:18:57 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:01.511 01:18:57 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:06:01.511 01:18:57 blockdev_nvme_gpt -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:01.511 01:18:57 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:01.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.511 --rc genhtml_branch_coverage=1 00:06:01.511 --rc genhtml_function_coverage=1 00:06:01.512 --rc genhtml_legend=1 00:06:01.512 --rc geninfo_all_blocks=1 00:06:01.512 --rc geninfo_unexecuted_blocks=1 00:06:01.512 00:06:01.512 ' 00:06:01.512 01:18:57 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:01.512 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.512 --rc 
genhtml_branch_coverage=1 00:06:01.512 --rc genhtml_function_coverage=1 00:06:01.512 --rc genhtml_legend=1 00:06:01.512 --rc geninfo_all_blocks=1 00:06:01.512 --rc geninfo_unexecuted_blocks=1 00:06:01.512 00:06:01.512 ' 00:06:01.512 01:18:57 blockdev_nvme_gpt -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:01.512 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.512 --rc genhtml_branch_coverage=1 00:06:01.512 --rc genhtml_function_coverage=1 00:06:01.512 --rc genhtml_legend=1 00:06:01.512 --rc geninfo_all_blocks=1 00:06:01.512 --rc geninfo_unexecuted_blocks=1 00:06:01.512 00:06:01.512 ' 00:06:01.512 01:18:57 blockdev_nvme_gpt -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:01.512 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.512 --rc genhtml_branch_coverage=1 00:06:01.512 --rc genhtml_function_coverage=1 00:06:01.512 --rc genhtml_legend=1 00:06:01.512 --rc geninfo_all_blocks=1 00:06:01.512 --rc geninfo_unexecuted_blocks=1 00:06:01.512 00:06:01.512 ' 00:06:01.512 01:18:57 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:01.512 01:18:57 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:06:01.512 01:18:57 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:06:01.512 01:18:57 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:01.512 01:18:57 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:06:01.512 01:18:57 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:06:01.512 01:18:57 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:06:01.512 01:18:57 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:06:01.512 01:18:57 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:06:01.512 01:18:57 blockdev_nvme_gpt -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:06:01.512 01:18:57 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:06:01.512 01:18:57 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:06:01.512 01:18:57 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # uname -s 00:06:01.512 01:18:57 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:06:01.512 01:18:57 blockdev_nvme_gpt -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:06:01.512 01:18:57 blockdev_nvme_gpt -- bdev/blockdev.sh@681 -- # test_type=gpt 00:06:01.512 01:18:57 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # crypto_device= 00:06:01.512 01:18:57 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # dek= 00:06:01.512 01:18:57 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # env_ctx= 00:06:01.512 01:18:57 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:06:01.512 01:18:57 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:06:01.512 01:18:57 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == bdev ]] 00:06:01.512 01:18:57 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == crypto_* ]] 00:06:01.512 01:18:57 blockdev_nvme_gpt -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:06:01.512 01:18:57 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=61092 00:06:01.512 01:18:57 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:06:01.512 01:18:57 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 61092 
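The scripts/common.sh trace above is the shell version comparator at work: it decides whether the installed lcov (1.15 here) is older than 2, which selects the branch/function coverage flags exported into LCOV_OPTS. Condensed into a standalone sketch (not a verbatim copy of cmp_versions):

# Split both version strings on '.', '-' and ':', then compare them
# component by component, treating missing components as 0.
lt() {
  local -a ver1 ver2
  IFS='.-:' read -ra ver1 <<< "$1"
  IFS='.-:' read -ra ver2 <<< "$2"
  local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
  for (( v = 0; v < max; v++ )); do
    (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
  done
  return 1  # equal, so not less-than
}
lt 1.15 2 && echo "old lcov: enable --rc lcov_branch_coverage=1 etc."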
00:06:01.512 01:18:57 blockdev_nvme_gpt -- common/autotest_common.sh@831 -- # '[' -z 61092 ']' 00:06:01.512 01:18:57 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.512 01:18:57 blockdev_nvme_gpt -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:01.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:01.512 01:18:57 blockdev_nvme_gpt -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:01.512 01:18:57 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:01.512 01:18:57 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:01.512 01:18:57 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:06:01.772 [2024-09-28 01:18:57.474456] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:06:01.772 [2024-09-28 01:18:57.474861] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61092 ] 00:06:01.772 [2024-09-28 01:18:57.623493] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.032 [2024-09-28 01:18:57.836025] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.604 01:18:58 blockdev_nvme_gpt -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:02.604 01:18:58 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # return 0 00:06:02.604 01:18:58 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:06:02.604 01:18:58 blockdev_nvme_gpt -- bdev/blockdev.sh@701 -- # setup_gpt_conf 00:06:02.604 01:18:58 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:02.865 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:03.126 Waiting for block devices as requested 00:06:03.126 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:06:03.126 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:06:03.387 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:06:03.387 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:06:08.659 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:06:08.659 01:19:04 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:06:08.659 01:19:04 blockdev_nvme_gpt -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:06:08.659 01:19:04 blockdev_nvme_gpt -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:06:08.659 01:19:04 blockdev_nvme_gpt -- common/autotest_common.sh@1656 -- # local nvme bdf 00:06:08.659 01:19:04 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:08.659 01:19:04 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:06:08.659 01:19:04 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:06:08.659 01:19:04 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:08.659 01:19:04 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:08.659 01:19:04 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:08.659 01:19:04 
blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:06:08.659 01:19:04 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:06:08.659 01:19:04 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:06:08.659 01:19:04 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:08.659 01:19:04 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:08.659 01:19:04 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:06:08.659 01:19:04 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:06:08.659 01:19:04 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:06:08.659 01:19:04 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:08.659 01:19:04 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:08.659 01:19:04 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:06:08.659 01:19:04 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:06:08.659 01:19:04 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:06:08.659 01:19:04 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:08.659 01:19:04 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:08.659 01:19:04 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:06:08.659 01:19:04 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:06:08.659 01:19:04 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:06:08.659 01:19:04 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:08.659 01:19:04 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:08.659 01:19:04 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:06:08.659 01:19:04 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:06:08.659 01:19:04 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:06:08.659 01:19:04 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:08.659 01:19:04 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:08.659 01:19:04 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:06:08.659 01:19:04 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:06:08.659 01:19:04 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:06:08.659 01:19:04 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:08.659 01:19:04 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:06:08.659 01:19:04 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:06:08.659 01:19:04 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:06:08.659 01:19:04 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:06:08.659 01:19:04 
blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:06:08.659 01:19:04 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:06:08.659 01:19:04 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:06:08.659 01:19:04 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:06:08.659 BYT; 00:06:08.659 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:06:08.659 01:19:04 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:06:08.659 BYT; 00:06:08.659 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:06:08.659 01:19:04 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:06:08.659 01:19:04 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:06:08.660 01:19:04 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:06:08.660 01:19:04 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:06:08.660 01:19:04 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:06:08.660 01:19:04 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:06:08.660 01:19:04 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:06:08.660 01:19:04 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:06:08.660 01:19:04 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:06:08.660 01:19:04 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:08.660 01:19:04 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:06:08.660 01:19:04 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:06:08.660 01:19:04 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:08.660 01:19:04 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:06:08.660 01:19:04 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:08.660 01:19:04 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:08.660 01:19:04 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:08.660 01:19:04 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:06:08.660 01:19:04 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:06:08.660 01:19:04 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:06:08.660 01:19:04 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:08.660 01:19:04 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:06:08.660 01:19:04 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:06:08.660 01:19:04 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:08.660 01:19:04 blockdev_nvme_gpt -- scripts/common.sh@429 -- # 
spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:06:08.660 01:19:04 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:08.660 01:19:04 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:08.660 01:19:04 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:08.660 01:19:04 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:06:09.593 The operation has completed successfully. 00:06:09.593 01:19:05 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:06:10.524 The operation has completed successfully. 00:06:10.524 01:19:06 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:11.090 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:11.348 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:11.348 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:11.348 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:06:11.348 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:06:11.608 01:19:07 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:06:11.608 01:19:07 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:11.608 01:19:07 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:11.608 [] 00:06:11.608 01:19:07 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:11.608 01:19:07 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:06:11.608 01:19:07 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:06:11.608 01:19:07 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:06:11.608 01:19:07 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:11.608 01:19:07 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:06:11.608 01:19:07 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:11.608 01:19:07 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:11.870 01:19:07 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:11.870 01:19:07 blockdev_nvme_gpt -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:06:11.870 01:19:07 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:11.870 01:19:07 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:11.870 01:19:07 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:11.870 01:19:07 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # cat 00:06:11.870 01:19:07 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:06:11.870 01:19:07 
blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:11.870 01:19:07 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:11.870 01:19:07 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:11.870 01:19:07 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:06:11.870 01:19:07 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:11.870 01:19:07 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:11.870 01:19:07 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:11.870 01:19:07 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:06:11.870 01:19:07 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:11.870 01:19:07 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:11.870 01:19:07 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:11.870 01:19:07 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:06:11.870 01:19:07 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:06:11.870 01:19:07 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:11.870 01:19:07 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:11.870 01:19:07 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:06:11.870 01:19:07 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:11.870 01:19:07 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:06:11.870 01:19:07 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "d2ae950a-13c7-4635-8b78-fb3e321972e3"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "d2ae950a-13c7-4635-8b78-fb3e321972e3",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' 
"assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "76d70f8f-d1c3-4675-970b-33fdf9f57d46"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "76d70f8f-d1c3-4675-970b-33fdf9f57d46",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' 
' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "e295063a-93a5-4200-a892-975bf6c22a50"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "e295063a-93a5-4200-a892-975bf6c22a50",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "11148d66-19f3-474d-88cc-e2e7c3173db2"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "11148d66-19f3-474d-88cc-e2e7c3173db2",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "243f240c-9ffa-4939-947e-5445d6a13d27"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "243f240c-9ffa-4939-947e-5445d6a13d27",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' 
"read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:06:11.870 01:19:07 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r .name 00:06:11.871 01:19:07 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:06:11.871 01:19:07 blockdev_nvme_gpt -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:06:11.871 01:19:07 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:06:11.871 01:19:07 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # killprocess 61092 00:06:11.871 01:19:07 blockdev_nvme_gpt -- common/autotest_common.sh@950 -- # '[' -z 61092 ']' 00:06:11.871 01:19:07 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # kill -0 61092 00:06:11.871 01:19:07 blockdev_nvme_gpt -- common/autotest_common.sh@955 -- # uname 00:06:11.871 01:19:07 blockdev_nvme_gpt -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:11.871 01:19:07 blockdev_nvme_gpt -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61092 00:06:11.871 killing process with pid 61092 00:06:11.871 01:19:07 blockdev_nvme_gpt -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:11.871 01:19:07 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:11.871 01:19:07 blockdev_nvme_gpt -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61092' 00:06:11.871 01:19:07 blockdev_nvme_gpt -- common/autotest_common.sh@969 -- # kill 61092 00:06:11.871 01:19:07 blockdev_nvme_gpt -- common/autotest_common.sh@974 -- # wait 61092 00:06:13.249 01:19:09 blockdev_nvme_gpt -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:13.249 01:19:09 blockdev_nvme_gpt -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:13.249 01:19:09 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:06:13.249 01:19:09 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:13.507 01:19:09 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:13.507 ************************************ 00:06:13.507 START TEST bdev_hello_world 00:06:13.507 ************************************ 00:06:13.508 01:19:09 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:13.508 [2024-09-28 
01:19:09.244986] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:06:13.508 [2024-09-28 01:19:09.245074] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61711 ] 00:06:13.508 [2024-09-28 01:19:09.387035] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.765 [2024-09-28 01:19:09.528303] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.329 [2024-09-28 01:19:10.015265] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:06:14.329 [2024-09-28 01:19:10.015303] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:06:14.329 [2024-09-28 01:19:10.015318] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:06:14.329 [2024-09-28 01:19:10.017268] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:06:14.329 [2024-09-28 01:19:10.017766] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:06:14.329 [2024-09-28 01:19:10.017790] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:06:14.329 [2024-09-28 01:19:10.017978] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:06:14.329 00:06:14.329 [2024-09-28 01:19:10.018003] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:06:14.896 00:06:14.896 real 0m1.463s 00:06:14.896 user 0m1.198s 00:06:14.896 sys 0m0.159s 00:06:14.896 01:19:10 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:14.896 ************************************ 00:06:14.896 END TEST bdev_hello_world 00:06:14.896 ************************************ 00:06:14.896 01:19:10 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:06:14.896 01:19:10 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:06:14.896 01:19:10 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:14.896 01:19:10 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:14.896 01:19:10 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:14.896 ************************************ 00:06:14.896 START TEST bdev_bounds 00:06:14.896 ************************************ 00:06:14.896 01:19:10 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:06:14.896 01:19:10 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=61748 00:06:14.896 01:19:10 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:06:14.896 Process bdevio pid: 61748 00:06:14.896 01:19:10 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 61748' 00:06:14.896 01:19:10 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 61748 00:06:14.896 01:19:10 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:14.896 01:19:10 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 61748 ']' 00:06:14.896 01:19:10 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:14.896 01:19:10 blockdev_nvme_gpt.bdev_bounds -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:06:14.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:14.896 01:19:10 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:14.896 01:19:10 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:14.896 01:19:10 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:14.896 [2024-09-28 01:19:10.773242] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:06:14.896 [2024-09-28 01:19:10.773354] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61748 ] 00:06:15.155 [2024-09-28 01:19:10.922365] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:15.155 [2024-09-28 01:19:11.062412] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:15.155 [2024-09-28 01:19:11.063646] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.155 [2024-09-28 01:19:11.063671] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:15.719 01:19:11 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:15.719 01:19:11 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:06:15.719 01:19:11 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:06:15.977 I/O targets: 00:06:15.977 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:06:15.977 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:06:15.977 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:06:15.977 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:15.977 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:15.977 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:15.977 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:06:15.977 00:06:15.977 00:06:15.977 CUnit - A unit testing framework for C - Version 2.1-3 00:06:15.977 http://cunit.sourceforge.net/ 00:06:15.977 00:06:15.977 00:06:15.977 Suite: bdevio tests on: Nvme3n1 00:06:15.977 Test: blockdev write read block ...passed 00:06:15.977 Test: blockdev write zeroes read block ...passed 00:06:15.977 Test: blockdev write zeroes read no split ...passed 00:06:15.977 Test: blockdev write zeroes read split ...passed 00:06:15.977 Test: blockdev write zeroes read split partial ...passed 00:06:15.977 Test: blockdev reset ...[2024-09-28 01:19:11.730402] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:06:15.977 [2024-09-28 01:19:11.732981] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
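bdevio was launched above with -w, so it only loads the bdevs from bdev.json and then parks on the RPC socket; the suites printed under "I/O targets" are driven out-of-process by tests.py. Reproduced by hand, the two-step flow looks roughly like this (paths relative to the spdk repo):

# Start the bdevio server; -w makes it wait for an RPC before running
# any tests, and -s 0 passes through PRE_RESERVED_MEM=0 from blockdev.sh.
test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json &
bdevio_pid=$!
# Kick off every registered suite over the default /var/tmp/spdk.sock.
test/bdev/bdevio/tests.py perform_tests
wait "$bdevio_pid"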
00:06:15.977 passed 00:06:15.977 Test: blockdev write read 8 blocks ...passed 00:06:15.977 Test: blockdev write read size > 128k ...passed 00:06:15.977 Test: blockdev write read invalid size ...passed 00:06:15.977 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:15.977 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:15.977 Test: blockdev write read max offset ...passed 00:06:15.977 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:15.977 Test: blockdev writev readv 8 blocks ...passed 00:06:15.977 Test: blockdev writev readv 30 x 1block ...passed 00:06:15.977 Test: blockdev writev readv block ...passed 00:06:15.977 Test: blockdev writev readv size > 128k ...passed 00:06:15.977 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:15.977 Test: blockdev comparev and writev ...[2024-09-28 01:19:11.742281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b4406000 len:0x1000 00:06:15.977 [2024-09-28 01:19:11.742653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:15.977 passed 00:06:15.977 Test: blockdev nvme passthru rw ...passed 00:06:15.977 Test: blockdev nvme passthru vendor specific ...[2024-09-28 01:19:11.743825] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:15.977 [2024-09-28 01:19:11.744115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:15.977 passed 00:06:15.977 Test: blockdev nvme admin passthru ...passed 00:06:15.978 Test: blockdev copy ...passed 00:06:15.978 Suite: bdevio tests on: Nvme2n3 00:06:15.978 Test: blockdev write read block ...passed 00:06:15.978 Test: blockdev write zeroes read block ...passed 00:06:15.978 Test: blockdev write zeroes read no split ...passed 00:06:15.978 Test: blockdev write zeroes read split ...passed 00:06:15.978 Test: blockdev write zeroes read split partial ...passed 00:06:15.978 Test: blockdev reset ...[2024-09-28 01:19:11.797011] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:06:15.978 [2024-09-28 01:19:11.799724] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
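The COMPARE FAILURE (02/85) completions in the Nvme3n1 suite above are expected: the comparev-and-writev test deliberately drives the miscompare path and only fails if that status is not reported correctly. Whether a given bdev offloads the fused compare-and-write at all is visible in the earlier bdev dump ("compare_and_write": false for these QEMU controllers); with the target still running it can also be queried directly, e.g.:

# jq is assumed to be available; bdev_get_bdevs is a standard SPDK RPC.
scripts/rpc.py bdev_get_bdevs -b Nvme3n1 \
  | jq '.[0].supported_io_types | {compare, compare_and_write}'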
00:06:15.978 passed 00:06:15.978 Test: blockdev write read 8 blocks ...passed 00:06:15.978 Test: blockdev write read size > 128k ...passed 00:06:15.978 Test: blockdev write read invalid size ...passed 00:06:15.978 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:15.978 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:15.978 Test: blockdev write read max offset ...passed 00:06:15.978 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:15.978 Test: blockdev writev readv 8 blocks ...passed 00:06:15.978 Test: blockdev writev readv 30 x 1block ...passed 00:06:15.978 Test: blockdev writev readv block ...passed 00:06:15.978 Test: blockdev writev readv size > 128k ...passed 00:06:15.978 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:15.978 Test: blockdev comparev and writev ...[2024-09-28 01:19:11.806991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bd43c000 len:0x1000 00:06:15.978 [2024-09-28 01:19:11.807122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:15.978 passed 00:06:15.978 Test: blockdev nvme passthru rw ...passed 00:06:15.978 Test: blockdev nvme passthru vendor specific ...[2024-09-28 01:19:11.808088] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:15.978 [2024-09-28 01:19:11.808178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:15.978 passed 00:06:15.978 Test: blockdev nvme admin passthru ...passed 00:06:15.978 Test: blockdev copy ...passed 00:06:15.978 Suite: bdevio tests on: Nvme2n2 00:06:15.978 Test: blockdev write read block ...passed 00:06:15.978 Test: blockdev write zeroes read block ...passed 00:06:15.978 Test: blockdev write zeroes read no split ...passed 00:06:15.978 Test: blockdev write zeroes read split ...passed 00:06:15.978 Test: blockdev write zeroes read split partial ...passed 00:06:15.978 Test: blockdev reset ...[2024-09-28 01:19:11.866206] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:06:15.978 [2024-09-28 01:19:11.869207] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:06:15.978 passed 00:06:15.978 Test: blockdev write read 8 blocks ...passed 00:06:15.978 Test: blockdev write read size > 128k ...passed 00:06:15.978 Test: blockdev write read invalid size ...passed 00:06:15.978 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:15.978 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:15.978 Test: blockdev write read max offset ...passed 00:06:15.978 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:15.978 Test: blockdev writev readv 8 blocks ...passed 00:06:15.978 Test: blockdev writev readv 30 x 1block ...passed 00:06:15.978 Test: blockdev writev readv block ...passed 00:06:15.978 Test: blockdev writev readv size > 128k ...passed 00:06:15.978 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:15.978 Test: blockdev comparev and writev ...[2024-09-28 01:19:11.877813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bd436000 len:0x1000 00:06:15.978 [2024-09-28 01:19:11.878410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:15.978 passed 00:06:15.978 Test: blockdev nvme passthru rw ...passed 00:06:15.978 Test: blockdev nvme passthru vendor specific ...[2024-09-28 01:19:11.879877] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:15.978 [2024-09-28 01:19:11.880386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:15.978 passed 00:06:15.978 Test: blockdev nvme admin passthru ...passed 00:06:15.978 Test: blockdev copy ...passed 00:06:15.978 Suite: bdevio tests on: Nvme2n1 00:06:15.978 Test: blockdev write read block ...passed 00:06:15.978 Test: blockdev write zeroes read block ...passed 00:06:15.978 Test: blockdev write zeroes read no split ...passed 00:06:16.236 Test: blockdev write zeroes read split ...passed 00:06:16.236 Test: blockdev write zeroes read split partial ...passed 00:06:16.236 Test: blockdev reset ...[2024-09-28 01:19:11.949200] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:06:16.236 [2024-09-28 01:19:11.953709] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
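Note that the Nvme2n3, Nvme2n2 and Nvme2n1 suites above all reset the same controller at 0000:00:12.0: per the earlier bdev dump they are namespaces 3, 2 and 1 of one QEMU NVMe controller (serial 12342), so each per-bdev "blockdev reset" cycles the whole controller and every sibling namespace with it. The mapping can be confirmed from the target side:

# One attached controller named Nvme2 backs all three Nvme2nX bdevs.
scripts/rpc.py bdev_nvme_get_controllers -n Nvme2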
00:06:16.236 passed 00:06:16.236 Test: blockdev write read 8 blocks ...passed 00:06:16.236 Test: blockdev write read size > 128k ...passed 00:06:16.236 Test: blockdev write read invalid size ...passed 00:06:16.236 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:16.236 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:16.236 Test: blockdev write read max offset ...passed 00:06:16.236 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:16.236 Test: blockdev writev readv 8 blocks ...passed 00:06:16.236 Test: blockdev writev readv 30 x 1block ...passed 00:06:16.236 Test: blockdev writev readv block ...passed 00:06:16.236 Test: blockdev writev readv size > 128k ...passed 00:06:16.236 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:16.236 Test: blockdev comparev and writev ...[2024-09-28 01:19:11.961498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bd432000 len:0x1000 00:06:16.236 passed 00:06:16.236 Test: blockdev nvme passthru rw ...passed 00:06:16.236 Test: blockdev nvme passthru vendor specific ...[2024-09-28 01:19:11.961693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:16.236 [2024-09-28 01:19:11.962160] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:16.236 [2024-09-28 01:19:11.962306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:16.236 passed 00:06:16.236 Test: blockdev nvme admin passthru ...passed 00:06:16.236 Test: blockdev copy ...passed 00:06:16.236 Suite: bdevio tests on: Nvme1n1p2 00:06:16.236 Test: blockdev write read block ...passed 00:06:16.236 Test: blockdev write zeroes read block ...passed 00:06:16.236 Test: blockdev write zeroes read no split ...passed 00:06:16.236 Test: blockdev write zeroes read split ...passed 00:06:16.236 Test: blockdev write zeroes read split partial ...passed 00:06:16.236 Test: blockdev reset ...[2024-09-28 01:19:12.018057] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:06:16.236 [2024-09-28 01:19:12.020821] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:06:16.236 passed 00:06:16.236 Test: blockdev write read 8 blocks ...passed 00:06:16.236 Test: blockdev write read size > 128k ...passed 00:06:16.236 Test: blockdev write read invalid size ...passed 00:06:16.236 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:16.236 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:16.236 Test: blockdev write read max offset ...passed 00:06:16.236 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:16.236 Test: blockdev writev readv 8 blocks ...passed 00:06:16.236 Test: blockdev writev readv 30 x 1block ...passed 00:06:16.236 Test: blockdev writev readv block ...passed 00:06:16.236 Test: blockdev writev readv size > 128k ...passed 00:06:16.236 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:16.236 Test: blockdev comparev and writev ...[2024-09-28 01:19:12.040827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2bd42e000 len:0x1000 00:06:16.236 [2024-09-28 01:19:12.041020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:16.236 passed 00:06:16.236 Test: blockdev nvme passthru rw ...passed 00:06:16.236 Test: blockdev nvme passthru vendor specific ...passed 00:06:16.236 Test: blockdev nvme admin passthru ...passed 00:06:16.236 Test: blockdev copy ...passed 00:06:16.236 Suite: bdevio tests on: Nvme1n1p1 00:06:16.236 Test: blockdev write read block ...passed 00:06:16.236 Test: blockdev write zeroes read block ...passed 00:06:16.236 Test: blockdev write zeroes read no split ...passed 00:06:16.236 Test: blockdev write zeroes read split ...passed 00:06:16.236 Test: blockdev write zeroes read split partial ...passed 00:06:16.236 Test: blockdev reset ...[2024-09-28 01:19:12.092492] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:06:16.236 passed 00:06:16.236 Test: blockdev write read 8 blocks ...[2024-09-28 01:19:12.096821] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
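The compare in the Nvme1n1p2 suite above lands on lba:655360, which is exactly the offset_blocks of that GPT partition in the earlier bdev dump (Nvme1n1p1 starts at block 256 and shows up the same way in the next suite): the gpt vbdev translates partition-relative I/O into absolute LBAs on the base namespace. While the disk was still owned by the kernel driver, the same layout could have been inspected with sgdisk (hypothetical invocation; the kernel's /dev/nvme0n1 name used in the setup steps does not have to match SPDK's Nvme1 naming):

# Print the table written earlier by parted/sgdisk: two partitions,
# SPDK_TEST_first and SPDK_TEST_second, carrying the SPDK type GUIDs
# 6527994e-2c5a-4eec-9613-8f5944074e8b and 7c5222bd-8f5d-4087-9c00-bf9843c7b58c.
sgdisk -p /dev/nvme0n1
sgdisk -i 1 /dev/nvme0n1
sgdisk -i 2 /dev/nvme0n1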
00:06:16.236 passed 00:06:16.236 Test: blockdev write read size > 128k ...passed 00:06:16.236 Test: blockdev write read invalid size ...passed 00:06:16.236 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:16.236 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:16.236 Test: blockdev write read max offset ...passed 00:06:16.236 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:16.236 Test: blockdev writev readv 8 blocks ...passed 00:06:16.236 Test: blockdev writev readv 30 x 1block ...passed 00:06:16.236 Test: blockdev writev readv block ...passed 00:06:16.236 Test: blockdev writev readv size > 128k ...passed 00:06:16.236 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:16.236 Test: blockdev comparev and writev ...[2024-09-28 01:19:12.114467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x290e0e000 len:0x1000 00:06:16.236 passed 00:06:16.236 Test: blockdev nvme passthru rw ...passed 00:06:16.236 Test: blockdev nvme passthru vendor specific ...passed 00:06:16.236 Test: blockdev nvme admin passthru ...passed 00:06:16.236 Test: blockdev copy ...[2024-09-28 01:19:12.114669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:16.236 passed 00:06:16.236 Suite: bdevio tests on: Nvme0n1 00:06:16.236 Test: blockdev write read block ...passed 00:06:16.236 Test: blockdev write zeroes read block ...passed 00:06:16.236 Test: blockdev write zeroes read no split ...passed 00:06:16.236 Test: blockdev write zeroes read split ...passed 00:06:16.495 Test: blockdev write zeroes read split partial ...passed 00:06:16.495 Test: blockdev reset ...[2024-09-28 01:19:12.169821] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:06:16.495 [2024-09-28 01:19:12.172586] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:06:16.495 passed 00:06:16.495 Test: blockdev write read 8 blocks ...passed 00:06:16.495 Test: blockdev write read size > 128k ...passed 00:06:16.495 Test: blockdev write read invalid size ...passed 00:06:16.495 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:16.495 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:16.495 Test: blockdev write read max offset ...passed 00:06:16.495 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:16.495 Test: blockdev writev readv 8 blocks ...passed 00:06:16.495 Test: blockdev writev readv 30 x 1block ...passed 00:06:16.495 Test: blockdev writev readv block ...passed 00:06:16.495 Test: blockdev writev readv size > 128k ...passed 00:06:16.495 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:16.495 Test: blockdev comparev and writev ...passed 00:06:16.495 Test: blockdev nvme passthru rw ...[2024-09-28 01:19:12.180934] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:06:16.495 separate metadata which is not supported yet. 
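The *ERROR* line above is informational: bdevio skips comparev_and_writev on Nvme0n1 because that namespace carries separate (non-interleaved) metadata, which this path does not support yet. One way to inspect the layout is to query the bdev; bdev_get_bdevs -b is the RPC seen elsewhere in SPDK, while the md_size/md_interleave field names in the jq filter are an assumption about its JSON output:

/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 | jq '.[0] | {md_size, md_interleave}'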
00:06:16.495 passed 00:06:16.495 Test: blockdev nvme passthru vendor specific ...[2024-09-28 01:19:12.181783] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:06:16.495 [2024-09-28 01:19:12.181864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:06:16.495 passed 00:06:16.495 Test: blockdev nvme admin passthru ...passed 00:06:16.495 Test: blockdev copy ...passed 00:06:16.495 00:06:16.495 Run Summary: Type Total Ran Passed Failed Inactive 00:06:16.495 suites 7 7 n/a 0 0 00:06:16.495 tests 161 161 161 0 0 00:06:16.495 asserts 1025 1025 1025 0 n/a 00:06:16.495 00:06:16.495 Elapsed time = 1.312 seconds 00:06:16.495 0 00:06:16.495 01:19:12 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 61748 00:06:16.495 01:19:12 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 61748 ']' 00:06:16.495 01:19:12 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 61748 00:06:16.495 01:19:12 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:06:16.495 01:19:12 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:16.495 01:19:12 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61748 00:06:16.495 killing process with pid 61748 00:06:16.495 01:19:12 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:16.495 01:19:12 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:16.495 01:19:12 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61748' 00:06:16.495 01:19:12 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@969 -- # kill 61748 00:06:16.495 01:19:12 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@974 -- # wait 61748 00:06:17.063 ************************************ 00:06:17.063 END TEST bdev_bounds 00:06:17.063 ************************************ 00:06:17.063 01:19:12 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:06:17.063 00:06:17.063 real 0m2.247s 00:06:17.063 user 0m5.548s 00:06:17.063 sys 0m0.281s 00:06:17.063 01:19:12 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:17.063 01:19:12 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:17.323 01:19:12 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:17.323 01:19:12 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:06:17.323 01:19:12 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:17.323 01:19:12 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:17.323 ************************************ 00:06:17.323 START TEST bdev_nbd 00:06:17.323 ************************************ 00:06:17.323 01:19:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:17.323 01:19:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:06:17.323 01:19:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ 
Linux == Linux ]] 00:06:17.323 01:19:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.323 01:19:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:17.323 01:19:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:17.324 01:19:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:06:17.324 01:19:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:06:17.324 01:19:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:06:17.324 01:19:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:06:17.324 01:19:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:06:17.324 01:19:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:06:17.324 01:19:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:17.324 01:19:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:06:17.324 01:19:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:17.324 01:19:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:06:17.324 01:19:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=61807 00:06:17.324 01:19:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:06:17.324 01:19:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 61807 /var/tmp/spdk-nbd.sock 00:06:17.324 01:19:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 61807 ']' 00:06:17.324 01:19:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:17.324 01:19:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:17.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:17.324 01:19:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:17.324 01:19:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:17.324 01:19:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:17.324 01:19:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:17.324 [2024-09-28 01:19:13.077028] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
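waitforlisten gates the rest of the run on bdev_svc answering on /var/tmp/spdk-nbd.sock. A minimal sketch of that handshake, reusing the exact bdev_svc invocation from the trace; polling with spdk_get_version is an assumed probe choice, and kill -0 mirrors the liveness check killprocess performs above:

/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
nbd_pid=$!
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_get_version >/dev/null 2>&1; do
    kill -0 "$nbd_pid" || exit 1   # give up if the app already died
    sleep 0.1
done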
00:06:17.324 [2024-09-28 01:19:13.077295] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:17.324 [2024-09-28 01:19:13.221307] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.584 [2024-09-28 01:19:13.400138] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.154 01:19:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:18.154 01:19:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:06:18.154 01:19:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:18.154 01:19:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:18.154 01:19:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:18.154 01:19:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:06:18.154 01:19:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:18.154 01:19:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:18.154 01:19:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:18.154 01:19:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:06:18.154 01:19:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:06:18.154 01:19:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:06:18.154 01:19:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:06:18.154 01:19:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:18.154 01:19:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:06:18.414 01:19:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:06:18.414 01:19:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:06:18.414 01:19:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:06:18.414 01:19:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:18.414 01:19:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:18.414 01:19:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:18.414 01:19:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:18.414 01:19:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:18.414 01:19:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:18.414 01:19:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:18.414 01:19:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:18.414 01:19:14 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:18.414 1+0 records in 00:06:18.414 1+0 records out 00:06:18.414 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00134337 s, 3.0 MB/s 00:06:18.414 01:19:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:18.414 01:19:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:18.414 01:19:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:18.414 01:19:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:18.414 01:19:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:18.414 01:19:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:18.414 01:19:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:18.414 01:19:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:06:18.673 01:19:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:06:18.673 01:19:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:06:18.673 01:19:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:06:18.673 01:19:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:18.673 01:19:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:18.673 01:19:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:18.673 01:19:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:18.673 01:19:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:18.673 01:19:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:18.673 01:19:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:18.673 01:19:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:18.673 01:19:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:18.673 1+0 records in 00:06:18.673 1+0 records out 00:06:18.673 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000690998 s, 5.9 MB/s 00:06:18.673 01:19:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:18.673 01:19:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:18.673 01:19:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:18.673 01:19:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:18.673 01:19:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:18.673 01:19:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:18.673 01:19:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:18.673 01:19:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:06:18.933 01:19:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:06:18.933 01:19:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:06:18.933 01:19:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:06:18.933 01:19:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:06:18.933 01:19:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:18.934 01:19:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:18.934 01:19:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:18.934 01:19:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:06:18.934 01:19:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:18.934 01:19:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:18.934 01:19:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:18.934 01:19:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:18.934 1+0 records in 00:06:18.934 1+0 records out 00:06:18.934 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0011094 s, 3.7 MB/s 00:06:18.934 01:19:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:18.934 01:19:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:18.934 01:19:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:18.934 01:19:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:18.934 01:19:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:18.934 01:19:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:18.934 01:19:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:18.934 01:19:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:06:19.194 01:19:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:06:19.194 01:19:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:06:19.194 01:19:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:06:19.194 01:19:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:06:19.194 01:19:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:19.194 01:19:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:19.194 01:19:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:19.194 01:19:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:06:19.194 01:19:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:19.194 01:19:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:19.194 01:19:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:19.194 01:19:14 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:19.194 1+0 records in 00:06:19.194 1+0 records out 00:06:19.194 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000776825 s, 5.3 MB/s 00:06:19.194 01:19:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:19.194 01:19:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:19.194 01:19:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:19.194 01:19:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:19.194 01:19:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:19.194 01:19:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:19.194 01:19:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:19.194 01:19:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:06:19.455 01:19:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:06:19.455 01:19:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:06:19.455 01:19:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:06:19.455 01:19:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:06:19.455 01:19:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:19.455 01:19:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:19.455 01:19:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:19.455 01:19:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:06:19.455 01:19:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:19.455 01:19:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:19.455 01:19:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:19.455 01:19:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:19.455 1+0 records in 00:06:19.455 1+0 records out 00:06:19.455 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00123519 s, 3.3 MB/s 00:06:19.455 01:19:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:19.455 01:19:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:19.455 01:19:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:19.455 01:19:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:19.455 01:19:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:19.455 01:19:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:19.455 01:19:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:19.455 01:19:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 
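Every nbd_start_disk above is followed by the same waitfornbd probe: poll /proc/partitions until the kernel registers the device, then prove the export is live with a single 4 KiB O_DIRECT read whose byte count is checked with stat. Condensed into one function (the commands are the ones traced above; the retry sleep interval is assumed):

waitfornbd() {
    local nbd_name=$1 i
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1
    done
    # one direct-I/O block read; a non-zero read-back size means the export works
    dd if=/dev/"$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
    local size
    size=$(stat -c %s /tmp/nbdtest)
    rm -f /tmp/nbdtest
    [ "$size" != 0 ]
}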
00:06:19.717 01:19:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:06:19.717 01:19:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:06:19.717 01:19:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:06:19.717 01:19:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:06:19.717 01:19:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:19.717 01:19:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:19.717 01:19:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:19.717 01:19:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:06:19.717 01:19:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:19.717 01:19:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:19.717 01:19:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:19.717 01:19:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:19.717 1+0 records in 00:06:19.717 1+0 records out 00:06:19.717 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00108633 s, 3.8 MB/s 00:06:19.717 01:19:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:19.717 01:19:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:19.717 01:19:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:19.717 01:19:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:19.717 01:19:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:19.717 01:19:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:19.717 01:19:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:19.717 01:19:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:06:19.717 01:19:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:06:19.717 01:19:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:06:19.717 01:19:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:06:19.717 01:19:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd6 00:06:19.717 01:19:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:19.717 01:19:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:19.717 01:19:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:19.717 01:19:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd6 /proc/partitions 00:06:19.978 01:19:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:19.978 01:19:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:19.978 01:19:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:19.978 01:19:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd 
if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:19.978 1+0 records in 00:06:19.978 1+0 records out 00:06:19.978 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000685748 s, 6.0 MB/s 00:06:19.978 01:19:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:19.978 01:19:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:19.978 01:19:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:19.978 01:19:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:19.978 01:19:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:19.978 01:19:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:19.978 01:19:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:19.978 01:19:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:19.978 01:19:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:06:19.978 { 00:06:19.978 "nbd_device": "/dev/nbd0", 00:06:19.978 "bdev_name": "Nvme0n1" 00:06:19.978 }, 00:06:19.978 { 00:06:19.978 "nbd_device": "/dev/nbd1", 00:06:19.978 "bdev_name": "Nvme1n1p1" 00:06:19.978 }, 00:06:19.978 { 00:06:19.978 "nbd_device": "/dev/nbd2", 00:06:19.978 "bdev_name": "Nvme1n1p2" 00:06:19.978 }, 00:06:19.978 { 00:06:19.978 "nbd_device": "/dev/nbd3", 00:06:19.978 "bdev_name": "Nvme2n1" 00:06:19.978 }, 00:06:19.978 { 00:06:19.978 "nbd_device": "/dev/nbd4", 00:06:19.978 "bdev_name": "Nvme2n2" 00:06:19.978 }, 00:06:19.978 { 00:06:19.978 "nbd_device": "/dev/nbd5", 00:06:19.978 "bdev_name": "Nvme2n3" 00:06:19.978 }, 00:06:19.978 { 00:06:19.978 "nbd_device": "/dev/nbd6", 00:06:19.978 "bdev_name": "Nvme3n1" 00:06:19.978 } 00:06:19.978 ]' 00:06:19.978 01:19:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:06:19.978 01:19:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:06:19.978 { 00:06:19.978 "nbd_device": "/dev/nbd0", 00:06:19.978 "bdev_name": "Nvme0n1" 00:06:19.978 }, 00:06:19.978 { 00:06:19.978 "nbd_device": "/dev/nbd1", 00:06:19.978 "bdev_name": "Nvme1n1p1" 00:06:19.978 }, 00:06:19.978 { 00:06:19.978 "nbd_device": "/dev/nbd2", 00:06:19.978 "bdev_name": "Nvme1n1p2" 00:06:19.978 }, 00:06:19.978 { 00:06:19.978 "nbd_device": "/dev/nbd3", 00:06:19.978 "bdev_name": "Nvme2n1" 00:06:19.978 }, 00:06:19.978 { 00:06:19.978 "nbd_device": "/dev/nbd4", 00:06:19.978 "bdev_name": "Nvme2n2" 00:06:19.978 }, 00:06:19.978 { 00:06:19.978 "nbd_device": "/dev/nbd5", 00:06:19.978 "bdev_name": "Nvme2n3" 00:06:19.978 }, 00:06:19.978 { 00:06:19.978 "nbd_device": "/dev/nbd6", 00:06:19.978 "bdev_name": "Nvme3n1" 00:06:19.978 } 00:06:19.978 ]' 00:06:19.978 01:19:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:06:19.978 01:19:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:06:19.978 01:19:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.978 01:19:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' 
'/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:06:19.978 01:19:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:19.978 01:19:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:19.978 01:19:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:19.978 01:19:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:20.239 01:19:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:20.239 01:19:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:20.239 01:19:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:20.239 01:19:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:20.239 01:19:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:20.239 01:19:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:20.239 01:19:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:20.239 01:19:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:20.239 01:19:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:20.239 01:19:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:20.510 01:19:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:20.510 01:19:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:20.510 01:19:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:20.510 01:19:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:20.510 01:19:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:20.510 01:19:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:20.510 01:19:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:20.510 01:19:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:20.510 01:19:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:20.510 01:19:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:06:20.771 01:19:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:06:20.771 01:19:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:06:20.771 01:19:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:06:20.771 01:19:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:20.771 01:19:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:20.771 01:19:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:06:20.771 01:19:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:20.771 01:19:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:20.771 01:19:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:20.771 01:19:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:06:21.031 01:19:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:06:21.031 01:19:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:06:21.031 01:19:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:06:21.031 01:19:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:21.031 01:19:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:21.031 01:19:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:06:21.031 01:19:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:21.031 01:19:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:21.031 01:19:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:21.031 01:19:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:06:21.031 01:19:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:06:21.332 01:19:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:06:21.332 01:19:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:06:21.332 01:19:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:21.332 01:19:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:21.332 01:19:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:06:21.332 01:19:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:21.332 01:19:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:21.332 01:19:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:21.332 01:19:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:06:21.332 01:19:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:06:21.332 01:19:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:06:21.332 01:19:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:06:21.332 01:19:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:21.332 01:19:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:21.332 01:19:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:06:21.332 01:19:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:21.332 01:19:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:21.332 01:19:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:21.332 01:19:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:06:21.595 01:19:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:06:21.595 01:19:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:06:21.595 01:19:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:06:21.595 01:19:17 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:21.595 01:19:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:21.595 01:19:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:06:21.595 01:19:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:21.595 01:19:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:21.595 01:19:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:21.595 01:19:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:21.595 01:19:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:21.856 01:19:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:21.856 01:19:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:21.856 01:19:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:21.856 01:19:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:21.856 01:19:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:21.856 01:19:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:21.856 01:19:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:21.856 01:19:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:21.856 01:19:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:21.856 01:19:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:06:21.856 01:19:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:06:21.856 01:19:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:06:21.856 01:19:17 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:06:21.856 01:19:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:21.856 01:19:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:21.856 01:19:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:21.856 01:19:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:21.856 01:19:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:21.856 01:19:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:06:21.856 01:19:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:21.856 01:19:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:21.856 01:19:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:21.856 01:19:17 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:21.856 01:19:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:21.856 01:19:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:06:21.856 01:19:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:21.856 01:19:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:21.856 01:19:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:06:22.115 /dev/nbd0 00:06:22.115 01:19:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:22.115 01:19:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:22.115 01:19:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:22.115 01:19:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:22.115 01:19:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:22.115 01:19:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:22.115 01:19:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:22.115 01:19:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:22.115 01:19:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:22.115 01:19:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:22.115 01:19:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:22.115 1+0 records in 00:06:22.115 1+0 records out 00:06:22.115 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000479745 s, 8.5 MB/s 00:06:22.115 01:19:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:22.115 01:19:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:22.115 01:19:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:22.115 01:19:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:22.115 01:19:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:22.115 01:19:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:22.115 01:19:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:22.115 01:19:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:06:22.374 /dev/nbd1 00:06:22.374 01:19:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:22.374 01:19:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:22.374 01:19:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:22.374 01:19:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:22.374 01:19:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:22.374 01:19:18 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:22.374 01:19:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:22.374 01:19:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:22.374 01:19:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:22.374 01:19:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:22.374 01:19:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:22.374 1+0 records in 00:06:22.374 1+0 records out 00:06:22.374 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000501854 s, 8.2 MB/s 00:06:22.374 01:19:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:22.374 01:19:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:22.374 01:19:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:22.374 01:19:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:22.374 01:19:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:22.374 01:19:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:22.374 01:19:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:22.375 01:19:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:06:22.633 /dev/nbd10 00:06:22.633 01:19:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:06:22.633 01:19:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:06:22.633 01:19:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:06:22.633 01:19:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:22.633 01:19:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:22.633 01:19:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:22.633 01:19:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:06:22.633 01:19:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:22.633 01:19:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:22.633 01:19:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:22.633 01:19:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:22.633 1+0 records in 00:06:22.633 1+0 records out 00:06:22.633 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000394721 s, 10.4 MB/s 00:06:22.633 01:19:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:22.633 01:19:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:22.633 01:19:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:22.633 01:19:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 
'!=' 0 ']' 00:06:22.633 01:19:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:22.633 01:19:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:22.633 01:19:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:22.633 01:19:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:06:22.633 /dev/nbd11 00:06:22.633 01:19:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:06:22.633 01:19:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:06:22.633 01:19:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:06:22.633 01:19:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:22.633 01:19:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:22.633 01:19:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:22.633 01:19:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:06:22.633 01:19:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:22.633 01:19:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:22.633 01:19:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:22.633 01:19:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:22.892 1+0 records in 00:06:22.892 1+0 records out 00:06:22.892 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000527443 s, 7.8 MB/s 00:06:22.892 01:19:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:22.892 01:19:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:22.892 01:19:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:22.892 01:19:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:22.892 01:19:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:22.892 01:19:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:22.892 01:19:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:22.892 01:19:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:06:22.892 /dev/nbd12 00:06:22.892 01:19:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:06:22.892 01:19:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:06:22.892 01:19:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:06:22.892 01:19:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:22.892 01:19:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:22.892 01:19:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:22.892 01:19:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 00:06:22.892 01:19:18 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:22.892 01:19:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:22.892 01:19:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:22.892 01:19:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:22.892 1+0 records in 00:06:22.892 1+0 records out 00:06:22.892 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000411262 s, 10.0 MB/s 00:06:22.892 01:19:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:22.892 01:19:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:22.892 01:19:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:22.892 01:19:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:22.892 01:19:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:22.892 01:19:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:22.892 01:19:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:22.892 01:19:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:06:23.151 /dev/nbd13 00:06:23.151 01:19:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:06:23.151 01:19:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:06:23.151 01:19:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:06:23.151 01:19:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:23.151 01:19:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:23.151 01:19:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:23.151 01:19:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:06:23.151 01:19:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:23.151 01:19:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:23.151 01:19:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:23.151 01:19:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:23.151 1+0 records in 00:06:23.151 1+0 records out 00:06:23.151 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000499321 s, 8.2 MB/s 00:06:23.151 01:19:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:23.151 01:19:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:23.151 01:19:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:23.151 01:19:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:23.151 01:19:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:23.151 01:19:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ 
)) 00:06:23.151 01:19:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:23.151 01:19:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:06:23.409 /dev/nbd14 00:06:23.409 01:19:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:06:23.409 01:19:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:06:23.409 01:19:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd14 00:06:23.409 01:19:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:23.409 01:19:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:23.409 01:19:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:23.409 01:19:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd14 /proc/partitions 00:06:23.409 01:19:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:23.409 01:19:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:23.409 01:19:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:23.409 01:19:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:23.409 1+0 records in 00:06:23.409 1+0 records out 00:06:23.409 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000367819 s, 11.1 MB/s 00:06:23.409 01:19:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:23.409 01:19:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:23.409 01:19:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:23.409 01:19:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:23.409 01:19:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:23.409 01:19:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:23.409 01:19:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:23.409 01:19:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:23.409 01:19:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:23.409 01:19:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:23.666 01:19:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:23.666 { 00:06:23.666 "nbd_device": "/dev/nbd0", 00:06:23.666 "bdev_name": "Nvme0n1" 00:06:23.666 }, 00:06:23.666 { 00:06:23.666 "nbd_device": "/dev/nbd1", 00:06:23.667 "bdev_name": "Nvme1n1p1" 00:06:23.667 }, 00:06:23.667 { 00:06:23.667 "nbd_device": "/dev/nbd10", 00:06:23.667 "bdev_name": "Nvme1n1p2" 00:06:23.667 }, 00:06:23.667 { 00:06:23.667 "nbd_device": "/dev/nbd11", 00:06:23.667 "bdev_name": "Nvme2n1" 00:06:23.667 }, 00:06:23.667 { 00:06:23.667 "nbd_device": "/dev/nbd12", 00:06:23.667 "bdev_name": "Nvme2n2" 00:06:23.667 }, 00:06:23.667 { 00:06:23.667 "nbd_device": "/dev/nbd13", 00:06:23.667 "bdev_name": "Nvme2n3" 00:06:23.667 }, 00:06:23.667 { 
00:06:23.667 "nbd_device": "/dev/nbd14", 00:06:23.667 "bdev_name": "Nvme3n1" 00:06:23.667 } 00:06:23.667 ]' 00:06:23.667 01:19:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:23.667 { 00:06:23.667 "nbd_device": "/dev/nbd0", 00:06:23.667 "bdev_name": "Nvme0n1" 00:06:23.667 }, 00:06:23.667 { 00:06:23.667 "nbd_device": "/dev/nbd1", 00:06:23.667 "bdev_name": "Nvme1n1p1" 00:06:23.667 }, 00:06:23.667 { 00:06:23.667 "nbd_device": "/dev/nbd10", 00:06:23.667 "bdev_name": "Nvme1n1p2" 00:06:23.667 }, 00:06:23.667 { 00:06:23.667 "nbd_device": "/dev/nbd11", 00:06:23.667 "bdev_name": "Nvme2n1" 00:06:23.667 }, 00:06:23.667 { 00:06:23.667 "nbd_device": "/dev/nbd12", 00:06:23.667 "bdev_name": "Nvme2n2" 00:06:23.667 }, 00:06:23.667 { 00:06:23.667 "nbd_device": "/dev/nbd13", 00:06:23.667 "bdev_name": "Nvme2n3" 00:06:23.667 }, 00:06:23.667 { 00:06:23.667 "nbd_device": "/dev/nbd14", 00:06:23.667 "bdev_name": "Nvme3n1" 00:06:23.667 } 00:06:23.667 ]' 00:06:23.667 01:19:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:23.667 01:19:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:23.667 /dev/nbd1 00:06:23.667 /dev/nbd10 00:06:23.667 /dev/nbd11 00:06:23.667 /dev/nbd12 00:06:23.667 /dev/nbd13 00:06:23.667 /dev/nbd14' 00:06:23.667 01:19:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:23.667 01:19:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:23.667 /dev/nbd1 00:06:23.667 /dev/nbd10 00:06:23.667 /dev/nbd11 00:06:23.667 /dev/nbd12 00:06:23.667 /dev/nbd13 00:06:23.667 /dev/nbd14' 00:06:23.667 01:19:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:06:23.667 01:19:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:06:23.667 01:19:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:06:23.667 01:19:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:06:23.667 01:19:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:06:23.667 01:19:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:23.667 01:19:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:23.667 01:19:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:23.667 01:19:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:23.667 01:19:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:23.667 01:19:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:06:23.667 256+0 records in 00:06:23.667 256+0 records out 00:06:23.667 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00619642 s, 169 MB/s 00:06:23.667 01:19:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:23.667 01:19:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:23.667 256+0 records in 00:06:23.667 256+0 records out 00:06:23.667 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0627176 s, 16.7 MB/s 
00:06:23.667 01:19:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:23.667 01:19:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:23.925 256+0 records in 00:06:23.925 256+0 records out 00:06:23.925 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0656879 s, 16.0 MB/s 00:06:23.925 01:19:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:23.925 01:19:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:06:23.925 256+0 records in 00:06:23.925 256+0 records out 00:06:23.925 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.064629 s, 16.2 MB/s 00:06:23.925 01:19:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:23.925 01:19:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:06:23.925 256+0 records in 00:06:23.925 256+0 records out 00:06:23.925 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0627115 s, 16.7 MB/s 00:06:23.925 01:19:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:23.925 01:19:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:06:24.186 256+0 records in 00:06:24.186 256+0 records out 00:06:24.186 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0828737 s, 12.7 MB/s 00:06:24.186 01:19:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:24.186 01:19:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:06:24.186 256+0 records in 00:06:24.186 256+0 records out 00:06:24.186 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0888899 s, 11.8 MB/s 00:06:24.186 01:19:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:24.186 01:19:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:06:24.186 256+0 records in 00:06:24.186 256+0 records out 00:06:24.186 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.133256 s, 7.9 MB/s 00:06:24.186 01:19:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:06:24.186 01:19:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:24.186 01:19:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:24.186 01:19:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:24.186 01:19:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:24.186 01:19:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:24.186 01:19:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:24.186 01:19:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 
00:06:24.186 01:19:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:06:24.446 01:19:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:24.446 01:19:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:06:24.446 01:19:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:24.446 01:19:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:06:24.446 01:19:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:24.446 01:19:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:06:24.446 01:19:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:24.446 01:19:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:06:24.446 01:19:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:24.446 01:19:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:06:24.446 01:19:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:24.446 01:19:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:06:24.446 01:19:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:24.446 01:19:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:06:24.446 01:19:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:24.446 01:19:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:24.446 01:19:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:24.446 01:19:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:24.446 01:19:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:24.446 01:19:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:24.707 01:19:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:24.707 01:19:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:24.707 01:19:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:24.707 01:19:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:24.707 01:19:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:24.707 01:19:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:24.707 01:19:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:24.707 01:19:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 
0 00:06:24.707 01:19:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:24.707 01:19:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:24.707 01:19:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:24.707 01:19:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:24.707 01:19:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:24.707 01:19:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:24.707 01:19:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:24.707 01:19:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:24.707 01:19:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:24.707 01:19:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:24.707 01:19:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:24.707 01:19:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:06:24.968 01:19:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:06:24.968 01:19:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:06:24.968 01:19:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:06:24.968 01:19:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:24.968 01:19:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:24.968 01:19:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:06:24.968 01:19:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:24.968 01:19:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:24.968 01:19:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:24.968 01:19:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:06:25.230 01:19:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:06:25.230 01:19:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:06:25.230 01:19:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:06:25.230 01:19:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:25.230 01:19:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:25.230 01:19:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:06:25.230 01:19:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:25.230 01:19:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:25.230 01:19:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:25.230 01:19:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:06:25.491 01:19:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:06:25.491 01:19:21 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:06:25.491 01:19:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:06:25.491 01:19:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:25.491 01:19:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:25.491 01:19:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:06:25.491 01:19:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:25.491 01:19:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:25.491 01:19:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:25.491 01:19:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:06:25.752 01:19:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:06:25.752 01:19:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:06:25.752 01:19:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:06:25.752 01:19:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:25.752 01:19:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:25.752 01:19:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:06:25.752 01:19:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:25.752 01:19:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:25.752 01:19:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:25.752 01:19:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:06:25.752 01:19:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:06:25.752 01:19:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:06:25.752 01:19:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:06:25.752 01:19:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:25.752 01:19:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:25.752 01:19:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:06:25.752 01:19:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:25.752 01:19:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:25.752 01:19:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:25.752 01:19:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:26.014 01:19:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:26.014 01:19:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:26.014 01:19:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:26.014 01:19:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:26.014 01:19:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:26.014 
01:19:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:26.014 01:19:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:26.014 01:19:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:26.015 01:19:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:26.015 01:19:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:26.015 01:19:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:06:26.015 01:19:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:26.015 01:19:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:06:26.015 01:19:21 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:26.015 01:19:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:26.015 01:19:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:06:26.015 01:19:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:06:26.273 malloc_lvol_verify 00:06:26.274 01:19:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:06:26.532 f53046c2-7a02-42ee-b8dc-1255632a90eb 00:06:26.532 01:19:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:06:26.790 1ef01865-8adb-4f96-a689-e5fd80115971 00:06:26.790 01:19:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:06:26.790 /dev/nbd0 00:06:26.790 01:19:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:06:26.790 01:19:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:06:26.790 01:19:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:06:26.790 01:19:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:06:26.790 01:19:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:06:26.790 mke2fs 1.47.0 (5-Feb-2023) 00:06:26.790 Discarding device blocks: 0/4096 done 00:06:26.790 Creating filesystem with 4096 1k blocks and 1024 inodes 00:06:26.790 00:06:26.790 Allocating group tables: 0/1 done 00:06:26.790 Writing inode tables: 0/1 done 00:06:26.790 Creating journal (1024 blocks): done 00:06:26.790 Writing superblocks and filesystem accounting information: 0/1 done 00:06:26.790 00:06:26.790 01:19:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:26.790 01:19:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:26.790 01:19:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:26.790 01:19:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:26.790 01:19:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:26.790 01:19:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:26.790 01:19:22 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:27.049 01:19:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:27.049 01:19:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:27.049 01:19:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:27.049 01:19:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:27.049 01:19:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:27.049 01:19:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:27.049 01:19:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:27.049 01:19:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:27.049 01:19:22 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 61807 00:06:27.049 01:19:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 61807 ']' 00:06:27.049 01:19:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 61807 00:06:27.049 01:19:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:06:27.049 01:19:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:27.049 01:19:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61807 00:06:27.049 01:19:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:27.049 01:19:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:27.049 killing process with pid 61807 00:06:27.049 01:19:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61807' 00:06:27.049 01:19:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@969 -- # kill 61807 00:06:27.049 01:19:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@974 -- # wait 61807 00:06:27.986 01:19:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:06:27.986 00:06:27.986 real 0m10.552s 00:06:27.986 user 0m15.069s 00:06:27.986 sys 0m3.449s 00:06:27.986 01:19:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:27.986 01:19:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:27.986 ************************************ 00:06:27.986 END TEST bdev_nbd 00:06:27.986 ************************************ 00:06:27.986 01:19:23 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:06:27.986 01:19:23 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = nvme ']' 00:06:27.986 01:19:23 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = gpt ']' 00:06:27.986 skipping fio tests on NVMe due to multi-ns failures. 00:06:27.986 01:19:23 blockdev_nvme_gpt -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
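With fio skipped, the verify coverage below comes from the bdevperf example app driven against the same JSON bdev config. A hedged sketch of the invocation pattern shared by the next two tests (the binary and config paths are the ones appearing in this log; the run_verify wrapper itself is illustrative, not part of the harness):

# Sketch only: the bdevperf invocations behind bdev_verify / bdev_verify_big_io.
BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
CONF=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
run_verify() {
    local io_size=$1
    # -q 128: queue depth; -o: I/O size in bytes; -w verify: data-verification
    # workload; -t 5: runtime in seconds; -C and -m 0x3 (core mask) are carried
    # over verbatim from the runs below.
    "$BDEVPERF" --json "$CONF" -q 128 -o "$io_size" -w verify -t 5 -C -m 0x3
}
run_verify 4096    # bdev_verify: 4 KiB I/O
run_verify 65536   # bdev_verify_big_io: 64 KiB I/O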
00:06:27.986 01:19:23 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:27.986 01:19:23 blockdev_nvme_gpt -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:27.986 01:19:23 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:06:27.986 01:19:23 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:27.987 01:19:23 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:27.987 ************************************ 00:06:27.987 START TEST bdev_verify 00:06:27.987 ************************************ 00:06:27.987 01:19:23 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:27.987 [2024-09-28 01:19:23.673036] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:06:27.987 [2024-09-28 01:19:23.673153] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62214 ] 00:06:27.987 [2024-09-28 01:19:23.822095] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:28.245 [2024-09-28 01:19:23.974804] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:28.245 [2024-09-28 01:19:23.974902] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.817 Running I/O for 5 seconds... 
00:06:33.986 19968.00 IOPS, 78.00 MiB/s 21600.00 IOPS, 84.38 MiB/s 21034.67 IOPS, 82.17 MiB/s 20704.00 IOPS, 80.88 MiB/s 20377.60 IOPS, 79.60 MiB/s 00:06:33.986 Latency(us) 00:06:33.986 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:33.986 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:33.986 Verification LBA range: start 0x0 length 0xbd0bd 00:06:33.986 Nvme0n1 : 5.08 1410.04 5.51 0.00 0.00 90600.22 17039.36 88322.36 00:06:33.986 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:33.986 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:06:33.986 Nvme0n1 : 5.08 1460.59 5.71 0.00 0.00 87425.11 17845.96 94371.84 00:06:33.986 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:33.986 Verification LBA range: start 0x0 length 0x4ff80 00:06:33.986 Nvme1n1p1 : 5.09 1408.72 5.50 0.00 0.00 90485.82 19459.15 86305.87 00:06:33.986 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:33.986 Verification LBA range: start 0x4ff80 length 0x4ff80 00:06:33.986 Nvme1n1p1 : 5.09 1459.96 5.70 0.00 0.00 87301.32 19761.62 88725.66 00:06:33.986 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:33.986 Verification LBA range: start 0x0 length 0x4ff7f 00:06:33.986 Nvme1n1p2 : 5.09 1407.43 5.50 0.00 0.00 90309.65 21878.94 82676.18 00:06:33.986 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:33.986 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:06:33.986 Nvme1n1p2 : 5.09 1459.40 5.70 0.00 0.00 87112.75 21273.99 79449.80 00:06:33.986 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:33.986 Verification LBA range: start 0x0 length 0x80000 00:06:33.986 Nvme2n1 : 5.10 1406.17 5.49 0.00 0.00 90189.76 24298.73 80659.69 00:06:33.986 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:33.986 Verification LBA range: start 0x80000 length 0x80000 00:06:33.986 Nvme2n1 : 5.09 1458.97 5.70 0.00 0.00 86964.23 22483.89 71787.13 00:06:33.986 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:33.986 Verification LBA range: start 0x0 length 0x80000 00:06:33.986 Nvme2n2 : 5.10 1405.00 5.49 0.00 0.00 90115.09 21677.29 80256.39 00:06:33.986 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:33.986 Verification LBA range: start 0x80000 length 0x80000 00:06:33.986 Nvme2n2 : 5.09 1457.71 5.69 0.00 0.00 86801.74 23290.49 70980.53 00:06:33.986 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:33.986 Verification LBA range: start 0x0 length 0x80000 00:06:33.986 Nvme2n3 : 5.11 1403.92 5.48 0.00 0.00 90028.36 20467.40 82272.89 00:06:33.986 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:33.986 Verification LBA range: start 0x80000 length 0x80000 00:06:33.986 Nvme2n3 : 5.10 1456.44 5.69 0.00 0.00 86716.48 20265.75 72593.72 00:06:33.986 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:33.986 Verification LBA range: start 0x0 length 0x20000 00:06:33.986 Nvme3n1 : 5.11 1403.53 5.48 0.00 0.00 89856.73 16736.89 87515.77 00:06:33.986 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:33.986 Verification LBA range: start 0x20000 length 0x20000 00:06:33.986 Nvme3n1 : 5.10 1455.24 5.68 0.00 0.00 86620.37 17946.78 73400.32 00:06:33.986 
=================================================================================================================== 00:06:33.986 Total : 20053.13 78.33 0.00 0.00 88580.74 16736.89 94371.84 00:06:35.368 00:06:35.369 real 0m7.444s 00:06:35.369 user 0m13.736s 00:06:35.369 sys 0m0.230s 00:06:35.369 01:19:31 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:35.369 ************************************ 00:06:35.369 END TEST bdev_verify 00:06:35.369 ************************************ 00:06:35.369 01:19:31 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:06:35.369 01:19:31 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:35.369 01:19:31 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:06:35.369 01:19:31 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:35.369 01:19:31 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:35.369 ************************************ 00:06:35.369 START TEST bdev_verify_big_io 00:06:35.369 ************************************ 00:06:35.369 01:19:31 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:35.369 [2024-09-28 01:19:31.186825] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:06:35.369 [2024-09-28 01:19:31.186937] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62312 ] 00:06:35.630 [2024-09-28 01:19:31.339892] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:35.630 [2024-09-28 01:19:31.516693] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:35.630 [2024-09-28 01:19:31.516771] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.571 Running I/O for 5 seconds... 
00:06:42.734 1097.00 IOPS, 68.56 MiB/s 2397.50 IOPS, 149.84 MiB/s 3202.00 IOPS, 200.12 MiB/s 00:06:42.734 Latency(us) 00:06:42.734 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:42.734 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:42.734 Verification LBA range: start 0x0 length 0xbd0b 00:06:42.734 Nvme0n1 : 5.94 103.50 6.47 0.00 0.00 1171316.94 19559.98 1426063.36 00:06:42.734 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:42.734 Verification LBA range: start 0xbd0b length 0xbd0b 00:06:42.734 Nvme0n1 : 5.81 104.59 6.54 0.00 0.00 1185791.83 33070.47 1400252.26 00:06:42.734 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:42.734 Verification LBA range: start 0x0 length 0x4ff8 00:06:42.734 Nvme1n1p1 : 5.94 103.81 6.49 0.00 0.00 1126850.54 106470.79 1206669.00 00:06:42.734 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:42.734 Verification LBA range: start 0x4ff8 length 0x4ff8 00:06:42.734 Nvme1n1p1 : 5.95 102.92 6.43 0.00 0.00 1140663.80 78239.90 1206669.00 00:06:42.734 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:42.734 Verification LBA range: start 0x0 length 0x4ff7 00:06:42.734 Nvme1n1p2 : 5.95 106.64 6.66 0.00 0.00 1070856.65 130668.70 1006632.96 00:06:42.734 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:42.734 Verification LBA range: start 0x4ff7 length 0x4ff7 00:06:42.734 Nvme1n1p2 : 5.95 107.59 6.72 0.00 0.00 1072991.94 130668.70 1103424.59 00:06:42.734 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:42.734 Verification LBA range: start 0x0 length 0x8000 00:06:42.734 Nvme2n1 : 6.08 109.44 6.84 0.00 0.00 1005962.82 128248.91 1025991.29 00:06:42.734 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:42.734 Verification LBA range: start 0x8000 length 0x8000 00:06:42.734 Nvme2n1 : 6.05 105.91 6.62 0.00 0.00 1031896.37 79046.50 1135688.47 00:06:42.734 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:42.734 Verification LBA range: start 0x0 length 0x8000 00:06:42.734 Nvme2n2 : 6.17 118.89 7.43 0.00 0.00 907850.22 16232.76 1058255.16 00:06:42.734 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:42.734 Verification LBA range: start 0x8000 length 0x8000 00:06:42.734 Nvme2n2 : 6.09 115.53 7.22 0.00 0.00 934804.30 39119.95 1174405.12 00:06:42.734 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:42.734 Verification LBA range: start 0x0 length 0x8000 00:06:42.734 Nvme2n3 : 6.19 115.14 7.20 0.00 0.00 902521.50 24903.68 1974549.27 00:06:42.734 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:42.734 Verification LBA range: start 0x8000 length 0x8000 00:06:42.734 Nvme2n3 : 6.16 124.66 7.79 0.00 0.00 839049.85 36296.86 1200216.22 00:06:42.734 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:42.734 Verification LBA range: start 0x0 length 0x2000 00:06:42.734 Nvme3n1 : 6.28 137.49 8.59 0.00 0.00 730645.01 1008.25 2026171.47 00:06:42.734 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:42.734 Verification LBA range: start 0x2000 length 0x2000 00:06:42.734 Nvme3n1 : 6.26 146.75 9.17 0.00 0.00 690131.53 189.05 1226027.32 00:06:42.735 
=================================================================================================================== 00:06:42.735 Total : 1602.85 100.18 0.00 0.00 966603.55 189.05 2026171.47 00:06:44.643 00:06:44.643 real 0m8.986s 00:06:44.643 user 0m16.692s 00:06:44.643 sys 0m0.244s 00:06:44.643 ************************************ 00:06:44.643 END TEST bdev_verify_big_io 00:06:44.643 01:19:40 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:44.643 01:19:40 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:06:44.643 ************************************ 00:06:44.643 01:19:40 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:44.643 01:19:40 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:06:44.643 01:19:40 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:44.643 01:19:40 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:44.643 ************************************ 00:06:44.643 START TEST bdev_write_zeroes 00:06:44.643 ************************************ 00:06:44.643 01:19:40 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:44.643 [2024-09-28 01:19:40.238889] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:06:44.643 [2024-09-28 01:19:40.239004] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62429 ] 00:06:44.643 [2024-09-28 01:19:40.385030] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.643 [2024-09-28 01:19:40.562567] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.216 Running I/O for 1 seconds... 
00:06:46.595 62272.00 IOPS, 243.25 MiB/s 00:06:46.595 Latency(us) 00:06:46.595 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:46.595 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:46.595 Nvme0n1 : 1.03 8850.82 34.57 0.00 0.00 14425.83 7461.02 27625.94 00:06:46.595 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:46.595 Nvme1n1p1 : 1.03 8839.80 34.53 0.00 0.00 14422.77 10637.00 26819.35 00:06:46.595 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:46.595 Nvme1n1p2 : 1.03 8829.08 34.49 0.00 0.00 14309.46 10637.00 24298.73 00:06:46.595 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:46.595 Nvme2n1 : 1.03 8819.13 34.45 0.00 0.00 14272.80 9679.16 23693.78 00:06:46.595 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:46.595 Nvme2n2 : 1.03 8847.97 34.56 0.00 0.00 14213.86 8418.86 23391.31 00:06:46.595 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:46.595 Nvme2n3 : 1.03 8806.42 34.40 0.00 0.00 14242.03 7410.61 23996.26 00:06:46.595 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:46.595 Nvme3n1 : 1.03 8796.56 34.36 0.00 0.00 14240.27 7662.67 25811.10 00:06:46.595 =================================================================================================================== 00:06:46.595 Total : 61789.80 241.37 0.00 0.00 14303.77 7410.61 27625.94 00:06:47.165 00:06:47.165 real 0m2.849s 00:06:47.165 user 0m2.546s 00:06:47.165 sys 0m0.188s 00:06:47.165 01:19:43 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:47.165 01:19:43 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:06:47.165 ************************************ 00:06:47.165 END TEST bdev_write_zeroes 00:06:47.165 ************************************ 00:06:47.165 01:19:43 blockdev_nvme_gpt -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:47.165 01:19:43 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:06:47.165 01:19:43 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:47.165 01:19:43 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:47.165 ************************************ 00:06:47.165 START TEST bdev_json_nonenclosed 00:06:47.165 ************************************ 00:06:47.165 01:19:43 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:47.425 [2024-09-28 01:19:43.153454] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:06:47.425 [2024-09-28 01:19:43.153578] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62482 ] 00:06:47.425 [2024-09-28 01:19:43.304931] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.685 [2024-09-28 01:19:43.526319] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.685 [2024-09-28 01:19:43.526424] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:06:47.685 [2024-09-28 01:19:43.526444] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:47.685 [2024-09-28 01:19:43.526454] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:47.945 00:06:47.945 real 0m0.753s 00:06:47.945 user 0m0.514s 00:06:47.945 sys 0m0.131s 00:06:47.945 01:19:43 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:47.945 ************************************ 00:06:47.945 END TEST bdev_json_nonenclosed 00:06:47.945 ************************************ 00:06:47.945 01:19:43 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:06:48.206 01:19:43 blockdev_nvme_gpt -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:48.206 01:19:43 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:06:48.206 01:19:43 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:48.206 01:19:43 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:48.206 ************************************ 00:06:48.206 START TEST bdev_json_nonarray 00:06:48.206 ************************************ 00:06:48.206 01:19:43 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:48.206 [2024-09-28 01:19:43.973107] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:06:48.206 [2024-09-28 01:19:43.973283] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62513 ] 00:06:48.206 [2024-09-28 01:19:44.128751] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.466 [2024-09-28 01:19:44.357247] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.466 [2024-09-28 01:19:44.357361] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:06:48.466 [2024-09-28 01:19:44.357381] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:48.466 [2024-09-28 01:19:44.357392] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:49.038 00:06:49.038 real 0m0.775s 00:06:49.038 user 0m0.537s 00:06:49.038 sys 0m0.130s 00:06:49.038 ************************************ 00:06:49.038 END TEST bdev_json_nonarray 00:06:49.038 ************************************ 00:06:49.038 01:19:44 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:49.038 01:19:44 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:06:49.038 01:19:44 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # [[ gpt == bdev ]] 00:06:49.038 01:19:44 blockdev_nvme_gpt -- bdev/blockdev.sh@793 -- # [[ gpt == gpt ]] 00:06:49.038 01:19:44 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:06:49.038 01:19:44 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:49.038 01:19:44 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:49.038 01:19:44 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:49.038 ************************************ 00:06:49.038 START TEST bdev_gpt_uuid 00:06:49.038 ************************************ 00:06:49.038 01:19:44 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1125 -- # bdev_gpt_uuid 00:06:49.038 01:19:44 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@613 -- # local bdev 00:06:49.038 01:19:44 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@615 -- # start_spdk_tgt 00:06:49.038 01:19:44 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=62544 00:06:49.038 01:19:44 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:06:49.038 01:19:44 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 62544 00:06:49.038 01:19:44 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@831 -- # '[' -z 62544 ']' 00:06:49.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:49.038 01:19:44 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:49.038 01:19:44 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:49.038 01:19:44 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:49.038 01:19:44 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:49.038 01:19:44 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:06:49.038 01:19:44 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:06:49.038 [2024-09-28 01:19:44.821460] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:06:49.038 [2024-09-28 01:19:44.821615] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62544 ] 00:06:49.299 [2024-09-28 01:19:44.973287] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.299 [2024-09-28 01:19:45.197232] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.238 01:19:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:50.238 01:19:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # return 0 00:06:50.238 01:19:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@617 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:50.238 01:19:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:50.238 01:19:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:06:50.238 Some configs were skipped because the RPC state that can call them passed over. 00:06:50.238 01:19:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:50.238 01:19:46 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd bdev_wait_for_examine 00:06:50.238 01:19:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:50.238 01:19:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:06:50.559 01:19:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:50.559 01:19:46 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:06:50.559 01:19:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:50.559 01:19:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:06:50.559 01:19:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:50.559 01:19:46 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # bdev='[ 00:06:50.559 { 00:06:50.559 "name": "Nvme1n1p1", 00:06:50.559 "aliases": [ 00:06:50.559 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:06:50.559 ], 00:06:50.559 "product_name": "GPT Disk", 00:06:50.559 "block_size": 4096, 00:06:50.559 "num_blocks": 655104, 00:06:50.559 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:06:50.559 "assigned_rate_limits": { 00:06:50.559 "rw_ios_per_sec": 0, 00:06:50.559 "rw_mbytes_per_sec": 0, 00:06:50.559 "r_mbytes_per_sec": 0, 00:06:50.559 "w_mbytes_per_sec": 0 00:06:50.559 }, 00:06:50.559 "claimed": false, 00:06:50.559 "zoned": false, 00:06:50.559 "supported_io_types": { 00:06:50.559 "read": true, 00:06:50.559 "write": true, 00:06:50.559 "unmap": true, 00:06:50.559 "flush": true, 00:06:50.559 "reset": true, 00:06:50.559 "nvme_admin": false, 00:06:50.559 "nvme_io": false, 00:06:50.559 "nvme_io_md": false, 00:06:50.559 "write_zeroes": true, 00:06:50.559 "zcopy": false, 00:06:50.559 "get_zone_info": false, 00:06:50.559 "zone_management": false, 00:06:50.559 "zone_append": false, 00:06:50.559 "compare": true, 00:06:50.559 "compare_and_write": false, 00:06:50.559 "abort": true, 00:06:50.559 "seek_hole": false, 00:06:50.559 "seek_data": false, 00:06:50.559 "copy": true, 00:06:50.559 "nvme_iov_md": false 00:06:50.559 }, 00:06:50.559 "driver_specific": { 
00:06:50.559 "gpt": { 00:06:50.559 "base_bdev": "Nvme1n1", 00:06:50.559 "offset_blocks": 256, 00:06:50.559 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:06:50.559 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:06:50.559 "partition_name": "SPDK_TEST_first" 00:06:50.559 } 00:06:50.559 } 00:06:50.559 } 00:06:50.559 ]' 00:06:50.559 01:19:46 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # jq -r length 00:06:50.559 01:19:46 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # [[ 1 == \1 ]] 00:06:50.559 01:19:46 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r '.[0].aliases[0]' 00:06:50.559 01:19:46 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:06:50.559 01:19:46 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:06:50.559 01:19:46 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:06:50.559 01:19:46 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:06:50.559 01:19:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:50.559 01:19:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:06:50.559 01:19:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:50.559 01:19:46 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # bdev='[ 00:06:50.559 { 00:06:50.559 "name": "Nvme1n1p2", 00:06:50.559 "aliases": [ 00:06:50.559 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:06:50.559 ], 00:06:50.559 "product_name": "GPT Disk", 00:06:50.559 "block_size": 4096, 00:06:50.559 "num_blocks": 655103, 00:06:50.559 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:06:50.559 "assigned_rate_limits": { 00:06:50.559 "rw_ios_per_sec": 0, 00:06:50.559 "rw_mbytes_per_sec": 0, 00:06:50.559 "r_mbytes_per_sec": 0, 00:06:50.559 "w_mbytes_per_sec": 0 00:06:50.559 }, 00:06:50.559 "claimed": false, 00:06:50.559 "zoned": false, 00:06:50.559 "supported_io_types": { 00:06:50.559 "read": true, 00:06:50.559 "write": true, 00:06:50.559 "unmap": true, 00:06:50.559 "flush": true, 00:06:50.559 "reset": true, 00:06:50.559 "nvme_admin": false, 00:06:50.559 "nvme_io": false, 00:06:50.559 "nvme_io_md": false, 00:06:50.559 "write_zeroes": true, 00:06:50.559 "zcopy": false, 00:06:50.559 "get_zone_info": false, 00:06:50.559 "zone_management": false, 00:06:50.559 "zone_append": false, 00:06:50.559 "compare": true, 00:06:50.559 "compare_and_write": false, 00:06:50.559 "abort": true, 00:06:50.559 "seek_hole": false, 00:06:50.559 "seek_data": false, 00:06:50.559 "copy": true, 00:06:50.559 "nvme_iov_md": false 00:06:50.559 }, 00:06:50.559 "driver_specific": { 00:06:50.559 "gpt": { 00:06:50.559 "base_bdev": "Nvme1n1", 00:06:50.559 "offset_blocks": 655360, 00:06:50.559 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:06:50.559 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:06:50.559 "partition_name": "SPDK_TEST_second" 00:06:50.559 } 00:06:50.559 } 00:06:50.559 } 00:06:50.559 ]' 00:06:50.559 01:19:46 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # jq -r length 00:06:50.559 01:19:46 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@626 -- # [[ 1 == \1 ]] 00:06:50.559 01:19:46 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r '.[0].aliases[0]' 00:06:50.559 01:19:46 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:06:50.559 01:19:46 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:06:50.559 01:19:46 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:06:50.559 01:19:46 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@630 -- # killprocess 62544 00:06:50.559 01:19:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@950 -- # '[' -z 62544 ']' 00:06:50.559 01:19:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # kill -0 62544 00:06:50.559 01:19:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@955 -- # uname 00:06:50.559 01:19:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:50.559 01:19:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62544 00:06:50.559 01:19:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:50.559 killing process with pid 62544 00:06:50.559 01:19:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:50.559 01:19:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62544' 00:06:50.559 01:19:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@969 -- # kill 62544 00:06:50.559 01:19:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@974 -- # wait 62544 00:06:52.484 00:06:52.484 real 0m3.447s 00:06:52.484 user 0m3.485s 00:06:52.485 sys 0m0.456s 00:06:52.485 ************************************ 00:06:52.485 END TEST bdev_gpt_uuid 00:06:52.485 ************************************ 00:06:52.485 01:19:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:52.485 01:19:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:06:52.485 01:19:48 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # [[ gpt == crypto_sw ]] 00:06:52.485 01:19:48 blockdev_nvme_gpt -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:06:52.485 01:19:48 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # cleanup 00:06:52.485 01:19:48 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:06:52.485 01:19:48 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:52.485 01:19:48 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:06:52.485 01:19:48 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:06:52.485 01:19:48 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:06:52.485 01:19:48 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:52.743 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:53.001 Waiting for block devices as requested 00:06:53.001 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:06:53.001 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:06:53.001 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:06:53.001 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:06:58.301 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:06:58.301 01:19:53 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:06:58.301 01:19:53 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:06:58.301 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:06:58.301 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:06:58.301 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:06:58.301 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:06:58.301 01:19:54 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:06:58.301 00:06:58.301 real 0m56.968s 00:06:58.301 user 1m12.159s 00:06:58.301 sys 0m7.733s 00:06:58.301 01:19:54 blockdev_nvme_gpt -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:58.301 ************************************ 00:06:58.301 END TEST blockdev_nvme_gpt 00:06:58.301 ************************************ 00:06:58.301 01:19:54 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:58.560 01:19:54 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:06:58.560 01:19:54 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:58.560 01:19:54 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:58.560 01:19:54 -- common/autotest_common.sh@10 -- # set +x 00:06:58.560 ************************************ 00:06:58.560 START TEST nvme 00:06:58.560 ************************************ 00:06:58.560 01:19:54 nvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:06:58.560 * Looking for test storage... 00:06:58.560 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:06:58.560 01:19:54 nvme -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:58.560 01:19:54 nvme -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:58.560 01:19:54 nvme -- common/autotest_common.sh@1681 -- # lcov --version 00:06:58.560 01:19:54 nvme -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:58.560 01:19:54 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:58.560 01:19:54 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:58.560 01:19:54 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:58.560 01:19:54 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:06:58.560 01:19:54 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:06:58.560 01:19:54 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:06:58.560 01:19:54 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:06:58.560 01:19:54 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:06:58.560 01:19:54 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:06:58.560 01:19:54 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:06:58.560 01:19:54 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:58.560 01:19:54 nvme -- scripts/common.sh@344 -- # case "$op" in 00:06:58.560 01:19:54 nvme -- scripts/common.sh@345 -- # : 1 00:06:58.560 01:19:54 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:58.560 01:19:54 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:58.560 01:19:54 nvme -- scripts/common.sh@365 -- # decimal 1 00:06:58.560 01:19:54 nvme -- scripts/common.sh@353 -- # local d=1 00:06:58.560 01:19:54 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:58.560 01:19:54 nvme -- scripts/common.sh@355 -- # echo 1 00:06:58.560 01:19:54 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:06:58.560 01:19:54 nvme -- scripts/common.sh@366 -- # decimal 2 00:06:58.560 01:19:54 nvme -- scripts/common.sh@353 -- # local d=2 00:06:58.560 01:19:54 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:58.560 01:19:54 nvme -- scripts/common.sh@355 -- # echo 2 00:06:58.560 01:19:54 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:06:58.560 01:19:54 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:58.560 01:19:54 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:58.560 01:19:54 nvme -- scripts/common.sh@368 -- # return 0 00:06:58.560 01:19:54 nvme -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:58.560 01:19:54 nvme -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:58.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.560 --rc genhtml_branch_coverage=1 00:06:58.560 --rc genhtml_function_coverage=1 00:06:58.560 --rc genhtml_legend=1 00:06:58.560 --rc geninfo_all_blocks=1 00:06:58.560 --rc geninfo_unexecuted_blocks=1 00:06:58.560 00:06:58.560 ' 00:06:58.560 01:19:54 nvme -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:58.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.560 --rc genhtml_branch_coverage=1 00:06:58.560 --rc genhtml_function_coverage=1 00:06:58.560 --rc genhtml_legend=1 00:06:58.560 --rc geninfo_all_blocks=1 00:06:58.560 --rc geninfo_unexecuted_blocks=1 00:06:58.560 00:06:58.560 ' 00:06:58.560 01:19:54 nvme -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:58.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.560 --rc genhtml_branch_coverage=1 00:06:58.560 --rc genhtml_function_coverage=1 00:06:58.560 --rc genhtml_legend=1 00:06:58.560 --rc geninfo_all_blocks=1 00:06:58.560 --rc geninfo_unexecuted_blocks=1 00:06:58.560 00:06:58.560 ' 00:06:58.560 01:19:54 nvme -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:58.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.560 --rc genhtml_branch_coverage=1 00:06:58.560 --rc genhtml_function_coverage=1 00:06:58.560 --rc genhtml_legend=1 00:06:58.560 --rc geninfo_all_blocks=1 00:06:58.560 --rc geninfo_unexecuted_blocks=1 00:06:58.560 00:06:58.560 ' 00:06:58.560 01:19:54 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:59.128 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:59.387 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:59.387 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:59.387 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:06:59.648 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:06:59.648 01:19:55 nvme -- nvme/nvme.sh@79 -- # uname 00:06:59.648 01:19:55 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:06:59.648 01:19:55 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:06:59.648 01:19:55 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:06:59.648 01:19:55 nvme -- common/autotest_common.sh@1082 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:06:59.648 01:19:55 nvme -- 
common/autotest_common.sh@1068 -- # _randomize_va_space=2 00:06:59.648 01:19:55 nvme -- common/autotest_common.sh@1069 -- # echo 0 00:06:59.648 Waiting for stub to ready for secondary processes... 00:06:59.648 01:19:55 nvme -- common/autotest_common.sh@1071 -- # stubpid=63179 00:06:59.648 01:19:55 nvme -- common/autotest_common.sh@1072 -- # echo Waiting for stub to ready for secondary processes... 00:06:59.648 01:19:55 nvme -- common/autotest_common.sh@1073 -- # '[' -e /var/run/spdk_stub0 ']' 00:06:59.648 01:19:55 nvme -- common/autotest_common.sh@1075 -- # [[ -e /proc/63179 ]] 00:06:59.648 01:19:55 nvme -- common/autotest_common.sh@1076 -- # sleep 1s 00:06:59.648 01:19:55 nvme -- common/autotest_common.sh@1070 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:06:59.648 [2024-09-28 01:19:55.446650] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:06:59.649 [2024-09-28 01:19:55.446770] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:07:00.590 [2024-09-28 01:19:56.205145] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:00.590 [2024-09-28 01:19:56.378690] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:00.590 [2024-09-28 01:19:56.379033] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:07:00.590 [2024-09-28 01:19:56.379122] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:00.590 [2024-09-28 01:19:56.392171] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:07:00.590 [2024-09-28 01:19:56.392221] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:00.590 [2024-09-28 01:19:56.405400] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:07:00.590 [2024-09-28 01:19:56.405632] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:07:00.590 [2024-09-28 01:19:56.408019] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:00.590 [2024-09-28 01:19:56.408242] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:07:00.590 [2024-09-28 01:19:56.408325] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:07:00.590 [2024-09-28 01:19:56.410350] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:00.590 [2024-09-28 01:19:56.410529] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:07:00.590 [2024-09-28 01:19:56.410600] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:07:00.590 [2024-09-28 01:19:56.412668] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:00.590 [2024-09-28 01:19:56.412813] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:07:00.590 [2024-09-28 01:19:56.412879] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:07:00.590 [2024-09-28 01:19:56.412932] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:07:00.590 [2024-09-28 01:19:56.412966] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:07:00.590 01:19:56 nvme -- common/autotest_common.sh@1073 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:00.590 done. 00:07:00.590 01:19:56 nvme -- common/autotest_common.sh@1078 -- # echo done. 00:07:00.590 01:19:56 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:07:00.590 01:19:56 nvme -- common/autotest_common.sh@1101 -- # '[' 10 -le 1 ']' 00:07:00.590 01:19:56 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:00.590 01:19:56 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:00.590 ************************************ 00:07:00.590 START TEST nvme_reset 00:07:00.590 ************************************ 00:07:00.591 01:19:56 nvme.nvme_reset -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:07:00.858 Initializing NVMe Controllers 00:07:00.858 Skipping QEMU NVMe SSD at 0000:00:10.0 00:07:00.858 Skipping QEMU NVMe SSD at 0000:00:11.0 00:07:00.858 Skipping QEMU NVMe SSD at 0000:00:13.0 00:07:00.858 Skipping QEMU NVMe SSD at 0000:00:12.0 00:07:00.858 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:07:00.858 00:07:00.858 real 0m0.180s 00:07:00.858 user 0m0.059s 00:07:00.858 sys 0m0.082s 00:07:00.858 01:19:56 nvme.nvme_reset -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:00.858 ************************************ 00:07:00.858 01:19:56 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:07:00.858 END TEST nvme_reset 00:07:00.858 ************************************ 00:07:00.858 01:19:56 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:07:00.858 01:19:56 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:00.858 01:19:56 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:00.858 01:19:56 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:00.858 ************************************ 00:07:00.858 START TEST nvme_identify 00:07:00.858 ************************************ 00:07:00.858 01:19:56 nvme.nvme_identify -- common/autotest_common.sh@1125 -- # nvme_identify 00:07:00.858 01:19:56 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:07:00.859 01:19:56 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:07:00.859 01:19:56 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:07:00.859 01:19:56 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:07:00.859 01:19:56 nvme.nvme_identify -- common/autotest_common.sh@1496 -- # bdfs=() 00:07:00.859 01:19:56 nvme.nvme_identify -- common/autotest_common.sh@1496 -- # local bdfs 00:07:00.859 01:19:56 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:00.859 01:19:56 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:00.859 01:19:56 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:07:00.859 01:19:56 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:07:00.859 01:19:56 nvme.nvme_identify -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:07:00.859 01:19:56 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:07:01.156 
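Note on the wipefs output earlier in this run: the 8 bytes it erased at offsets 0x1000 and 0x13ffff000 (45 46 49 20 50 41 52 54) are the ASCII GPT header signature, stored once at LBA 1 and once in the backup header at the last LBA of this 4096-byte-sector namespace, and the 55 aa pair at offset 0x1fe is the boot signature of the protective MBR. A minimal sketch (not part of the harness) to decode the signature bytes:

  # Prints "EFI PART", the GPT header signature wipefs reported above.
  printf '\x45\x46\x49\x20\x50\x41\x52\x54\n'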
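The get_nvme_bdfs helper above builds the controller list by piping gen_nvme.sh JSON through jq. A rough sysfs equivalent is sketched below (not the harness helper; it assumes the controllers are bound to the kernel nvme driver rather than to uio_pci_generic as during parts of this test):

  # List the PCI addresses (BDFs) of kernel-attached NVMe controllers.
  for ctrl in /sys/class/nvme/nvme*; do
    [[ -e $ctrl ]] || continue
    basename "$(readlink -f "$ctrl/device")"   # e.g. 0000:00:10.0
  done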
===================================================== 00:07:01.156 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:01.156 ===================================================== 00:07:01.156 Controller Capabilities/Features 00:07:01.156 ================================ 00:07:01.156 Vendor ID: 1b36 00:07:01.156 Subsystem Vendor ID: 1af4 00:07:01.156 Serial Number: 12340 00:07:01.156 Model Number: QEMU NVMe Ctrl 00:07:01.156 Firmware Version: 8.0.0 00:07:01.156 Recommended Arb Burst: 6 00:07:01.156 IEEE OUI Identifier: 00 54 52 00:07:01.156 Multi-path I/O 00:07:01.156 May have multiple subsystem ports: No 00:07:01.156 May have multiple controllers: No 00:07:01.156 Associated with SR-IOV VF: No 00:07:01.156 Max Data Transfer Size: 524288 00:07:01.156 Max Number of Namespaces: 256 00:07:01.156 Max Number of I/O Queues: 64 00:07:01.156 NVMe Specification Version (VS): 1.4 00:07:01.156 NVMe Specification Version (Identify): 1.4 00:07:01.156 Maximum Queue Entries: 2048 00:07:01.156 Contiguous Queues Required: Yes 00:07:01.156 Arbitration Mechanisms Supported 00:07:01.156 Weighted Round Robin: Not Supported 00:07:01.156 Vendor Specific: Not Supported 00:07:01.156 Reset Timeout: 7500 ms 00:07:01.156 Doorbell Stride: 4 bytes 00:07:01.156 NVM Subsystem Reset: Not Supported 00:07:01.156 Command Sets Supported 00:07:01.156 NVM Command Set: Supported 00:07:01.156 Boot Partition: Not Supported 00:07:01.156 Memory Page Size Minimum: 4096 bytes 00:07:01.156 Memory Page Size Maximum: 65536 bytes 00:07:01.156 Persistent Memory Region: Not Supported 00:07:01.156 Optional Asynchronous Events Supported 00:07:01.156 Namespace Attribute Notices: Supported 00:07:01.156 Firmware Activation Notices: Not Supported 00:07:01.156 ANA Change Notices: Not Supported 00:07:01.156 PLE Aggregate Log Change Notices: Not Supported 00:07:01.156 LBA Status Info Alert Notices: Not Supported 00:07:01.156 EGE Aggregate Log Change Notices: Not Supported 00:07:01.156 Normal NVM Subsystem Shutdown event: Not Supported 00:07:01.156 Zone Descriptor Change Notices: Not Supported 00:07:01.156 Discovery Log Change Notices: Not Supported 00:07:01.156 Controller Attributes 00:07:01.156 128-bit Host Identifier: Not Supported 00:07:01.156 Non-Operational Permissive Mode: Not Supported 00:07:01.156 NVM Sets: Not Supported 00:07:01.156 Read Recovery Levels: Not Supported 00:07:01.156 Endurance Groups: Not Supported 00:07:01.156 Predictable Latency Mode: Not Supported 00:07:01.156 Traffic Based Keep ALive: Not Supported 00:07:01.156 Namespace Granularity: Not Supported 00:07:01.156 SQ Associations: Not Supported 00:07:01.156 UUID List: Not Supported 00:07:01.156 Multi-Domain Subsystem: Not Supported 00:07:01.156 Fixed Capacity Management: Not Supported 00:07:01.156 Variable Capacity Management: Not Supported 00:07:01.156 Delete Endurance Group: Not Supported 00:07:01.156 Delete NVM Set: Not Supported 00:07:01.156 Extended LBA Formats Supported: Supported 00:07:01.156 Flexible Data Placement Supported: Not Supported 00:07:01.156 00:07:01.156 Controller Memory Buffer Support 00:07:01.156 ================================ 00:07:01.156 Supported: No 00:07:01.156 00:07:01.156 Persistent Memory Region Support 00:07:01.156 ================================ 00:07:01.156 Supported: No 00:07:01.156 00:07:01.156 Admin Command Set Attributes 00:07:01.156 ============================ 00:07:01.156 Security Send/Receive: Not Supported 00:07:01.156 Format NVM: Supported 00:07:01.156 Firmware Activate/Download: Not Supported 00:07:01.156 Namespace Management: 
Supported 00:07:01.156 Device Self-Test: Not Supported 00:07:01.156 Directives: Supported 00:07:01.156 NVMe-MI: Not Supported 00:07:01.156 Virtualization Management: Not Supported 00:07:01.156 Doorbell Buffer Config: Supported 00:07:01.156 Get LBA Status Capability: Not Supported 00:07:01.156 Command & Feature Lockdown Capability: Not Supported 00:07:01.156 Abort Command Limit: 4 00:07:01.156 Async Event Request Limit: 4 00:07:01.156 Number of Firmware Slots: N/A 00:07:01.156 Firmware Slot 1 Read-Only: N/A 00:07:01.156 Firmware Activation Without Reset: N/A 00:07:01.156 Multiple Update Detection Support: N/A 00:07:01.156 Firmware Update Granularity: No Information Provided 00:07:01.156 Per-Namespace SMART Log: Yes 00:07:01.157 Asymmetric Namespace Access Log Page: Not Supported 00:07:01.157 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:07:01.157 Command Effects Log Page: Supported 00:07:01.157 Get Log Page Extended Data: Supported 00:07:01.157 Telemetry Log Pages: Not Supported 00:07:01.157 Persistent Event Log Pages: Not Supported 00:07:01.157 Supported Log Pages Log Page: May Support 00:07:01.157 Commands Supported & Effects Log Page: Not Supported 00:07:01.157 Feature Identifiers & Effects Log Page:May Support 00:07:01.157 NVMe-MI Commands & Effects Log Page: May Support 00:07:01.157 Data Area 4 for Telemetry Log: Not Supported 00:07:01.157 Error Log Page Entries Supported: 1 00:07:01.157 Keep Alive: Not Supported 00:07:01.157 00:07:01.157 NVM Command Set Attributes 00:07:01.157 ========================== 00:07:01.157 Submission Queue Entry Size 00:07:01.157 Max: 64 00:07:01.157 Min: 64 00:07:01.157 Completion Queue Entry Size 00:07:01.157 Max: 16 00:07:01.157 Min: 16 00:07:01.157 Number of Namespaces: 256 00:07:01.157 Compare Command: Supported 00:07:01.157 Write Uncorrectable Command: Not Supported 00:07:01.157 Dataset Management Command: Supported 00:07:01.157 Write Zeroes Command: Supported 00:07:01.157 Set Features Save Field: Supported 00:07:01.157 Reservations: Not Supported 00:07:01.157 Timestamp: Supported 00:07:01.157 Copy: Supported 00:07:01.157 Volatile Write Cache: Present 00:07:01.157 Atomic Write Unit (Normal): 1 00:07:01.157 Atomic Write Unit (PFail): 1 00:07:01.157 Atomic Compare & Write Unit: 1 00:07:01.157 Fused Compare & Write: Not Supported 00:07:01.157 Scatter-Gather List 00:07:01.157 SGL Command Set: Supported 00:07:01.157 SGL Keyed: Not Supported 00:07:01.157 SGL Bit Bucket Descriptor: Not Supported 00:07:01.157 SGL Metadata Pointer: Not Supported 00:07:01.157 Oversized SGL: Not Supported 00:07:01.157 SGL Metadata Address: Not Supported 00:07:01.157 SGL Offset: Not Supported 00:07:01.157 Transport SGL Data Block: Not Supported 00:07:01.157 Replay Protected Memory Block: Not Supported 00:07:01.157 00:07:01.157 Firmware Slot Information 00:07:01.157 ========================= 00:07:01.157 Active slot: 1 00:07:01.157 Slot 1 Firmware Revision: 1.0 00:07:01.157 00:07:01.157 00:07:01.157 Commands Supported and Effects 00:07:01.157 ============================== 00:07:01.157 Admin Commands 00:07:01.157 -------------- 00:07:01.157 Delete I/O Submission Queue (00h): Supported 00:07:01.157 Create I/O Submission Queue (01h): Supported 00:07:01.157 Get Log Page (02h): Supported 00:07:01.157 Delete I/O Completion Queue (04h): Supported 00:07:01.157 Create I/O Completion Queue (05h): Supported 00:07:01.157 Identify (06h): Supported 00:07:01.157 Abort (08h): Supported 00:07:01.157 Set Features (09h): Supported 00:07:01.157 Get Features (0Ah): Supported 00:07:01.157 Asynchronous 
Event Request (0Ch): Supported 00:07:01.157 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:01.157 Directive Send (19h): Supported 00:07:01.157 Directive Receive (1Ah): Supported 00:07:01.157 Virtualization Management (1Ch): Supported 00:07:01.157 Doorbell Buffer Config (7Ch): Supported 00:07:01.157 Format NVM (80h): Supported LBA-Change 00:07:01.157 I/O Commands 00:07:01.157 ------------ 00:07:01.157 Flush (00h): Supported LBA-Change 00:07:01.157 Write (01h): Supported LBA-Change 00:07:01.157 Read (02h): Supported 00:07:01.157 Compare (05h): Supported 00:07:01.157 Write Zeroes (08h): Supported LBA-Change 00:07:01.157 Dataset Management (09h): Supported LBA-Change 00:07:01.157 Unknown (0Ch): Supported 00:07:01.157 Unknown (12h): Supported 00:07:01.157 Copy (19h): Supported LBA-Change 00:07:01.157 Unknown (1Dh): Supported LBA-Change 00:07:01.157 00:07:01.157 Error Log 00:07:01.157 ========= 00:07:01.157 00:07:01.157 Arbitration 00:07:01.157 =========== 00:07:01.157 Arbitration Burst: no limit 00:07:01.157 00:07:01.157 Power Management 00:07:01.157 ================ 00:07:01.157 Number of Power States: 1 00:07:01.157 Current Power State: Power State #0 00:07:01.157 Power State #0: 00:07:01.157 Max Power: 25.00 W 00:07:01.157 Non-Operational State: Operational 00:07:01.157 Entry Latency: 16 microseconds 00:07:01.157 Exit Latency: 4 microseconds 00:07:01.157 Relative Read Throughput: 0 00:07:01.157 Relative Read Latency: 0 00:07:01.157 Relative Write Throughput: 0 00:07:01.157 Relative Write Latency: 0 00:07:01.157 Idle Power: Not Reported 00:07:01.157 Active Power: Not Reported 00:07:01.157 Non-Operational Permissive Mode: Not Supported 00:07:01.157 00:07:01.157 Health Information 00:07:01.157 ================== 00:07:01.157 Critical Warnings: 00:07:01.157 Available Spare Space: OK 00:07:01.157 Temperature: OK 00:07:01.157 Device Reliability: OK 00:07:01.157 Read Only: No 00:07:01.157 Volatile Memory Backup: OK 00:07:01.157 Current Temperature: 323 Kelvin (50 Celsius) 00:07:01.157 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:01.157 Available Spare: 0% 00:07:01.157 Available Spare Threshold: 0% 00:07:01.157 Life Percentage Used: 0% 00:07:01.157 Data Units Read: 670 00:07:01.157 Data Units Written: 598 00:07:01.157 Host Read Commands: 35732 00:07:01.157 Host Write Commands: 35518 00:07:01.157 Controller Busy Time: 0 minutes 00:07:01.157 Power Cycles: 0 00:07:01.157 Power On Hours: 0 hours 00:07:01.157 Unsafe Shutdowns: 0 00:07:01.157 Unrecoverable Media Errors: 0 00:07:01.157 Lifetime Error Log Entries: 0 00:07:01.157 Warning Temperature Time: 0 minutes 00:07:01.157 Critical Temperature Time: 0 minutes 00:07:01.157 00:07:01.157 Number of Queues 00:07:01.157 ================ 00:07:01.157 Number of I/O Submission Queues: 64 00:07:01.157 Number of I/O Completion Queues: 64 00:07:01.157 00:07:01.157 ZNS Specific Controller Data 00:07:01.157 ============================ 00:07:01.157 Zone Append Size Limit: 0 00:07:01.157 00:07:01.157 00:07:01.157 Active Namespaces 00:07:01.157 ================= 00:07:01.157 Namespace ID:1 00:07:01.157 Error Recovery Timeout: Unlimited 00:07:01.157 Command Set Identifier: NVM (00h) 00:07:01.157 Deallocate: Supported 00:07:01.157 Deallocated/Unwritten Error: Supported 00:07:01.157 Deallocated Read Value: All 0x00 00:07:01.157 Deallocate in Write Zeroes: Not Supported 00:07:01.157 Deallocated Guard Field: 0xFFFF 00:07:01.157 Flush: Supported 00:07:01.157 Reservation: Not Supported 00:07:01.157 Metadata Transferred as: Separate Metadata Buffer 
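The Health Information block above mirrors the controller's SMART/Health log (log page 02h); temperatures are reported in Kelvin, so the 323 Kelvin shown is 323 - 273 = 50 Celsius. When a controller is bound to the kernel driver instead of SPDK, roughly the same fields can be read with nvme-cli (a sketch; the device name is an assumption):

  # Dump the SMART/Health log of a kernel-attached controller.
  # Its temperature field is in Kelvin, as in the report above.
  nvme smart-log /dev/nvme0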
00:07:01.157 Namespace Sharing Capabilities: Private 00:07:01.157 Size (in LBAs): 1548666 (5GiB) 00:07:01.157 Capacity (in LBAs): 1548666 (5GiB) 00:07:01.157 Utilization (in LBAs): 1548666 (5GiB) 00:07:01.157 Thin Provisioning: Not Supported 00:07:01.157 Per-NS Atomic Units: No 00:07:01.157 Maximum Single Source Range Length: 128 00:07:01.157 Maximum Copy Length: 128 00:07:01.157 Maximum Source Range Count: 128 00:07:01.157 NGUID/EUI64 Never Reused: No 00:07:01.157 Namespace Write Protected: No 00:07:01.157 Number of LBA Formats: 8 00:07:01.157 Current LBA Format: LBA Format #07 00:07:01.157 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:01.157 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:01.157 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:01.157 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:01.157 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:01.157 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:01.157 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:01.157 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:01.157 00:07:01.158 NVM Specific Namespace Data 00:07:01.158 =========================== 00:07:01.158 Logical Block Storage Tag Mask: 0 00:07:01.158 Protection Information Capabilities: 00:07:01.158 16b Guard Protection Information Storage Tag Support: No 00:07:01.158 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:01.158 Storage Tag Check Read Support: No 00:07:01.158 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.158 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.158 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.158 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.158 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.158 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.158 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.158 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.158 ===================================================== 00:07:01.158 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:01.158 ===================================================== 00:07:01.158 Controller Capabilities/Features 00:07:01.158 ================================ 00:07:01.158 Vendor ID: 1b36 00:07:01.158 Subsystem Vendor ID: 1af4 00:07:01.158 Serial Number: 12341 00:07:01.158 Model Number: QEMU NVMe Ctrl 00:07:01.158 Firmware Version: 8.0.0 00:07:01.158 Recommended Arb Burst: 6 00:07:01.158 IEEE OUI Identifier: 00 54 52 00:07:01.158 Multi-path I/O 00:07:01.158 May have multiple subsystem ports: No 00:07:01.158 May have multiple controllers: No 00:07:01.158 Associated with SR-IOV VF: No 00:07:01.158 Max Data Transfer Size: 524288 00:07:01.158 Max Number of Namespaces: 256 00:07:01.158 Max Number of I/O Queues: 64 00:07:01.158 NVMe Specification Version (VS): 1.4 00:07:01.158 NVMe Specification Version (Identify): 1.4 00:07:01.158 Maximum Queue Entries: 2048 00:07:01.158 Contiguous Queues Required: Yes 00:07:01.158 Arbitration Mechanisms Supported 00:07:01.158 Weighted Round Robin: Not Supported 00:07:01.158 Vendor Specific: Not Supported 00:07:01.158 Reset Timeout: 7500 ms 00:07:01.158 Doorbell Stride: 
4 bytes 00:07:01.158 NVM Subsystem Reset: Not Supported 00:07:01.158 Command Sets Supported 00:07:01.158 NVM Command Set: Supported 00:07:01.158 Boot Partition: Not Supported 00:07:01.158 Memory Page Size Minimum: 4096 bytes 00:07:01.158 Memory Page Size Maximum: 65536 bytes 00:07:01.158 Persistent Memory Region: Not Supported 00:07:01.158 Optional Asynchronous Events Supported 00:07:01.158 Namespace Attribute Notices: Supported 00:07:01.158 Firmware Activation Notices: Not Supported 00:07:01.158 ANA Change Notices: Not Supported 00:07:01.158 PLE Aggregate Log Change Notices: Not Supported 00:07:01.158 LBA Status Info Alert Notices: Not Supported 00:07:01.158 EGE Aggregate Log Change Notices: Not Supported 00:07:01.158 Normal NVM Subsystem Shutdown event: Not Supported 00:07:01.158 Zone Descriptor Change Notices: Not Supported 00:07:01.158 Discovery Log Change Notices: Not Supported 00:07:01.158 Controller Attributes 00:07:01.158 128-bit Host Identifier: Not Supported 00:07:01.158 Non-Operational Permissive Mode: Not Supported 00:07:01.158 NVM Sets: Not Supported 00:07:01.158 Read Recovery Levels: Not Supported 00:07:01.158 Endurance Groups: Not Supported 00:07:01.158 Predictable Latency Mode: Not Supported 00:07:01.158 Traffic Based Keep ALive: Not Supported 00:07:01.158 Namespace Granularity: Not Supported 00:07:01.158 SQ Associations: Not Supported 00:07:01.158 UUID List: Not Supported 00:07:01.158 Multi-Domain Subsystem: Not Supported 00:07:01.158 Fixed Capacity Management: Not Supported 00:07:01.158 Variable Capacity Management: Not Supported 00:07:01.158 Delete Endurance Group: Not Supported 00:07:01.158 Delete NVM Set: Not Supported 00:07:01.158 Extended LBA Formats Supported: Supported 00:07:01.158 Flexible Data Placement Supported: Not Supported 00:07:01.158 00:07:01.158 Controller Memory Buffer Support 00:07:01.158 ================================ 00:07:01.158 Supported: No 00:07:01.158 00:07:01.158 Persistent Memory Region Support 00:07:01.158 ================================ 00:07:01.158 Supported: No 00:07:01.158 00:07:01.158 Admin Command Set Attributes 00:07:01.158 ============================ 00:07:01.158 Security Send/Receive: Not Supported 00:07:01.158 Format NVM: Supported 00:07:01.158 Firmware Activate/Download: Not Supported 00:07:01.158 Namespace Management: Supported 00:07:01.158 Device Self-Test: Not Supported 00:07:01.158 Directives: Supported 00:07:01.158 NVMe-MI: Not Supported 00:07:01.158 Virtualization Management: Not Supported 00:07:01.158 Doorbell Buffer Config: Supported 00:07:01.158 Get LBA Status Capability: Not Supported 00:07:01.158 Command & Feature Lockdown Capability: Not Supported 00:07:01.158 Abort Command Limit: 4 00:07:01.158 Async Event Request Limit: 4 00:07:01.158 Number of Firmware Slots: N/A 00:07:01.158 Firmware Slot 1 Read-Only: N/A 00:07:01.158 Firmware Activation Without Reset: N/A 00:07:01.158 Multiple Update Detection Support: N/A 00:07:01.158 Firmware Update Granularity: No Information Provided 00:07:01.158 Per-Namespace SMART Log: Yes 00:07:01.158 Asymmetric Namespace Access Log Page: Not Supported 00:07:01.158 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:07:01.158 Command Effects Log Page: Supported 00:07:01.158 Get Log Page Extended Data: Supported 00:07:01.158 Telemetry Log Pages: Not Supported 00:07:01.158 Persistent Event Log Pages: Not Supported 00:07:01.158 Supported Log Pages Log Page: May Support 00:07:01.158 Commands Supported & Effects Log Page: Not Supported 00:07:01.158 Feature Identifiers & Effects Log Page:May Support 
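The Max Data Transfer Size of 524288 bytes reported above follows from how NVMe encodes MDTS: as a power-of-two multiple of the minimum memory page size (CAP.MPSMIN). With the 4096-byte minimum page shown, an MDTS field of 7 yields 4096 * 2^7 = 524288. A one-line check of the arithmetic:

  # MDTS encodes the transfer limit as mps_min * 2^MDTS.
  echo $(( 4096 * (1 << 7) ))   # -> 524288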
00:07:01.158 NVMe-MI Commands & Effects Log Page: May Support 00:07:01.158 Data Area 4 for Telemetry Log: Not Supported 00:07:01.158 Error Log Page Entries Supported: 1 00:07:01.158 Keep Alive: Not Supported 00:07:01.158 00:07:01.158 NVM Command Set Attributes 00:07:01.158 ========================== 00:07:01.158 Submission Queue Entry Size 00:07:01.158 Max: 64 00:07:01.158 Min: 64 00:07:01.158 Completion Queue Entry Size 00:07:01.158 Max: 16 00:07:01.158 Min: 16 00:07:01.158 Number of Namespaces: 256 00:07:01.158 Compare Command: Supported 00:07:01.158 Write Uncorrectable Command: Not Supported 00:07:01.158 Dataset Management Command: Supported 00:07:01.158 Write Zeroes Command: Supported 00:07:01.158 Set Features Save Field: Supported 00:07:01.158 Reservations: Not Supported 00:07:01.158 Timestamp: Supported 00:07:01.158 Copy: Supported 00:07:01.158 Volatile Write Cache: Present 00:07:01.158 Atomic Write Unit (Normal): 1 00:07:01.158 Atomic Write Unit (PFail): 1 00:07:01.158 Atomic Compare & Write Unit: 1 00:07:01.158 Fused Compare & Write: Not Supported 00:07:01.158 Scatter-Gather List 00:07:01.158 SGL Command Set: Supported 00:07:01.158 SGL Keyed: Not Supported 00:07:01.158 SGL Bit Bucket Descriptor: Not Supported 00:07:01.158 SGL Metadata Pointer: Not Supported 00:07:01.158 Oversized SGL: Not Supported 00:07:01.158 SGL Metadata Address: Not Supported 00:07:01.158 SGL Offset: Not Supported 00:07:01.158 Transport SGL Data Block: Not Supported 00:07:01.158 Replay Protected Memory Block: Not Supported 00:07:01.158 00:07:01.158 Firmware Slot Information 00:07:01.158 ========================= 00:07:01.158 Active slot: 1 00:07:01.158 Slot 1 Firmware Revision: 1.0 00:07:01.158 00:07:01.158 00:07:01.158 Commands Supported and Effects 00:07:01.158 ============================== 00:07:01.158 Admin Commands 00:07:01.158 -------------- 00:07:01.158 Delete I/O Submission Queue (00h): Supported 00:07:01.158 Create I/O Submission Queue (01h): Supported 00:07:01.158 Get Log Page (02h): Supported 00:07:01.158 Delete I/O Completion Queue (04h): Supported 00:07:01.158 Create I/O Completion Queue (05h): Supported 00:07:01.158 Identify (06h): Supported 00:07:01.158 Abort (08h): Supported 00:07:01.158 Set Features (09h): Supported 00:07:01.158 Get Features (0Ah): Supported 00:07:01.158 Asynchronous Event Request (0Ch): Supported 00:07:01.158 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:01.158 Directive Send (19h): Supported 00:07:01.158 Directive Receive (1Ah): Supported 00:07:01.158 Virtualization Management (1Ch): Supported 00:07:01.158 Doorbell Buffer Config (7Ch): Supported 00:07:01.158 Format NVM (80h): Supported LBA-Change 00:07:01.158 I/O Commands 00:07:01.158 ------------ 00:07:01.158 Flush (00h): Supported LBA-Change 00:07:01.158 Write (01h): Supported LBA-Change 00:07:01.158 Read (02h): Supported 00:07:01.158 Compare (05h): Supported 00:07:01.158 Write Zeroes (08h): Supported LBA-Change 00:07:01.158 Dataset Management (09h): Supported LBA-Change 00:07:01.158 Unknown (0Ch): Supported 00:07:01.158 Unknown (12h): Supported 00:07:01.158 Copy (19h): Supported LBA-Change 00:07:01.159 Unknown (1Dh): Supported LBA-Change 00:07:01.159 00:07:01.159 Error Log 00:07:01.159 ========= 00:07:01.159 00:07:01.159 Arbitration 00:07:01.159 =========== 00:07:01.159 Arbitration Burst: no limit 00:07:01.159 00:07:01.159 Power Management 00:07:01.159 ================ 00:07:01.159 Number of Power States: 1 00:07:01.159 Current Power State: Power State #0 00:07:01.159 Power State #0: 00:07:01.159 Max 
Power: 25.00 W 00:07:01.159 Non-Operational State: Operational 00:07:01.159 Entry Latency: 16 microseconds 00:07:01.159 Exit Latency: 4 microseconds 00:07:01.159 Relative Read Throughput: 0 00:07:01.159 Relative Read Latency: 0 00:07:01.159 Relative Write Throughput: 0 00:07:01.159 Relative Write Latency: 0 00:07:01.159 Idle Power: Not Reported 00:07:01.159 Active Power: Not Reported 00:07:01.159 Non-Operational Permissive Mode: Not Supported 00:07:01.159 00:07:01.159 Health Information 00:07:01.159 ================== 00:07:01.159 Critical Warnings: 00:07:01.159 Available Spare Space: OK 00:07:01.159 Temperature: OK 00:07:01.159 Device Reliability: OK 00:07:01.159 Read Only: No 00:07:01.159 Volatile Memory Backup: OK 00:07:01.159 Current Temperature: 323 Kelvin (50 Celsius) 00:07:01.159 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:01.159 Available Spare: 0% 00:07:01.159 Available Spare Threshold: 0% 00:07:01.159 Life Percentage Used: 0% 00:07:01.159 Data Units Read: 1017 00:07:01.159 Data Units Written: 884 00:07:01.159 Host Read Commands: 52707 00:07:01.159 Host Write Commands: 51493 00:07:01.159 Controller Busy Time: 0 minutes 00:07:01.159 Power Cycles: 0 00:07:01.159 Power On Hours: 0 hours 00:07:01.159 Unsafe Shutdowns: 0 00:07:01.159 Unrecoverable Media Errors: 0 00:07:01.159 Lifetime Error Log Entries: 0 00:07:01.159 Warning Temperature Time: 0 minutes 00:07:01.159 Critical Temperature Time: 0 minutes 00:07:01.159 00:07:01.159 Number of Queues 00:07:01.159 ================ 00:07:01.159 Number of I/O Submission Queues: 64 00:07:01.159 Number of I/O Completion Queues: 64 00:07:01.159 00:07:01.159 ZNS Specific Controller Data 00:07:01.159 ============================ 00:07:01.159 Zone Append Size Limit: 0 00:07:01.159 00:07:01.159 00:07:01.159 Active Namespaces 00:07:01.159 ================= 00:07:01.159 Namespace ID:1 00:07:01.159 Error Recovery Timeout: Unlimited 00:07:01.159 Command Set Identifier: NVM (00h) 00:07:01.159 Deallocate: Supported 00:07:01.159 Deallocated/Unwritten Error: Supported 00:07:01.159 Deallocated Read Value: All 0x00 00:07:01.159 Deallocate in Write Zeroes: Not Supported 00:07:01.159 Deallocated Guard Field: 0xFFFF 00:07:01.159 Flush: Supported 00:07:01.159 Reservation: Not Supported 00:07:01.159 Namespace Sharing Capabilities: Private 00:07:01.159 Size (in LBAs): 1310720 (5GiB) 00:07:01.159 Capacity (in LBAs): 1310720 (5GiB) 00:07:01.159 Utilization (in LBAs): 1310720 (5GiB) 00:07:01.159 Thin Provisioning: Not Supported 00:07:01.159 Per-NS Atomic Units: No 00:07:01.159 Maximum Single Source Range Length: 128 00:07:01.159 Maximum Copy Length: 128 00:07:01.159 Maximum Source Range Count: 128 00:07:01.159 NGUID/EUI64 Never Reused: No 00:07:01.159 Namespace Write Protected: No 00:07:01.159 Number of LBA Formats: 8 00:07:01.159 Current LBA Format: LBA Format #04 00:07:01.159 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:01.159 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:01.159 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:01.159 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:01.159 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:01.159 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:01.159 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:01.159 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:01.159 00:07:01.159 NVM Specific Namespace Data 00:07:01.159 =========================== 00:07:01.159 Logical Block Storage Tag Mask: 0 00:07:01.159 Protection Information Capabilities: 00:07:01.159 16b 
Guard Protection Information Storage Tag Support: No 00:07:01.159 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:01.159 Storage Tag Check Read Support: No 00:07:01.159 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.159 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.159 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.159 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.159 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.159 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.159 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.159 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.159 ===================================================== 00:07:01.159 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:01.159 ===================================================== 00:07:01.159 Controller Capabilities/Features 00:07:01.159 ================================ 00:07:01.159 Vendor ID: 1b36 00:07:01.159 Subsystem Vendor ID: 1af4 00:07:01.159 Serial Number: 12343 00:07:01.159 Model Number: QEMU NVMe Ctrl 00:07:01.159 Firmware Version: 8.0.0 00:07:01.159 Recommended Arb Burst: 6 00:07:01.159 IEEE OUI Identifier: 00 54 52 00:07:01.159 Multi-path I/O 00:07:01.159 May have multiple subsystem ports: No 00:07:01.159 May have multiple controllers: Yes 00:07:01.159 Associated with SR-IOV VF: No 00:07:01.159 Max Data Transfer Size: 524288 00:07:01.159 Max Number of Namespaces: 256 00:07:01.159 Max Number of I/O Queues: 64 00:07:01.159 NVMe Specification Version (VS): 1.4 00:07:01.159 NVMe Specification Version (Identify): 1.4 00:07:01.159 Maximum Queue Entries: 2048 00:07:01.159 Contiguous Queues Required: Yes 00:07:01.159 Arbitration Mechanisms Supported 00:07:01.159 Weighted Round Robin: Not Supported 00:07:01.159 Vendor Specific: Not Supported 00:07:01.159 Reset Timeout: 7500 ms 00:07:01.159 Doorbell Stride: 4 bytes 00:07:01.159 NVM Subsystem Reset: Not Supported 00:07:01.159 Command Sets Supported 00:07:01.159 NVM Command Set: Supported 00:07:01.159 Boot Partition: Not Supported 00:07:01.159 Memory Page Size Minimum: 4096 bytes 00:07:01.159 Memory Page Size Maximum: 65536 bytes 00:07:01.159 Persistent Memory Region: Not Supported 00:07:01.159 Optional Asynchronous Events Supported 00:07:01.159 Namespace Attribute Notices: Supported 00:07:01.159 Firmware Activation Notices: Not Supported 00:07:01.159 ANA Change Notices: Not Supported 00:07:01.159 PLE Aggregate Log Change Notices: Not Supported 00:07:01.159 LBA Status Info Alert Notices: Not Supported 00:07:01.159 EGE Aggregate Log Change Notices: Not Supported 00:07:01.159 Normal NVM Subsystem Shutdown event: Not Supported 00:07:01.159 Zone Descriptor Change Notices: Not Supported 00:07:01.159 Discovery Log Change Notices: Not Supported 00:07:01.159 Controller Attributes 00:07:01.159 128-bit Host Identifier: Not Supported 00:07:01.159 Non-Operational Permissive Mode: Not Supported 00:07:01.159 NVM Sets: Not Supported 00:07:01.159 Read Recovery Levels: Not Supported 00:07:01.159 Endurance Groups: Supported 00:07:01.159 Predictable Latency Mode: Not Supported 00:07:01.159 Traffic Based Keep ALive: Not Supported 00:07:01.159 
Namespace Granularity: Not Supported 00:07:01.159 SQ Associations: Not Supported 00:07:01.159 UUID List: Not Supported 00:07:01.159 Multi-Domain Subsystem: Not Supported 00:07:01.159 Fixed Capacity Management: Not Supported 00:07:01.159 Variable Capacity Management: Not Supported 00:07:01.159 Delete Endurance Group: Not Supported 00:07:01.159 Delete NVM Set: Not Supported 00:07:01.159 Extended LBA Formats Supported: Supported 00:07:01.159 Flexible Data Placement Supported: Supported 00:07:01.159 00:07:01.159 Controller Memory Buffer Support 00:07:01.159 ================================ 00:07:01.159 Supported: No 00:07:01.160 00:07:01.160 Persistent Memory Region Support 00:07:01.160 ================================ 00:07:01.160 Supported: No 00:07:01.160 00:07:01.160 Admin Command Set Attributes 00:07:01.160 ============================ 00:07:01.160 Security Send/Receive: Not Supported 00:07:01.160 Format NVM: Supported 00:07:01.160 Firmware Activate/Download: Not Supported 00:07:01.160 Namespace Management: Supported 00:07:01.160 Device Self-Test: Not Supported 00:07:01.160 Directives: Supported 00:07:01.160 NVMe-MI: Not Supported 00:07:01.160 Virtualization Management: Not Supported 00:07:01.160 Doorbell Buffer Config: Supported 00:07:01.160 Get LBA Status Capability: Not Supported 00:07:01.160 Command & Feature Lockdown Capability: Not Supported 00:07:01.160 Abort Command Limit: 4 00:07:01.160 Async Event Request Limit: 4 00:07:01.160 Number of Firmware Slots: N/A 00:07:01.160 Firmware Slot 1 Read-Only: N/A 00:07:01.160 Firmware Activation Without Reset: N/A 00:07:01.160 Multiple Update Detection Support: N/A 00:07:01.160 Firmware Update Granularity: No Information Provided 00:07:01.160 Per-Namespace SMART Log: Yes 00:07:01.160 Asymmetric Namespace Access Log Page: Not Supported 00:07:01.160 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:07:01.160 Command Effects Log Page: Supported 00:07:01.160 Get Log Page Extended Data: Supported 00:07:01.160 Telemetry Log Pages: Not Supported 00:07:01.160 Persistent Event Log Pages: Not Supported 00:07:01.160 Supported Log Pages Log Page: May Support 00:07:01.160 Commands Supported & Effects Log Page: Not Supported 00:07:01.160 Feature Identifiers & Effects Log Page:May Support 00:07:01.160 NVMe-MI Commands & Effects Log Page: May Support 00:07:01.160 Data Area 4 for Telemetry Log: Not Supported 00:07:01.160 Error Log Page Entries Supported: 1 00:07:01.160 Keep Alive: Not Supported 00:07:01.160 00:07:01.160 NVM Command Set Attributes 00:07:01.160 ========================== 00:07:01.160 Submission Queue Entry Size 00:07:01.160 Max: 64 00:07:01.160 Min: 64 00:07:01.160 Completion Queue Entry Size 00:07:01.160 Max: 16 00:07:01.160 Min: 16 00:07:01.160 Number of Namespaces: 256 00:07:01.160 Compare Command: Supported 00:07:01.160 Write Uncorrectable Command: Not Supported 00:07:01.160 Dataset Management Command: Supported 00:07:01.160 Write Zeroes Command: Supported 00:07:01.160 Set Features Save Field: Supported 00:07:01.160 Reservations: Not Supported 00:07:01.160 Timestamp: Supported 00:07:01.160 Copy: Supported 00:07:01.160 Volatile Write Cache: Present 00:07:01.160 Atomic Write Unit (Normal): 1 00:07:01.160 Atomic Write Unit (PFail): 1 00:07:01.160 Atomic Compare & Write Unit: 1 00:07:01.160 Fused Compare & Write: Not Supported 00:07:01.160 Scatter-Gather List 00:07:01.160 SGL Command Set: Supported 00:07:01.160 SGL Keyed: Not Supported 00:07:01.160 SGL Bit Bucket Descriptor: Not Supported 00:07:01.160 SGL Metadata Pointer: Not Supported 
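The Size/Capacity/Utilization lines in these reports are LBA counts, and the parenthesized figure appears to be bytes divided by 2^30, truncated: at the 4096-byte data size of the current LBA formats, 1310720 LBAs is exactly 5 GiB, while the 1548666 LBAs of the 12340 namespace is 6343335936 bytes, which also truncates to the "5GiB" shown. A sketch of the same arithmetic:

  # Reproduce the parenthesized GiB figures from the LBA counts above.
  for nlba in 1548666 1310720; do
    bytes=$(( nlba * 4096 ))
    printf '%s LBAs = %s bytes = %sGiB truncated\n' "$nlba" "$bytes" "$(( bytes >> 30 ))"
  done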
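Controller 12343 is the FDP device in this rig (Endurance Groups and Flexible Data Placement both Supported above). The FDP configuration, reclaim unit handle usage, and statistics pages printed below come from dedicated log pages, 20h through 23h under NVMe TP4146; those log IDs are an assumption here, not taken from this log. The statistics page also gives a quick write-amplification estimate. A sketch, assuming nvme-cli and a kernel-attached device:

  # Fetch the FDP statistics log page (assumed log ID 0x22; FDP logs
  # are endurance-group scoped, so extra scoping may be required).
  nvme get-log /dev/nvme0 --log-id=0x22 --log-len=64
  # WAF estimate from the host/media byte counters reported below:
  awk 'BEGIN { printf "WAF ~= %.6f\n", 495611904 / 495558656 }'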
00:07:01.160 Oversized SGL: Not Supported 00:07:01.160 SGL Metadata Address: Not Supported 00:07:01.160 SGL Offset: Not Supported 00:07:01.160 Transport SGL Data Block: Not Supported 00:07:01.160 Replay Protected Memory Block: Not Supported 00:07:01.160 00:07:01.160 Firmware Slot Information 00:07:01.160 ========================= 00:07:01.160 Active slot: 1 00:07:01.160 Slot 1 Firmware Revision: 1.0 00:07:01.160 00:07:01.160 00:07:01.160 Commands Supported and Effects 00:07:01.160 ============================== 00:07:01.160 Admin Commands 00:07:01.160 -------------- 00:07:01.160 Delete I/O Submission Queue (00h): Supported 00:07:01.160 Create I/O Submission Queue (01h): Supported 00:07:01.160 Get Log Page (02h): Supported 00:07:01.160 Delete I/O Completion Queue (04h): Supported 00:07:01.160 Create I/O Completion Queue (05h): Supported 00:07:01.160 Identify (06h): Supported 00:07:01.160 Abort (08h): Supported 00:07:01.160 Set Features (09h): Supported 00:07:01.160 Get Features (0Ah): Supported 00:07:01.160 Asynchronous Event Request (0Ch): Supported 00:07:01.160 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:01.160 Directive Send (19h): Supported 00:07:01.160 Directive Receive (1Ah): Supported 00:07:01.160 Virtualization Management (1Ch): Supported 00:07:01.160 Doorbell Buffer Config (7Ch): Supported 00:07:01.160 Format NVM (80h): Supported LBA-Change 00:07:01.160 I/O Commands 00:07:01.160 ------------ 00:07:01.160 Flush (00h): Supported LBA-Change 00:07:01.160 Write (01h): Supported LBA-Change 00:07:01.160 Read (02h): Supported 00:07:01.160 Compare (05h): Supported 00:07:01.160 Write Zeroes (08h): Supported LBA-Change 00:07:01.160 Dataset Management (09h): Supported LBA-Change 00:07:01.160 Unknown (0Ch): Supported 00:07:01.160 Unknown (12h): Supported 00:07:01.160 Copy (19h): Supported LBA-Change 00:07:01.160 Unknown (1Dh): Supported LBA-Change 00:07:01.160 00:07:01.160 Error Log 00:07:01.160 ========= 00:07:01.160 00:07:01.160 Arbitration 00:07:01.160 =========== 00:07:01.160 Arbitration Burst: no limit 00:07:01.160 00:07:01.160 Power Management 00:07:01.160 ================ 00:07:01.160 Number of Power States: 1 00:07:01.160 Current Power State: Power State #0 00:07:01.160 Power State #0: 00:07:01.160 Max Power: 25.00 W 00:07:01.160 Non-Operational State: Operational 00:07:01.160 Entry Latency: 16 microseconds 00:07:01.160 Exit Latency: 4 microseconds 00:07:01.160 Relative Read Throughput: 0 00:07:01.160 Relative Read Latency: 0 00:07:01.160 Relative Write Throughput: 0 00:07:01.160 Relative Write Latency: 0 00:07:01.160 Idle Power: Not Reported 00:07:01.160 Active Power: Not Reported 00:07:01.160 Non-Operational Permissive Mode: Not Supported 00:07:01.160 00:07:01.160 Health Information 00:07:01.160 ================== 00:07:01.160 Critical Warnings: 00:07:01.160 Available Spare Space: OK 00:07:01.160 Temperature: OK 00:07:01.160 Device Reliability: OK 00:07:01.160 Read Only: No 00:07:01.160 Volatile Memory Backup: OK 00:07:01.160 Current Temperature: 323 Kelvin (50 Celsius) 00:07:01.160 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:01.160 Available Spare: 0% 00:07:01.160 Available Spare Threshold: 0% 00:07:01.160 Life Percentage Used: 0% 00:07:01.160
[2024-09-28 01:19:56.887647] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0] process 63200 terminated unexpected 00:07:01.160 [2024-09-28 01:19:56.888722] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0] process 63200 terminated unexpected 00:07:01.160 [2024-09-28 01:19:56.889536] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0] process 63200 terminated unexpected 00:07:01.160
Data Units Read: 832 00:07:01.160 Data Units Written: 761 00:07:01.160 Host Read Commands: 37237 00:07:01.160 Host Write Commands: 36660 00:07:01.160 Controller Busy Time: 0 minutes 00:07:01.160 Power Cycles: 0 00:07:01.160 Power On Hours: 0 hours 00:07:01.160 Unsafe Shutdowns: 0 00:07:01.160 Unrecoverable Media Errors: 0 00:07:01.160 Lifetime Error Log Entries: 0 00:07:01.160 Warning Temperature Time: 0 minutes 00:07:01.160 Critical Temperature Time: 0 minutes 00:07:01.160 00:07:01.160 Number of Queues 00:07:01.160 ================ 00:07:01.160 Number of I/O Submission Queues: 64 00:07:01.160 Number of I/O Completion Queues: 64 00:07:01.160 00:07:01.160 ZNS Specific Controller Data 00:07:01.160 ============================ 00:07:01.160 Zone Append Size Limit: 0 00:07:01.160 00:07:01.160 00:07:01.160 Active Namespaces 00:07:01.160 ================= 00:07:01.160 Namespace ID:1 00:07:01.160 Error Recovery Timeout: Unlimited 00:07:01.160 Command Set Identifier: NVM (00h) 00:07:01.160 Deallocate: Supported 00:07:01.160 Deallocated/Unwritten Error: Supported 00:07:01.160 Deallocated Read Value: All 0x00 00:07:01.160 Deallocate in Write Zeroes: Not Supported 00:07:01.160 Deallocated Guard Field: 0xFFFF 00:07:01.160 Flush: Supported 00:07:01.160 Reservation: Not Supported 00:07:01.160 Namespace Sharing Capabilities: Multiple Controllers 00:07:01.160 Size (in LBAs): 262144 (1GiB) 00:07:01.160 Capacity (in LBAs): 262144 (1GiB) 00:07:01.160 Utilization (in LBAs): 262144 (1GiB) 00:07:01.160 Thin Provisioning: Not Supported 00:07:01.160 Per-NS Atomic Units: No 00:07:01.160 Maximum Single Source Range Length: 128 00:07:01.160 Maximum Copy Length: 128 00:07:01.161 Maximum Source Range Count: 128 00:07:01.161 NGUID/EUI64 Never Reused: No 00:07:01.161 Namespace Write Protected: No 00:07:01.161 Endurance group ID: 1 00:07:01.161 Number of LBA Formats: 8 00:07:01.161 Current LBA Format: LBA Format #04 00:07:01.161 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:01.161 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:01.161 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:01.161 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:01.161 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:01.161 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:01.161 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:01.161 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:01.161 00:07:01.161 Get Feature FDP: 00:07:01.161 ================ 00:07:01.161 Enabled: Yes 00:07:01.161 FDP configuration index: 0 00:07:01.161 00:07:01.161 FDP configurations log page 00:07:01.161 =========================== 00:07:01.161 Number of FDP configurations: 1 00:07:01.161 Version: 0 00:07:01.161 Size: 112 00:07:01.161 FDP Configuration Descriptor: 0 00:07:01.161 Descriptor Size: 96 00:07:01.161 Reclaim Group Identifier format: 2 00:07:01.161 FDP Volatile Write Cache: Not Present 00:07:01.161 FDP Configuration: Valid 00:07:01.161 Vendor Specific Size: 0 00:07:01.161 Number of Reclaim Groups: 2 00:07:01.161 Number of Reclaim Unit Handles: 8 00:07:01.161 Max Placement Identifiers: 128 00:07:01.161 Number of Namespaces Supported: 256 00:07:01.161 Reclaim Unit Nominal Size: 6000000 bytes 00:07:01.161 Estimated Reclaim Unit Time Limit: Not Reported 00:07:01.161 RUH Desc #000: RUH Type: Initially Isolated 00:07:01.161 RUH Desc #001: RUH Type: Initially Isolated 00:07:01.161 RUH Desc #002: RUH Type: Initially Isolated 00:07:01.161 RUH Desc #003: RUH Type: Initially Isolated 00:07:01.161 RUH Desc #004: RUH Type: Initially Isolated 00:07:01.161 RUH Desc #005: RUH Type: Initially Isolated 00:07:01.161 RUH Desc #006: RUH Type: Initially Isolated 00:07:01.161 RUH Desc #007: RUH Type: Initially Isolated 00:07:01.161 00:07:01.161 FDP reclaim unit handle usage log page 00:07:01.161 ====================================== 00:07:01.161 Number of Reclaim Unit Handles: 8 00:07:01.161 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:07:01.161 RUH Usage Desc #001: RUH Attributes: Unused 00:07:01.161 RUH Usage Desc #002: RUH Attributes: Unused 00:07:01.161 RUH Usage Desc #003: RUH Attributes: Unused 00:07:01.161 RUH Usage Desc #004: RUH Attributes: Unused 00:07:01.161 RUH Usage Desc #005: RUH Attributes: Unused 00:07:01.161 RUH Usage Desc #006: RUH Attributes: Unused 00:07:01.161 RUH Usage Desc #007: RUH Attributes: Unused 00:07:01.161 00:07:01.161 FDP statistics log page 00:07:01.161 ======================= 00:07:01.161 Host bytes with metadata written: 495558656 00:07:01.161 Media bytes with metadata written: 495611904 00:07:01.161
[2024-09-28 01:19:56.891955] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0] process 63200 terminated unexpected 00:07:01.161
Media bytes erased: 0 00:07:01.161 00:07:01.161 FDP events log page 00:07:01.161 =================== 00:07:01.161 Number of FDP events: 0 00:07:01.161 00:07:01.161 NVM Specific Namespace Data 00:07:01.161 =========================== 00:07:01.161 Logical Block Storage Tag Mask: 0 00:07:01.161 Protection Information Capabilities: 00:07:01.161 16b Guard Protection Information Storage Tag Support: No 00:07:01.161 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:01.161 Storage Tag Check Read Support: No 00:07:01.161 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.161 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.161 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.161 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.161 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.161 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.161 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.161 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.161 ===================================================== 00:07:01.161 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:01.161 ===================================================== 00:07:01.161 Controller Capabilities/Features 00:07:01.161 ================================ 00:07:01.161 Vendor ID: 1b36 00:07:01.161 Subsystem Vendor ID: 1af4 00:07:01.161 Serial Number: 12342 00:07:01.161 Model Number: QEMU NVMe Ctrl 00:07:01.161 Firmware Version: 8.0.0 00:07:01.161 Recommended Arb Burst: 6 00:07:01.161 IEEE OUI Identifier: 00 54 52 00:07:01.161 Multi-path I/O 00:07:01.161 May have multiple subsystem ports: No 00:07:01.161 May have multiple controllers: No 00:07:01.161 Associated with SR-IOV VF: No 00:07:01.161 Max Data Transfer Size: 524288 00:07:01.161 Max Number of Namespaces: 256 00:07:01.161 Max Number of I/O
Queues: 64 00:07:01.161 NVMe Specification Version (VS): 1.4 00:07:01.161 NVMe Specification Version (Identify): 1.4 00:07:01.161 Maximum Queue Entries: 2048 00:07:01.161 Contiguous Queues Required: Yes 00:07:01.161 Arbitration Mechanisms Supported 00:07:01.161 Weighted Round Robin: Not Supported 00:07:01.161 Vendor Specific: Not Supported 00:07:01.161 Reset Timeout: 7500 ms 00:07:01.161 Doorbell Stride: 4 bytes 00:07:01.161 NVM Subsystem Reset: Not Supported 00:07:01.161 Command Sets Supported 00:07:01.161 NVM Command Set: Supported 00:07:01.161 Boot Partition: Not Supported 00:07:01.161 Memory Page Size Minimum: 4096 bytes 00:07:01.161 Memory Page Size Maximum: 65536 bytes 00:07:01.161 Persistent Memory Region: Not Supported 00:07:01.161 Optional Asynchronous Events Supported 00:07:01.161 Namespace Attribute Notices: Supported 00:07:01.161 Firmware Activation Notices: Not Supported 00:07:01.161 ANA Change Notices: Not Supported 00:07:01.161 PLE Aggregate Log Change Notices: Not Supported 00:07:01.161 LBA Status Info Alert Notices: Not Supported 00:07:01.161 EGE Aggregate Log Change Notices: Not Supported 00:07:01.161 Normal NVM Subsystem Shutdown event: Not Supported 00:07:01.161 Zone Descriptor Change Notices: Not Supported 00:07:01.161 Discovery Log Change Notices: Not Supported 00:07:01.161 Controller Attributes 00:07:01.161 128-bit Host Identifier: Not Supported 00:07:01.161 Non-Operational Permissive Mode: Not Supported 00:07:01.161 NVM Sets: Not Supported 00:07:01.161 Read Recovery Levels: Not Supported 00:07:01.161 Endurance Groups: Not Supported 00:07:01.161 Predictable Latency Mode: Not Supported 00:07:01.161 Traffic Based Keep ALive: Not Supported 00:07:01.161 Namespace Granularity: Not Supported 00:07:01.161 SQ Associations: Not Supported 00:07:01.161 UUID List: Not Supported 00:07:01.161 Multi-Domain Subsystem: Not Supported 00:07:01.161 Fixed Capacity Management: Not Supported 00:07:01.161 Variable Capacity Management: Not Supported 00:07:01.161 Delete Endurance Group: Not Supported 00:07:01.162 Delete NVM Set: Not Supported 00:07:01.162 Extended LBA Formats Supported: Supported 00:07:01.162 Flexible Data Placement Supported: Not Supported 00:07:01.162 00:07:01.162 Controller Memory Buffer Support 00:07:01.162 ================================ 00:07:01.162 Supported: No 00:07:01.162 00:07:01.162 Persistent Memory Region Support 00:07:01.162 ================================ 00:07:01.162 Supported: No 00:07:01.162 00:07:01.162 Admin Command Set Attributes 00:07:01.162 ============================ 00:07:01.162 Security Send/Receive: Not Supported 00:07:01.162 Format NVM: Supported 00:07:01.162 Firmware Activate/Download: Not Supported 00:07:01.162 Namespace Management: Supported 00:07:01.162 Device Self-Test: Not Supported 00:07:01.162 Directives: Supported 00:07:01.162 NVMe-MI: Not Supported 00:07:01.162 Virtualization Management: Not Supported 00:07:01.162 Doorbell Buffer Config: Supported 00:07:01.162 Get LBA Status Capability: Not Supported 00:07:01.162 Command & Feature Lockdown Capability: Not Supported 00:07:01.162 Abort Command Limit: 4 00:07:01.162 Async Event Request Limit: 4 00:07:01.162 Number of Firmware Slots: N/A 00:07:01.162 Firmware Slot 1 Read-Only: N/A 00:07:01.162 Firmware Activation Without Reset: N/A 00:07:01.162 Multiple Update Detection Support: N/A 00:07:01.162 Firmware Update Granularity: No Information Provided 00:07:01.162 Per-Namespace SMART Log: Yes 00:07:01.162 Asymmetric Namespace Access Log Page: Not Supported 00:07:01.162 Subsystem NQN: 
nqn.2019-08.org.qemu:12342 00:07:01.162 Command Effects Log Page: Supported 00:07:01.162 Get Log Page Extended Data: Supported 00:07:01.162 Telemetry Log Pages: Not Supported 00:07:01.162 Persistent Event Log Pages: Not Supported 00:07:01.162 Supported Log Pages Log Page: May Support 00:07:01.162 Commands Supported & Effects Log Page: Not Supported 00:07:01.162 Feature Identifiers & Effects Log Page:May Support 00:07:01.162 NVMe-MI Commands & Effects Log Page: May Support 00:07:01.162 Data Area 4 for Telemetry Log: Not Supported 00:07:01.162 Error Log Page Entries Supported: 1 00:07:01.162 Keep Alive: Not Supported 00:07:01.162 00:07:01.162 NVM Command Set Attributes 00:07:01.162 ========================== 00:07:01.162 Submission Queue Entry Size 00:07:01.162 Max: 64 00:07:01.162 Min: 64 00:07:01.162 Completion Queue Entry Size 00:07:01.162 Max: 16 00:07:01.162 Min: 16 00:07:01.162 Number of Namespaces: 256 00:07:01.162 Compare Command: Supported 00:07:01.162 Write Uncorrectable Command: Not Supported 00:07:01.162 Dataset Management Command: Supported 00:07:01.162 Write Zeroes Command: Supported 00:07:01.162 Set Features Save Field: Supported 00:07:01.162 Reservations: Not Supported 00:07:01.162 Timestamp: Supported 00:07:01.162 Copy: Supported 00:07:01.162 Volatile Write Cache: Present 00:07:01.162 Atomic Write Unit (Normal): 1 00:07:01.162 Atomic Write Unit (PFail): 1 00:07:01.162 Atomic Compare & Write Unit: 1 00:07:01.162 Fused Compare & Write: Not Supported 00:07:01.162 Scatter-Gather List 00:07:01.162 SGL Command Set: Supported 00:07:01.162 SGL Keyed: Not Supported 00:07:01.162 SGL Bit Bucket Descriptor: Not Supported 00:07:01.162 SGL Metadata Pointer: Not Supported 00:07:01.162 Oversized SGL: Not Supported 00:07:01.162 SGL Metadata Address: Not Supported 00:07:01.162 SGL Offset: Not Supported 00:07:01.162 Transport SGL Data Block: Not Supported 00:07:01.162 Replay Protected Memory Block: Not Supported 00:07:01.162 00:07:01.162 Firmware Slot Information 00:07:01.162 ========================= 00:07:01.162 Active slot: 1 00:07:01.162 Slot 1 Firmware Revision: 1.0 00:07:01.162 00:07:01.162 00:07:01.162 Commands Supported and Effects 00:07:01.162 ============================== 00:07:01.162 Admin Commands 00:07:01.162 -------------- 00:07:01.162 Delete I/O Submission Queue (00h): Supported 00:07:01.162 Create I/O Submission Queue (01h): Supported 00:07:01.162 Get Log Page (02h): Supported 00:07:01.162 Delete I/O Completion Queue (04h): Supported 00:07:01.162 Create I/O Completion Queue (05h): Supported 00:07:01.162 Identify (06h): Supported 00:07:01.162 Abort (08h): Supported 00:07:01.162 Set Features (09h): Supported 00:07:01.162 Get Features (0Ah): Supported 00:07:01.162 Asynchronous Event Request (0Ch): Supported 00:07:01.162 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:01.162 Directive Send (19h): Supported 00:07:01.162 Directive Receive (1Ah): Supported 00:07:01.162 Virtualization Management (1Ch): Supported 00:07:01.162 Doorbell Buffer Config (7Ch): Supported 00:07:01.162 Format NVM (80h): Supported LBA-Change 00:07:01.162 I/O Commands 00:07:01.162 ------------ 00:07:01.162 Flush (00h): Supported LBA-Change 00:07:01.162 Write (01h): Supported LBA-Change 00:07:01.162 Read (02h): Supported 00:07:01.162 Compare (05h): Supported 00:07:01.162 Write Zeroes (08h): Supported LBA-Change 00:07:01.162 Dataset Management (09h): Supported LBA-Change 00:07:01.162 Unknown (0Ch): Supported 00:07:01.162 Unknown (12h): Supported 00:07:01.162 Copy (19h): Supported LBA-Change 
00:07:01.162 Unknown (1Dh): Supported LBA-Change 00:07:01.162 00:07:01.162 Error Log 00:07:01.162 ========= 00:07:01.162 00:07:01.162 Arbitration 00:07:01.162 =========== 00:07:01.162 Arbitration Burst: no limit 00:07:01.162 00:07:01.162 Power Management 00:07:01.162 ================ 00:07:01.162 Number of Power States: 1 00:07:01.162 Current Power State: Power State #0 00:07:01.162 Power State #0: 00:07:01.162 Max Power: 25.00 W 00:07:01.162 Non-Operational State: Operational 00:07:01.162 Entry Latency: 16 microseconds 00:07:01.162 Exit Latency: 4 microseconds 00:07:01.162 Relative Read Throughput: 0 00:07:01.162 Relative Read Latency: 0 00:07:01.162 Relative Write Throughput: 0 00:07:01.162 Relative Write Latency: 0 00:07:01.162 Idle Power: Not Reported 00:07:01.162 Active Power: Not Reported 00:07:01.162 Non-Operational Permissive Mode: Not Supported 00:07:01.162 00:07:01.162 Health Information 00:07:01.162 ================== 00:07:01.162 Critical Warnings: 00:07:01.162 Available Spare Space: OK 00:07:01.162 Temperature: OK 00:07:01.162 Device Reliability: OK 00:07:01.162 Read Only: No 00:07:01.162 Volatile Memory Backup: OK 00:07:01.162 Current Temperature: 323 Kelvin (50 Celsius) 00:07:01.162 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:01.162 Available Spare: 0% 00:07:01.162 Available Spare Threshold: 0% 00:07:01.162 Life Percentage Used: 0% 00:07:01.162 Data Units Read: 2153 00:07:01.162 Data Units Written: 1940 00:07:01.162 Host Read Commands: 108980 00:07:01.162 Host Write Commands: 107252 00:07:01.162 Controller Busy Time: 0 minutes 00:07:01.162 Power Cycles: 0 00:07:01.162 Power On Hours: 0 hours 00:07:01.162 Unsafe Shutdowns: 0 00:07:01.162 Unrecoverable Media Errors: 0 00:07:01.162 Lifetime Error Log Entries: 0 00:07:01.162 Warning Temperature Time: 0 minutes 00:07:01.162 Critical Temperature Time: 0 minutes 00:07:01.162 00:07:01.162 Number of Queues 00:07:01.162 ================ 00:07:01.162 Number of I/O Submission Queues: 64 00:07:01.162 Number of I/O Completion Queues: 64 00:07:01.162 00:07:01.162 ZNS Specific Controller Data 00:07:01.162 ============================ 00:07:01.162 Zone Append Size Limit: 0 00:07:01.162 00:07:01.162 00:07:01.162 Active Namespaces 00:07:01.162 ================= 00:07:01.162 Namespace ID:1 00:07:01.162 Error Recovery Timeout: Unlimited 00:07:01.162 Command Set Identifier: NVM (00h) 00:07:01.162 Deallocate: Supported 00:07:01.162 Deallocated/Unwritten Error: Supported 00:07:01.162 Deallocated Read Value: All 0x00 00:07:01.162 Deallocate in Write Zeroes: Not Supported 00:07:01.162 Deallocated Guard Field: 0xFFFF 00:07:01.162 Flush: Supported 00:07:01.162 Reservation: Not Supported 00:07:01.162 Namespace Sharing Capabilities: Private 00:07:01.162 Size (in LBAs): 1048576 (4GiB) 00:07:01.162 Capacity (in LBAs): 1048576 (4GiB) 00:07:01.162 Utilization (in LBAs): 1048576 (4GiB) 00:07:01.162 Thin Provisioning: Not Supported 00:07:01.162 Per-NS Atomic Units: No 00:07:01.162 Maximum Single Source Range Length: 128 00:07:01.162 Maximum Copy Length: 128 00:07:01.162 Maximum Source Range Count: 128 00:07:01.162 NGUID/EUI64 Never Reused: No 00:07:01.162 Namespace Write Protected: No 00:07:01.162 Number of LBA Formats: 8 00:07:01.162 Current LBA Format: LBA Format #04 00:07:01.162 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:01.162 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:01.162 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:01.162 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:01.162 LBA Format #04: Data Size: 
4096 Metadata Size: 0 00:07:01.162 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:01.162 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:01.163 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:01.163 00:07:01.163 NVM Specific Namespace Data 00:07:01.163 =========================== 00:07:01.163 Logical Block Storage Tag Mask: 0 00:07:01.163 Protection Information Capabilities: 00:07:01.163 16b Guard Protection Information Storage Tag Support: No 00:07:01.163 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:01.163 Storage Tag Check Read Support: No 00:07:01.163 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.163 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.163 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.163 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.163 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.163 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.163 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.163 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.163 Namespace ID:2 00:07:01.163 Error Recovery Timeout: Unlimited 00:07:01.163 Command Set Identifier: NVM (00h) 00:07:01.163 Deallocate: Supported 00:07:01.163 Deallocated/Unwritten Error: Supported 00:07:01.163 Deallocated Read Value: All 0x00 00:07:01.163 Deallocate in Write Zeroes: Not Supported 00:07:01.163 Deallocated Guard Field: 0xFFFF 00:07:01.163 Flush: Supported 00:07:01.163 Reservation: Not Supported 00:07:01.163 Namespace Sharing Capabilities: Private 00:07:01.163 Size (in LBAs): 1048576 (4GiB) 00:07:01.163 Capacity (in LBAs): 1048576 (4GiB) 00:07:01.163 Utilization (in LBAs): 1048576 (4GiB) 00:07:01.163 Thin Provisioning: Not Supported 00:07:01.163 Per-NS Atomic Units: No 00:07:01.163 Maximum Single Source Range Length: 128 00:07:01.163 Maximum Copy Length: 128 00:07:01.163 Maximum Source Range Count: 128 00:07:01.163 NGUID/EUI64 Never Reused: No 00:07:01.163 Namespace Write Protected: No 00:07:01.163 Number of LBA Formats: 8 00:07:01.163 Current LBA Format: LBA Format #04 00:07:01.163 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:01.163 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:01.163 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:01.163 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:01.163 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:01.163 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:01.163 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:01.163 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:01.163 00:07:01.163 NVM Specific Namespace Data 00:07:01.163 =========================== 00:07:01.163 Logical Block Storage Tag Mask: 0 00:07:01.163 Protection Information Capabilities: 00:07:01.163 16b Guard Protection Information Storage Tag Support: No 00:07:01.163 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:01.163 Storage Tag Check Read Support: No 00:07:01.163 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.163 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard 
PI 00:07:01.163 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.163 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.163 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.163 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.163 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.163 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.163 Namespace ID:3 00:07:01.163 Error Recovery Timeout: Unlimited 00:07:01.163 Command Set Identifier: NVM (00h) 00:07:01.163 Deallocate: Supported 00:07:01.163 Deallocated/Unwritten Error: Supported 00:07:01.163 Deallocated Read Value: All 0x00 00:07:01.163 Deallocate in Write Zeroes: Not Supported 00:07:01.163 Deallocated Guard Field: 0xFFFF 00:07:01.163 Flush: Supported 00:07:01.163 Reservation: Not Supported 00:07:01.163 Namespace Sharing Capabilities: Private 00:07:01.163 Size (in LBAs): 1048576 (4GiB) 00:07:01.163 Capacity (in LBAs): 1048576 (4GiB) 00:07:01.163 Utilization (in LBAs): 1048576 (4GiB) 00:07:01.163 Thin Provisioning: Not Supported 00:07:01.163 Per-NS Atomic Units: No 00:07:01.163 Maximum Single Source Range Length: 128 00:07:01.163 Maximum Copy Length: 128 00:07:01.163 Maximum Source Range Count: 128 00:07:01.163 NGUID/EUI64 Never Reused: No 00:07:01.163 Namespace Write Protected: No 00:07:01.163 Number of LBA Formats: 8 00:07:01.163 Current LBA Format: LBA Format #04 00:07:01.163 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:01.163 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:01.163 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:01.163 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:01.163 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:01.163 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:01.163 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:01.163 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:01.163 00:07:01.163 NVM Specific Namespace Data 00:07:01.163 =========================== 00:07:01.163 Logical Block Storage Tag Mask: 0 00:07:01.163 Protection Information Capabilities: 00:07:01.163 16b Guard Protection Information Storage Tag Support: No 00:07:01.163 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:01.163 Storage Tag Check Read Support: No 00:07:01.163 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.163 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.163 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.163 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.163 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.163 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.163 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.163 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.163 01:19:56 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:01.163 01:19:56 nvme.nvme_identify -- nvme/nvme.sh@16 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:07:01.436 ===================================================== 00:07:01.436 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:01.436 ===================================================== 00:07:01.436 Controller Capabilities/Features 00:07:01.436 ================================ 00:07:01.436 Vendor ID: 1b36 00:07:01.436 Subsystem Vendor ID: 1af4 00:07:01.436 Serial Number: 12340 00:07:01.436 Model Number: QEMU NVMe Ctrl 00:07:01.436 Firmware Version: 8.0.0 00:07:01.436 Recommended Arb Burst: 6 00:07:01.436 IEEE OUI Identifier: 00 54 52 00:07:01.436 Multi-path I/O 00:07:01.436 May have multiple subsystem ports: No 00:07:01.436 May have multiple controllers: No 00:07:01.436 Associated with SR-IOV VF: No 00:07:01.436 Max Data Transfer Size: 524288 00:07:01.436 Max Number of Namespaces: 256 00:07:01.436 Max Number of I/O Queues: 64 00:07:01.436 NVMe Specification Version (VS): 1.4 00:07:01.436 NVMe Specification Version (Identify): 1.4 00:07:01.436 Maximum Queue Entries: 2048 00:07:01.436 Contiguous Queues Required: Yes 00:07:01.436 Arbitration Mechanisms Supported 00:07:01.436 Weighted Round Robin: Not Supported 00:07:01.436 Vendor Specific: Not Supported 00:07:01.436 Reset Timeout: 7500 ms 00:07:01.436 Doorbell Stride: 4 bytes 00:07:01.436 NVM Subsystem Reset: Not Supported 00:07:01.436 Command Sets Supported 00:07:01.436 NVM Command Set: Supported 00:07:01.436 Boot Partition: Not Supported 00:07:01.436 Memory Page Size Minimum: 4096 bytes 00:07:01.436 Memory Page Size Maximum: 65536 bytes 00:07:01.436 Persistent Memory Region: Not Supported 00:07:01.436 Optional Asynchronous Events Supported 00:07:01.436 Namespace Attribute Notices: Supported 00:07:01.436 Firmware Activation Notices: Not Supported 00:07:01.436 ANA Change Notices: Not Supported 00:07:01.436 PLE Aggregate Log Change Notices: Not Supported 00:07:01.436 LBA Status Info Alert Notices: Not Supported 00:07:01.436 EGE Aggregate Log Change Notices: Not Supported 00:07:01.436 Normal NVM Subsystem Shutdown event: Not Supported 00:07:01.436 Zone Descriptor Change Notices: Not Supported 00:07:01.436 Discovery Log Change Notices: Not Supported 00:07:01.436 Controller Attributes 00:07:01.436 128-bit Host Identifier: Not Supported 00:07:01.437 Non-Operational Permissive Mode: Not Supported 00:07:01.437 NVM Sets: Not Supported 00:07:01.437 Read Recovery Levels: Not Supported 00:07:01.437 Endurance Groups: Not Supported 00:07:01.437 Predictable Latency Mode: Not Supported 00:07:01.437 Traffic Based Keep ALive: Not Supported 00:07:01.437 Namespace Granularity: Not Supported 00:07:01.437 SQ Associations: Not Supported 00:07:01.437 UUID List: Not Supported 00:07:01.437 Multi-Domain Subsystem: Not Supported 00:07:01.437 Fixed Capacity Management: Not Supported 00:07:01.437 Variable Capacity Management: Not Supported 00:07:01.437 Delete Endurance Group: Not Supported 00:07:01.437 Delete NVM Set: Not Supported 00:07:01.437 Extended LBA Formats Supported: Supported 00:07:01.437 Flexible Data Placement Supported: Not Supported 00:07:01.437 00:07:01.437 Controller Memory Buffer Support 00:07:01.437 ================================ 00:07:01.437 Supported: No 00:07:01.437 00:07:01.437 Persistent Memory Region Support 00:07:01.437 ================================ 00:07:01.437 Supported: No 00:07:01.437 00:07:01.437 Admin Command Set Attributes 00:07:01.437 ============================ 00:07:01.437 Security Send/Receive: Not Supported 00:07:01.437 
Format NVM: Supported 00:07:01.437 Firmware Activate/Download: Not Supported 00:07:01.437 Namespace Management: Supported 00:07:01.437 Device Self-Test: Not Supported 00:07:01.437 Directives: Supported 00:07:01.437 NVMe-MI: Not Supported 00:07:01.437 Virtualization Management: Not Supported 00:07:01.437 Doorbell Buffer Config: Supported 00:07:01.437 Get LBA Status Capability: Not Supported 00:07:01.437 Command & Feature Lockdown Capability: Not Supported 00:07:01.437 Abort Command Limit: 4 00:07:01.437 Async Event Request Limit: 4 00:07:01.437 Number of Firmware Slots: N/A 00:07:01.437 Firmware Slot 1 Read-Only: N/A 00:07:01.437 Firmware Activation Without Reset: N/A 00:07:01.437 Multiple Update Detection Support: N/A 00:07:01.437 Firmware Update Granularity: No Information Provided 00:07:01.437 Per-Namespace SMART Log: Yes 00:07:01.437 Asymmetric Namespace Access Log Page: Not Supported 00:07:01.437 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:07:01.437 Command Effects Log Page: Supported 00:07:01.437 Get Log Page Extended Data: Supported 00:07:01.437 Telemetry Log Pages: Not Supported 00:07:01.437 Persistent Event Log Pages: Not Supported 00:07:01.437 Supported Log Pages Log Page: May Support 00:07:01.437 Commands Supported & Effects Log Page: Not Supported 00:07:01.437 Feature Identifiers & Effects Log Page:May Support 00:07:01.437 NVMe-MI Commands & Effects Log Page: May Support 00:07:01.437 Data Area 4 for Telemetry Log: Not Supported 00:07:01.437 Error Log Page Entries Supported: 1 00:07:01.437 Keep Alive: Not Supported 00:07:01.437 00:07:01.437 NVM Command Set Attributes 00:07:01.437 ========================== 00:07:01.437 Submission Queue Entry Size 00:07:01.437 Max: 64 00:07:01.437 Min: 64 00:07:01.437 Completion Queue Entry Size 00:07:01.437 Max: 16 00:07:01.437 Min: 16 00:07:01.437 Number of Namespaces: 256 00:07:01.437 Compare Command: Supported 00:07:01.437 Write Uncorrectable Command: Not Supported 00:07:01.437 Dataset Management Command: Supported 00:07:01.437 Write Zeroes Command: Supported 00:07:01.437 Set Features Save Field: Supported 00:07:01.437 Reservations: Not Supported 00:07:01.437 Timestamp: Supported 00:07:01.437 Copy: Supported 00:07:01.437 Volatile Write Cache: Present 00:07:01.437 Atomic Write Unit (Normal): 1 00:07:01.437 Atomic Write Unit (PFail): 1 00:07:01.437 Atomic Compare & Write Unit: 1 00:07:01.437 Fused Compare & Write: Not Supported 00:07:01.437 Scatter-Gather List 00:07:01.437 SGL Command Set: Supported 00:07:01.437 SGL Keyed: Not Supported 00:07:01.437 SGL Bit Bucket Descriptor: Not Supported 00:07:01.437 SGL Metadata Pointer: Not Supported 00:07:01.437 Oversized SGL: Not Supported 00:07:01.437 SGL Metadata Address: Not Supported 00:07:01.437 SGL Offset: Not Supported 00:07:01.437 Transport SGL Data Block: Not Supported 00:07:01.437 Replay Protected Memory Block: Not Supported 00:07:01.437 00:07:01.437 Firmware Slot Information 00:07:01.437 ========================= 00:07:01.437 Active slot: 1 00:07:01.437 Slot 1 Firmware Revision: 1.0 00:07:01.437 00:07:01.437 00:07:01.437 Commands Supported and Effects 00:07:01.437 ============================== 00:07:01.437 Admin Commands 00:07:01.437 -------------- 00:07:01.437 Delete I/O Submission Queue (00h): Supported 00:07:01.437 Create I/O Submission Queue (01h): Supported 00:07:01.437 Get Log Page (02h): Supported 00:07:01.437 Delete I/O Completion Queue (04h): Supported 00:07:01.437 Create I/O Completion Queue (05h): Supported 00:07:01.437 Identify (06h): Supported 00:07:01.437 Abort (08h): Supported 
00:07:01.437 Set Features (09h): Supported 00:07:01.437 Get Features (0Ah): Supported 00:07:01.437 Asynchronous Event Request (0Ch): Supported 00:07:01.437 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:01.437 Directive Send (19h): Supported 00:07:01.437 Directive Receive (1Ah): Supported 00:07:01.437 Virtualization Management (1Ch): Supported 00:07:01.437 Doorbell Buffer Config (7Ch): Supported 00:07:01.437 Format NVM (80h): Supported LBA-Change 00:07:01.437 I/O Commands 00:07:01.437 ------------ 00:07:01.437 Flush (00h): Supported LBA-Change 00:07:01.437 Write (01h): Supported LBA-Change 00:07:01.437 Read (02h): Supported 00:07:01.437 Compare (05h): Supported 00:07:01.437 Write Zeroes (08h): Supported LBA-Change 00:07:01.437 Dataset Management (09h): Supported LBA-Change 00:07:01.437 Unknown (0Ch): Supported 00:07:01.437 Unknown (12h): Supported 00:07:01.437 Copy (19h): Supported LBA-Change 00:07:01.437 Unknown (1Dh): Supported LBA-Change 00:07:01.437 00:07:01.437 Error Log 00:07:01.437 ========= 00:07:01.437 00:07:01.437 Arbitration 00:07:01.437 =========== 00:07:01.437 Arbitration Burst: no limit 00:07:01.437 00:07:01.437 Power Management 00:07:01.437 ================ 00:07:01.437 Number of Power States: 1 00:07:01.437 Current Power State: Power State #0 00:07:01.437 Power State #0: 00:07:01.437 Max Power: 25.00 W 00:07:01.437 Non-Operational State: Operational 00:07:01.437 Entry Latency: 16 microseconds 00:07:01.437 Exit Latency: 4 microseconds 00:07:01.437 Relative Read Throughput: 0 00:07:01.437 Relative Read Latency: 0 00:07:01.437 Relative Write Throughput: 0 00:07:01.437 Relative Write Latency: 0 00:07:01.437 Idle Power: Not Reported 00:07:01.437 Active Power: Not Reported 00:07:01.437 Non-Operational Permissive Mode: Not Supported 00:07:01.437 00:07:01.437 Health Information 00:07:01.437 ================== 00:07:01.437 Critical Warnings: 00:07:01.437 Available Spare Space: OK 00:07:01.437 Temperature: OK 00:07:01.437 Device Reliability: OK 00:07:01.437 Read Only: No 00:07:01.437 Volatile Memory Backup: OK 00:07:01.437 Current Temperature: 323 Kelvin (50 Celsius) 00:07:01.437 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:01.437 Available Spare: 0% 00:07:01.437 Available Spare Threshold: 0% 00:07:01.437 Life Percentage Used: 0% 00:07:01.437 Data Units Read: 670 00:07:01.437 Data Units Written: 598 00:07:01.437 Host Read Commands: 35732 00:07:01.437 Host Write Commands: 35518 00:07:01.437 Controller Busy Time: 0 minutes 00:07:01.437 Power Cycles: 0 00:07:01.437 Power On Hours: 0 hours 00:07:01.437 Unsafe Shutdowns: 0 00:07:01.437 Unrecoverable Media Errors: 0 00:07:01.437 Lifetime Error Log Entries: 0 00:07:01.437 Warning Temperature Time: 0 minutes 00:07:01.437 Critical Temperature Time: 0 minutes 00:07:01.437 00:07:01.437 Number of Queues 00:07:01.437 ================ 00:07:01.437 Number of I/O Submission Queues: 64 00:07:01.437 Number of I/O Completion Queues: 64 00:07:01.437 00:07:01.437 ZNS Specific Controller Data 00:07:01.437 ============================ 00:07:01.437 Zone Append Size Limit: 0 00:07:01.437 00:07:01.437 00:07:01.437 Active Namespaces 00:07:01.437 ================= 00:07:01.437 Namespace ID:1 00:07:01.437 Error Recovery Timeout: Unlimited 00:07:01.437 Command Set Identifier: NVM (00h) 00:07:01.437 Deallocate: Supported 00:07:01.437 Deallocated/Unwritten Error: Supported 00:07:01.437 Deallocated Read Value: All 0x00 00:07:01.437 Deallocate in Write Zeroes: Not Supported 00:07:01.437 Deallocated Guard Field: 0xFFFF 00:07:01.437 Flush: 
Supported 00:07:01.437 Reservation: Not Supported 00:07:01.437 Metadata Transferred as: Separate Metadata Buffer 00:07:01.437 Namespace Sharing Capabilities: Private 00:07:01.437 Size (in LBAs): 1548666 (5GiB) 00:07:01.437 Capacity (in LBAs): 1548666 (5GiB) 00:07:01.437 Utilization (in LBAs): 1548666 (5GiB) 00:07:01.437 Thin Provisioning: Not Supported 00:07:01.437 Per-NS Atomic Units: No 00:07:01.437 Maximum Single Source Range Length: 128 00:07:01.437 Maximum Copy Length: 128 00:07:01.437 Maximum Source Range Count: 128 00:07:01.438 NGUID/EUI64 Never Reused: No 00:07:01.438 Namespace Write Protected: No 00:07:01.438 Number of LBA Formats: 8 00:07:01.438 Current LBA Format: LBA Format #07 00:07:01.438 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:01.438 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:01.438 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:01.438 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:01.438 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:01.438 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:01.438 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:01.438 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:01.438 00:07:01.438 NVM Specific Namespace Data 00:07:01.438 =========================== 00:07:01.438 Logical Block Storage Tag Mask: 0 00:07:01.438 Protection Information Capabilities: 00:07:01.438 16b Guard Protection Information Storage Tag Support: No 00:07:01.438 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:01.438 Storage Tag Check Read Support: No 00:07:01.438 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.438 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.438 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.438 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.438 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.438 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.438 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.438 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.438 01:19:57 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:01.438 01:19:57 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:07:01.438 ===================================================== 00:07:01.438 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:01.438 ===================================================== 00:07:01.438 Controller Capabilities/Features 00:07:01.438 ================================ 00:07:01.438 Vendor ID: 1b36 00:07:01.438 Subsystem Vendor ID: 1af4 00:07:01.438 Serial Number: 12341 00:07:01.438 Model Number: QEMU NVMe Ctrl 00:07:01.438 Firmware Version: 8.0.0 00:07:01.438 Recommended Arb Burst: 6 00:07:01.438 IEEE OUI Identifier: 00 54 52 00:07:01.438 Multi-path I/O 00:07:01.438 May have multiple subsystem ports: No 00:07:01.438 May have multiple controllers: No 00:07:01.438 Associated with SR-IOV VF: No 00:07:01.438 Max Data Transfer Size: 524288 00:07:01.438 Max Number of Namespaces: 256 00:07:01.438 Max Number of I/O Queues: 64 00:07:01.438 NVMe 
Specification Version (VS): 1.4 00:07:01.438 NVMe Specification Version (Identify): 1.4 00:07:01.438 Maximum Queue Entries: 2048 00:07:01.438 Contiguous Queues Required: Yes 00:07:01.438 Arbitration Mechanisms Supported 00:07:01.438 Weighted Round Robin: Not Supported 00:07:01.438 Vendor Specific: Not Supported 00:07:01.438 Reset Timeout: 7500 ms 00:07:01.438 Doorbell Stride: 4 bytes 00:07:01.438 NVM Subsystem Reset: Not Supported 00:07:01.438 Command Sets Supported 00:07:01.438 NVM Command Set: Supported 00:07:01.438 Boot Partition: Not Supported 00:07:01.438 Memory Page Size Minimum: 4096 bytes 00:07:01.438 Memory Page Size Maximum: 65536 bytes 00:07:01.438 Persistent Memory Region: Not Supported 00:07:01.438 Optional Asynchronous Events Supported 00:07:01.438 Namespace Attribute Notices: Supported 00:07:01.438 Firmware Activation Notices: Not Supported 00:07:01.438 ANA Change Notices: Not Supported 00:07:01.438 PLE Aggregate Log Change Notices: Not Supported 00:07:01.438 LBA Status Info Alert Notices: Not Supported 00:07:01.438 EGE Aggregate Log Change Notices: Not Supported 00:07:01.438 Normal NVM Subsystem Shutdown event: Not Supported 00:07:01.438 Zone Descriptor Change Notices: Not Supported 00:07:01.438 Discovery Log Change Notices: Not Supported 00:07:01.438 Controller Attributes 00:07:01.438 128-bit Host Identifier: Not Supported 00:07:01.438 Non-Operational Permissive Mode: Not Supported 00:07:01.438 NVM Sets: Not Supported 00:07:01.438 Read Recovery Levels: Not Supported 00:07:01.438 Endurance Groups: Not Supported 00:07:01.438 Predictable Latency Mode: Not Supported 00:07:01.438 Traffic Based Keep ALive: Not Supported 00:07:01.438 Namespace Granularity: Not Supported 00:07:01.438 SQ Associations: Not Supported 00:07:01.438 UUID List: Not Supported 00:07:01.438 Multi-Domain Subsystem: Not Supported 00:07:01.438 Fixed Capacity Management: Not Supported 00:07:01.438 Variable Capacity Management: Not Supported 00:07:01.438 Delete Endurance Group: Not Supported 00:07:01.438 Delete NVM Set: Not Supported 00:07:01.438 Extended LBA Formats Supported: Supported 00:07:01.438 Flexible Data Placement Supported: Not Supported 00:07:01.438 00:07:01.438 Controller Memory Buffer Support 00:07:01.438 ================================ 00:07:01.438 Supported: No 00:07:01.438 00:07:01.438 Persistent Memory Region Support 00:07:01.438 ================================ 00:07:01.438 Supported: No 00:07:01.438 00:07:01.438 Admin Command Set Attributes 00:07:01.438 ============================ 00:07:01.438 Security Send/Receive: Not Supported 00:07:01.438 Format NVM: Supported 00:07:01.438 Firmware Activate/Download: Not Supported 00:07:01.438 Namespace Management: Supported 00:07:01.438 Device Self-Test: Not Supported 00:07:01.438 Directives: Supported 00:07:01.438 NVMe-MI: Not Supported 00:07:01.438 Virtualization Management: Not Supported 00:07:01.438 Doorbell Buffer Config: Supported 00:07:01.438 Get LBA Status Capability: Not Supported 00:07:01.438 Command & Feature Lockdown Capability: Not Supported 00:07:01.438 Abort Command Limit: 4 00:07:01.438 Async Event Request Limit: 4 00:07:01.438 Number of Firmware Slots: N/A 00:07:01.438 Firmware Slot 1 Read-Only: N/A 00:07:01.438 Firmware Activation Without Reset: N/A 00:07:01.438 Multiple Update Detection Support: N/A 00:07:01.438 Firmware Update Granularity: No Information Provided 00:07:01.438 Per-Namespace SMART Log: Yes 00:07:01.438 Asymmetric Namespace Access Log Page: Not Supported 00:07:01.438 Subsystem NQN: nqn.2019-08.org.qemu:12341 
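Note: the nvme.sh xtrace lines in this log show the dumps being produced by a loop over "${bdfs[@]}" that invokes spdk_nvme_identify once per controller. A minimal standalone sketch of that loop, assuming the four PCIe BDFs seen in this run's output, would be:

    #!/usr/bin/env bash
    # Sketch only: reproduce the per-controller identify dumps from this log.
    # The BDF list is assumed from this run's output; adjust for other setups.
    SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin
    for bdf in 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0; do
        "$SPDK_BIN/spdk_nvme_identify" -r "trtype:PCIe traddr:$bdf" -i 0
    done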
00:07:01.438 Command Effects Log Page: Supported 00:07:01.438 Get Log Page Extended Data: Supported 00:07:01.438 Telemetry Log Pages: Not Supported 00:07:01.438 Persistent Event Log Pages: Not Supported 00:07:01.438 Supported Log Pages Log Page: May Support 00:07:01.438 Commands Supported & Effects Log Page: Not Supported 00:07:01.438 Feature Identifiers & Effects Log Page:May Support 00:07:01.438 NVMe-MI Commands & Effects Log Page: May Support 00:07:01.438 Data Area 4 for Telemetry Log: Not Supported 00:07:01.438 Error Log Page Entries Supported: 1 00:07:01.438 Keep Alive: Not Supported 00:07:01.438 00:07:01.438 NVM Command Set Attributes 00:07:01.438 ========================== 00:07:01.438 Submission Queue Entry Size 00:07:01.438 Max: 64 00:07:01.438 Min: 64 00:07:01.438 Completion Queue Entry Size 00:07:01.438 Max: 16 00:07:01.438 Min: 16 00:07:01.438 Number of Namespaces: 256 00:07:01.438 Compare Command: Supported 00:07:01.438 Write Uncorrectable Command: Not Supported 00:07:01.438 Dataset Management Command: Supported 00:07:01.438 Write Zeroes Command: Supported 00:07:01.438 Set Features Save Field: Supported 00:07:01.438 Reservations: Not Supported 00:07:01.438 Timestamp: Supported 00:07:01.438 Copy: Supported 00:07:01.438 Volatile Write Cache: Present 00:07:01.438 Atomic Write Unit (Normal): 1 00:07:01.438 Atomic Write Unit (PFail): 1 00:07:01.438 Atomic Compare & Write Unit: 1 00:07:01.438 Fused Compare & Write: Not Supported 00:07:01.438 Scatter-Gather List 00:07:01.438 SGL Command Set: Supported 00:07:01.438 SGL Keyed: Not Supported 00:07:01.438 SGL Bit Bucket Descriptor: Not Supported 00:07:01.438 SGL Metadata Pointer: Not Supported 00:07:01.438 Oversized SGL: Not Supported 00:07:01.438 SGL Metadata Address: Not Supported 00:07:01.438 SGL Offset: Not Supported 00:07:01.438 Transport SGL Data Block: Not Supported 00:07:01.438 Replay Protected Memory Block: Not Supported 00:07:01.438 00:07:01.438 Firmware Slot Information 00:07:01.438 ========================= 00:07:01.438 Active slot: 1 00:07:01.438 Slot 1 Firmware Revision: 1.0 00:07:01.438 00:07:01.438 00:07:01.438 Commands Supported and Effects 00:07:01.438 ============================== 00:07:01.438 Admin Commands 00:07:01.438 -------------- 00:07:01.438 Delete I/O Submission Queue (00h): Supported 00:07:01.438 Create I/O Submission Queue (01h): Supported 00:07:01.438 Get Log Page (02h): Supported 00:07:01.438 Delete I/O Completion Queue (04h): Supported 00:07:01.438 Create I/O Completion Queue (05h): Supported 00:07:01.438 Identify (06h): Supported 00:07:01.438 Abort (08h): Supported 00:07:01.438 Set Features (09h): Supported 00:07:01.438 Get Features (0Ah): Supported 00:07:01.438 Asynchronous Event Request (0Ch): Supported 00:07:01.438 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:01.438 Directive Send (19h): Supported 00:07:01.439 Directive Receive (1Ah): Supported 00:07:01.439 Virtualization Management (1Ch): Supported 00:07:01.439 Doorbell Buffer Config (7Ch): Supported 00:07:01.439 Format NVM (80h): Supported LBA-Change 00:07:01.439 I/O Commands 00:07:01.439 ------------ 00:07:01.439 Flush (00h): Supported LBA-Change 00:07:01.439 Write (01h): Supported LBA-Change 00:07:01.439 Read (02h): Supported 00:07:01.439 Compare (05h): Supported 00:07:01.439 Write Zeroes (08h): Supported LBA-Change 00:07:01.439 Dataset Management (09h): Supported LBA-Change 00:07:01.439 Unknown (0Ch): Supported 00:07:01.439 Unknown (12h): Supported 00:07:01.439 Copy (19h): Supported LBA-Change 00:07:01.439 Unknown (1Dh): 
Supported LBA-Change 00:07:01.439 00:07:01.439 Error Log 00:07:01.439 ========= 00:07:01.439 00:07:01.439 Arbitration 00:07:01.439 =========== 00:07:01.439 Arbitration Burst: no limit 00:07:01.439 00:07:01.439 Power Management 00:07:01.439 ================ 00:07:01.439 Number of Power States: 1 00:07:01.439 Current Power State: Power State #0 00:07:01.439 Power State #0: 00:07:01.439 Max Power: 25.00 W 00:07:01.439 Non-Operational State: Operational 00:07:01.439 Entry Latency: 16 microseconds 00:07:01.439 Exit Latency: 4 microseconds 00:07:01.439 Relative Read Throughput: 0 00:07:01.439 Relative Read Latency: 0 00:07:01.439 Relative Write Throughput: 0 00:07:01.439 Relative Write Latency: 0 00:07:01.439 Idle Power: Not Reported 00:07:01.439 Active Power: Not Reported 00:07:01.439 Non-Operational Permissive Mode: Not Supported 00:07:01.439 00:07:01.439 Health Information 00:07:01.439 ================== 00:07:01.439 Critical Warnings: 00:07:01.439 Available Spare Space: OK 00:07:01.439 Temperature: OK 00:07:01.439 Device Reliability: OK 00:07:01.439 Read Only: No 00:07:01.439 Volatile Memory Backup: OK 00:07:01.439 Current Temperature: 323 Kelvin (50 Celsius) 00:07:01.439 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:01.439 Available Spare: 0% 00:07:01.439 Available Spare Threshold: 0% 00:07:01.439 Life Percentage Used: 0% 00:07:01.439 Data Units Read: 1017 00:07:01.439 Data Units Written: 884 00:07:01.439 Host Read Commands: 52707 00:07:01.439 Host Write Commands: 51493 00:07:01.439 Controller Busy Time: 0 minutes 00:07:01.439 Power Cycles: 0 00:07:01.439 Power On Hours: 0 hours 00:07:01.439 Unsafe Shutdowns: 0 00:07:01.439 Unrecoverable Media Errors: 0 00:07:01.439 Lifetime Error Log Entries: 0 00:07:01.439 Warning Temperature Time: 0 minutes 00:07:01.439 Critical Temperature Time: 0 minutes 00:07:01.439 00:07:01.439 Number of Queues 00:07:01.439 ================ 00:07:01.439 Number of I/O Submission Queues: 64 00:07:01.439 Number of I/O Completion Queues: 64 00:07:01.439 00:07:01.439 ZNS Specific Controller Data 00:07:01.439 ============================ 00:07:01.439 Zone Append Size Limit: 0 00:07:01.439 00:07:01.439 00:07:01.439 Active Namespaces 00:07:01.439 ================= 00:07:01.439 Namespace ID:1 00:07:01.439 Error Recovery Timeout: Unlimited 00:07:01.439 Command Set Identifier: NVM (00h) 00:07:01.439 Deallocate: Supported 00:07:01.439 Deallocated/Unwritten Error: Supported 00:07:01.439 Deallocated Read Value: All 0x00 00:07:01.439 Deallocate in Write Zeroes: Not Supported 00:07:01.439 Deallocated Guard Field: 0xFFFF 00:07:01.439 Flush: Supported 00:07:01.439 Reservation: Not Supported 00:07:01.439 Namespace Sharing Capabilities: Private 00:07:01.439 Size (in LBAs): 1310720 (5GiB) 00:07:01.439 Capacity (in LBAs): 1310720 (5GiB) 00:07:01.439 Utilization (in LBAs): 1310720 (5GiB) 00:07:01.439 Thin Provisioning: Not Supported 00:07:01.439 Per-NS Atomic Units: No 00:07:01.439 Maximum Single Source Range Length: 128 00:07:01.439 Maximum Copy Length: 128 00:07:01.439 Maximum Source Range Count: 128 00:07:01.439 NGUID/EUI64 Never Reused: No 00:07:01.439 Namespace Write Protected: No 00:07:01.439 Number of LBA Formats: 8 00:07:01.439 Current LBA Format: LBA Format #04 00:07:01.439 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:01.439 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:01.439 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:01.439 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:01.439 LBA Format #04: Data Size: 4096 Metadata Size: 0 
00:07:01.439 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:01.439 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:01.439 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:01.439 00:07:01.439 NVM Specific Namespace Data 00:07:01.439 =========================== 00:07:01.439 Logical Block Storage Tag Mask: 0 00:07:01.439 Protection Information Capabilities: 00:07:01.439 16b Guard Protection Information Storage Tag Support: No 00:07:01.439 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:01.439 Storage Tag Check Read Support: No 00:07:01.439 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.439 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.439 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.439 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.439 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.439 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.439 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.439 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.439 01:19:57 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:01.439 01:19:57 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:07:01.697 ===================================================== 00:07:01.698 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:01.698 ===================================================== 00:07:01.698 Controller Capabilities/Features 00:07:01.698 ================================ 00:07:01.698 Vendor ID: 1b36 00:07:01.698 Subsystem Vendor ID: 1af4 00:07:01.698 Serial Number: 12342 00:07:01.698 Model Number: QEMU NVMe Ctrl 00:07:01.698 Firmware Version: 8.0.0 00:07:01.698 Recommended Arb Burst: 6 00:07:01.698 IEEE OUI Identifier: 00 54 52 00:07:01.698 Multi-path I/O 00:07:01.698 May have multiple subsystem ports: No 00:07:01.698 May have multiple controllers: No 00:07:01.698 Associated with SR-IOV VF: No 00:07:01.698 Max Data Transfer Size: 524288 00:07:01.698 Max Number of Namespaces: 256 00:07:01.698 Max Number of I/O Queues: 64 00:07:01.698 NVMe Specification Version (VS): 1.4 00:07:01.698 NVMe Specification Version (Identify): 1.4 00:07:01.698 Maximum Queue Entries: 2048 00:07:01.698 Contiguous Queues Required: Yes 00:07:01.698 Arbitration Mechanisms Supported 00:07:01.698 Weighted Round Robin: Not Supported 00:07:01.698 Vendor Specific: Not Supported 00:07:01.698 Reset Timeout: 7500 ms 00:07:01.698 Doorbell Stride: 4 bytes 00:07:01.698 NVM Subsystem Reset: Not Supported 00:07:01.698 Command Sets Supported 00:07:01.698 NVM Command Set: Supported 00:07:01.698 Boot Partition: Not Supported 00:07:01.698 Memory Page Size Minimum: 4096 bytes 00:07:01.698 Memory Page Size Maximum: 65536 bytes 00:07:01.698 Persistent Memory Region: Not Supported 00:07:01.698 Optional Asynchronous Events Supported 00:07:01.698 Namespace Attribute Notices: Supported 00:07:01.698 Firmware Activation Notices: Not Supported 00:07:01.698 ANA Change Notices: Not Supported 00:07:01.698 PLE Aggregate Log Change Notices: Not Supported 00:07:01.698 LBA Status Info Alert Notices: 
Not Supported 00:07:01.698 EGE Aggregate Log Change Notices: Not Supported 00:07:01.698 Normal NVM Subsystem Shutdown event: Not Supported 00:07:01.698 Zone Descriptor Change Notices: Not Supported 00:07:01.698 Discovery Log Change Notices: Not Supported 00:07:01.698 Controller Attributes 00:07:01.698 128-bit Host Identifier: Not Supported 00:07:01.698 Non-Operational Permissive Mode: Not Supported 00:07:01.698 NVM Sets: Not Supported 00:07:01.698 Read Recovery Levels: Not Supported 00:07:01.698 Endurance Groups: Not Supported 00:07:01.698 Predictable Latency Mode: Not Supported 00:07:01.698 Traffic Based Keep ALive: Not Supported 00:07:01.698 Namespace Granularity: Not Supported 00:07:01.698 SQ Associations: Not Supported 00:07:01.698 UUID List: Not Supported 00:07:01.698 Multi-Domain Subsystem: Not Supported 00:07:01.698 Fixed Capacity Management: Not Supported 00:07:01.698 Variable Capacity Management: Not Supported 00:07:01.698 Delete Endurance Group: Not Supported 00:07:01.698 Delete NVM Set: Not Supported 00:07:01.698 Extended LBA Formats Supported: Supported 00:07:01.698 Flexible Data Placement Supported: Not Supported 00:07:01.698 00:07:01.698 Controller Memory Buffer Support 00:07:01.698 ================================ 00:07:01.698 Supported: No 00:07:01.698 00:07:01.698 Persistent Memory Region Support 00:07:01.698 ================================ 00:07:01.698 Supported: No 00:07:01.698 00:07:01.698 Admin Command Set Attributes 00:07:01.698 ============================ 00:07:01.698 Security Send/Receive: Not Supported 00:07:01.698 Format NVM: Supported 00:07:01.698 Firmware Activate/Download: Not Supported 00:07:01.698 Namespace Management: Supported 00:07:01.698 Device Self-Test: Not Supported 00:07:01.698 Directives: Supported 00:07:01.698 NVMe-MI: Not Supported 00:07:01.698 Virtualization Management: Not Supported 00:07:01.698 Doorbell Buffer Config: Supported 00:07:01.698 Get LBA Status Capability: Not Supported 00:07:01.698 Command & Feature Lockdown Capability: Not Supported 00:07:01.698 Abort Command Limit: 4 00:07:01.698 Async Event Request Limit: 4 00:07:01.698 Number of Firmware Slots: N/A 00:07:01.698 Firmware Slot 1 Read-Only: N/A 00:07:01.698 Firmware Activation Without Reset: N/A 00:07:01.698 Multiple Update Detection Support: N/A 00:07:01.698 Firmware Update Granularity: No Information Provided 00:07:01.698 Per-Namespace SMART Log: Yes 00:07:01.698 Asymmetric Namespace Access Log Page: Not Supported 00:07:01.698 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:07:01.698 Command Effects Log Page: Supported 00:07:01.698 Get Log Page Extended Data: Supported 00:07:01.698 Telemetry Log Pages: Not Supported 00:07:01.698 Persistent Event Log Pages: Not Supported 00:07:01.698 Supported Log Pages Log Page: May Support 00:07:01.698 Commands Supported & Effects Log Page: Not Supported 00:07:01.698 Feature Identifiers & Effects Log Page:May Support 00:07:01.698 NVMe-MI Commands & Effects Log Page: May Support 00:07:01.698 Data Area 4 for Telemetry Log: Not Supported 00:07:01.698 Error Log Page Entries Supported: 1 00:07:01.698 Keep Alive: Not Supported 00:07:01.698 00:07:01.698 NVM Command Set Attributes 00:07:01.698 ========================== 00:07:01.698 Submission Queue Entry Size 00:07:01.698 Max: 64 00:07:01.698 Min: 64 00:07:01.698 Completion Queue Entry Size 00:07:01.698 Max: 16 00:07:01.698 Min: 16 00:07:01.698 Number of Namespaces: 256 00:07:01.698 Compare Command: Supported 00:07:01.698 Write Uncorrectable Command: Not Supported 00:07:01.698 Dataset Management Command: 
Supported 00:07:01.698 Write Zeroes Command: Supported 00:07:01.698 Set Features Save Field: Supported 00:07:01.698 Reservations: Not Supported 00:07:01.698 Timestamp: Supported 00:07:01.698 Copy: Supported 00:07:01.698 Volatile Write Cache: Present 00:07:01.698 Atomic Write Unit (Normal): 1 00:07:01.698 Atomic Write Unit (PFail): 1 00:07:01.698 Atomic Compare & Write Unit: 1 00:07:01.698 Fused Compare & Write: Not Supported 00:07:01.698 Scatter-Gather List 00:07:01.698 SGL Command Set: Supported 00:07:01.698 SGL Keyed: Not Supported 00:07:01.698 SGL Bit Bucket Descriptor: Not Supported 00:07:01.698 SGL Metadata Pointer: Not Supported 00:07:01.698 Oversized SGL: Not Supported 00:07:01.698 SGL Metadata Address: Not Supported 00:07:01.698 SGL Offset: Not Supported 00:07:01.698 Transport SGL Data Block: Not Supported 00:07:01.698 Replay Protected Memory Block: Not Supported 00:07:01.698 00:07:01.698 Firmware Slot Information 00:07:01.698 ========================= 00:07:01.698 Active slot: 1 00:07:01.698 Slot 1 Firmware Revision: 1.0 00:07:01.698 00:07:01.698 00:07:01.698 Commands Supported and Effects 00:07:01.698 ============================== 00:07:01.698 Admin Commands 00:07:01.698 -------------- 00:07:01.698 Delete I/O Submission Queue (00h): Supported 00:07:01.698 Create I/O Submission Queue (01h): Supported 00:07:01.698 Get Log Page (02h): Supported 00:07:01.698 Delete I/O Completion Queue (04h): Supported 00:07:01.698 Create I/O Completion Queue (05h): Supported 00:07:01.698 Identify (06h): Supported 00:07:01.698 Abort (08h): Supported 00:07:01.698 Set Features (09h): Supported 00:07:01.698 Get Features (0Ah): Supported 00:07:01.698 Asynchronous Event Request (0Ch): Supported 00:07:01.698 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:01.698 Directive Send (19h): Supported 00:07:01.698 Directive Receive (1Ah): Supported 00:07:01.698 Virtualization Management (1Ch): Supported 00:07:01.698 Doorbell Buffer Config (7Ch): Supported 00:07:01.698 Format NVM (80h): Supported LBA-Change 00:07:01.698 I/O Commands 00:07:01.698 ------------ 00:07:01.698 Flush (00h): Supported LBA-Change 00:07:01.698 Write (01h): Supported LBA-Change 00:07:01.698 Read (02h): Supported 00:07:01.698 Compare (05h): Supported 00:07:01.698 Write Zeroes (08h): Supported LBA-Change 00:07:01.698 Dataset Management (09h): Supported LBA-Change 00:07:01.698 Unknown (0Ch): Supported 00:07:01.698 Unknown (12h): Supported 00:07:01.698 Copy (19h): Supported LBA-Change 00:07:01.698 Unknown (1Dh): Supported LBA-Change 00:07:01.698 00:07:01.698 Error Log 00:07:01.698 ========= 00:07:01.698 00:07:01.698 Arbitration 00:07:01.698 =========== 00:07:01.698 Arbitration Burst: no limit 00:07:01.698 00:07:01.698 Power Management 00:07:01.698 ================ 00:07:01.698 Number of Power States: 1 00:07:01.698 Current Power State: Power State #0 00:07:01.698 Power State #0: 00:07:01.698 Max Power: 25.00 W 00:07:01.698 Non-Operational State: Operational 00:07:01.698 Entry Latency: 16 microseconds 00:07:01.698 Exit Latency: 4 microseconds 00:07:01.698 Relative Read Throughput: 0 00:07:01.698 Relative Read Latency: 0 00:07:01.698 Relative Write Throughput: 0 00:07:01.698 Relative Write Latency: 0 00:07:01.698 Idle Power: Not Reported 00:07:01.699 Active Power: Not Reported 00:07:01.699 Non-Operational Permissive Mode: Not Supported 00:07:01.699 00:07:01.699 Health Information 00:07:01.699 ================== 00:07:01.699 Critical Warnings: 00:07:01.699 Available Spare Space: OK 00:07:01.699 Temperature: OK 00:07:01.699 Device 
Reliability: OK 00:07:01.699 Read Only: No 00:07:01.699 Volatile Memory Backup: OK 00:07:01.699 Current Temperature: 323 Kelvin (50 Celsius) 00:07:01.699 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:01.699 Available Spare: 0% 00:07:01.699 Available Spare Threshold: 0% 00:07:01.699 Life Percentage Used: 0% 00:07:01.699 Data Units Read: 2153 00:07:01.699 Data Units Written: 1940 00:07:01.699 Host Read Commands: 108980 00:07:01.699 Host Write Commands: 107252 00:07:01.699 Controller Busy Time: 0 minutes 00:07:01.699 Power Cycles: 0 00:07:01.699 Power On Hours: 0 hours 00:07:01.699 Unsafe Shutdowns: 0 00:07:01.699 Unrecoverable Media Errors: 0 00:07:01.699 Lifetime Error Log Entries: 0 00:07:01.699 Warning Temperature Time: 0 minutes 00:07:01.699 Critical Temperature Time: 0 minutes 00:07:01.699 00:07:01.699 Number of Queues 00:07:01.699 ================ 00:07:01.699 Number of I/O Submission Queues: 64 00:07:01.699 Number of I/O Completion Queues: 64 00:07:01.699 00:07:01.699 ZNS Specific Controller Data 00:07:01.699 ============================ 00:07:01.699 Zone Append Size Limit: 0 00:07:01.699 00:07:01.699 00:07:01.699 Active Namespaces 00:07:01.699 ================= 00:07:01.699 Namespace ID:1 00:07:01.699 Error Recovery Timeout: Unlimited 00:07:01.699 Command Set Identifier: NVM (00h) 00:07:01.699 Deallocate: Supported 00:07:01.699 Deallocated/Unwritten Error: Supported 00:07:01.699 Deallocated Read Value: All 0x00 00:07:01.699 Deallocate in Write Zeroes: Not Supported 00:07:01.699 Deallocated Guard Field: 0xFFFF 00:07:01.699 Flush: Supported 00:07:01.699 Reservation: Not Supported 00:07:01.699 Namespace Sharing Capabilities: Private 00:07:01.699 Size (in LBAs): 1048576 (4GiB) 00:07:01.699 Capacity (in LBAs): 1048576 (4GiB) 00:07:01.699 Utilization (in LBAs): 1048576 (4GiB) 00:07:01.699 Thin Provisioning: Not Supported 00:07:01.699 Per-NS Atomic Units: No 00:07:01.699 Maximum Single Source Range Length: 128 00:07:01.699 Maximum Copy Length: 128 00:07:01.699 Maximum Source Range Count: 128 00:07:01.699 NGUID/EUI64 Never Reused: No 00:07:01.699 Namespace Write Protected: No 00:07:01.699 Number of LBA Formats: 8 00:07:01.699 Current LBA Format: LBA Format #04 00:07:01.699 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:01.699 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:01.699 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:01.699 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:01.699 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:01.699 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:01.699 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:01.699 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:01.699 00:07:01.699 NVM Specific Namespace Data 00:07:01.699 =========================== 00:07:01.699 Logical Block Storage Tag Mask: 0 00:07:01.699 Protection Information Capabilities: 00:07:01.699 16b Guard Protection Information Storage Tag Support: No 00:07:01.699 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:01.699 Storage Tag Check Read Support: No 00:07:01.699 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.699 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.699 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.699 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.699 Extended LBA Format #04: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.699 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.699 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.699 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.699 Namespace ID:2 00:07:01.699 Error Recovery Timeout: Unlimited 00:07:01.699 Command Set Identifier: NVM (00h) 00:07:01.699 Deallocate: Supported 00:07:01.699 Deallocated/Unwritten Error: Supported 00:07:01.699 Deallocated Read Value: All 0x00 00:07:01.699 Deallocate in Write Zeroes: Not Supported 00:07:01.699 Deallocated Guard Field: 0xFFFF 00:07:01.699 Flush: Supported 00:07:01.699 Reservation: Not Supported 00:07:01.699 Namespace Sharing Capabilities: Private 00:07:01.699 Size (in LBAs): 1048576 (4GiB) 00:07:01.699 Capacity (in LBAs): 1048576 (4GiB) 00:07:01.699 Utilization (in LBAs): 1048576 (4GiB) 00:07:01.699 Thin Provisioning: Not Supported 00:07:01.699 Per-NS Atomic Units: No 00:07:01.699 Maximum Single Source Range Length: 128 00:07:01.699 Maximum Copy Length: 128 00:07:01.699 Maximum Source Range Count: 128 00:07:01.699 NGUID/EUI64 Never Reused: No 00:07:01.699 Namespace Write Protected: No 00:07:01.699 Number of LBA Formats: 8 00:07:01.699 Current LBA Format: LBA Format #04 00:07:01.699 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:01.699 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:01.699 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:01.699 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:01.699 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:01.699 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:01.699 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:01.699 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:01.699 00:07:01.699 NVM Specific Namespace Data 00:07:01.699 =========================== 00:07:01.699 Logical Block Storage Tag Mask: 0 00:07:01.699 Protection Information Capabilities: 00:07:01.699 16b Guard Protection Information Storage Tag Support: No 00:07:01.699 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:01.699 Storage Tag Check Read Support: No 00:07:01.699 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.699 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.699 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.699 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.699 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.699 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.699 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.699 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.699 Namespace ID:3 00:07:01.699 Error Recovery Timeout: Unlimited 00:07:01.699 Command Set Identifier: NVM (00h) 00:07:01.699 Deallocate: Supported 00:07:01.699 Deallocated/Unwritten Error: Supported 00:07:01.699 Deallocated Read Value: All 0x00 00:07:01.699 Deallocate in Write Zeroes: Not Supported 00:07:01.699 Deallocated Guard Field: 0xFFFF 00:07:01.699 Flush: Supported 00:07:01.699 Reservation: Not Supported 00:07:01.699 
Namespace Sharing Capabilities: Private 00:07:01.699 Size (in LBAs): 1048576 (4GiB) 00:07:01.699 Capacity (in LBAs): 1048576 (4GiB) 00:07:01.699 Utilization (in LBAs): 1048576 (4GiB) 00:07:01.699 Thin Provisioning: Not Supported 00:07:01.699 Per-NS Atomic Units: No 00:07:01.699 Maximum Single Source Range Length: 128 00:07:01.699 Maximum Copy Length: 128 00:07:01.699 Maximum Source Range Count: 128 00:07:01.699 NGUID/EUI64 Never Reused: No 00:07:01.699 Namespace Write Protected: No 00:07:01.699 Number of LBA Formats: 8 00:07:01.699 Current LBA Format: LBA Format #04 00:07:01.699 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:01.699 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:01.699 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:01.699 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:01.699 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:01.699 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:01.699 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:01.699 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:01.699 00:07:01.699 NVM Specific Namespace Data 00:07:01.699 =========================== 00:07:01.699 Logical Block Storage Tag Mask: 0 00:07:01.699 Protection Information Capabilities: 00:07:01.699 16b Guard Protection Information Storage Tag Support: No 00:07:01.699 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:01.699 Storage Tag Check Read Support: No 00:07:01.699 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.699 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.699 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.699 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.699 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.699 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.699 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.700 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.700 01:19:57 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:01.700 01:19:57 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:07:01.960 ===================================================== 00:07:01.960 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:01.960 ===================================================== 00:07:01.960 Controller Capabilities/Features 00:07:01.960 ================================ 00:07:01.960 Vendor ID: 1b36 00:07:01.960 Subsystem Vendor ID: 1af4 00:07:01.960 Serial Number: 12343 00:07:01.960 Model Number: QEMU NVMe Ctrl 00:07:01.960 Firmware Version: 8.0.0 00:07:01.960 Recommended Arb Burst: 6 00:07:01.960 IEEE OUI Identifier: 00 54 52 00:07:01.960 Multi-path I/O 00:07:01.960 May have multiple subsystem ports: No 00:07:01.960 May have multiple controllers: Yes 00:07:01.960 Associated with SR-IOV VF: No 00:07:01.960 Max Data Transfer Size: 524288 00:07:01.960 Max Number of Namespaces: 256 00:07:01.960 Max Number of I/O Queues: 64 00:07:01.960 NVMe Specification Version (VS): 1.4 00:07:01.960 NVMe Specification Version (Identify): 1.4 00:07:01.960 Maximum Queue Entries: 2048 
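Note: as an illustrative cross-check of two figures reported in the dumps above (sketch only; values copied from this log), plain shell arithmetic reproduces them:

    # 1048576 LBAs at the current 4096-byte LBA format (#04) -> 4 GiB, as reported.
    echo "$(( 1048576 * 4096 / 1024 / 1024 / 1024 )) GiB"
    # 323 Kelvin -> 50 Celsius, matching 'Current Temperature: 323 Kelvin (50 Celsius)'.
    echo "$(( 323 - 273 )) Celsius"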
00:07:01.960 Contiguous Queues Required: Yes 00:07:01.960 Arbitration Mechanisms Supported 00:07:01.960 Weighted Round Robin: Not Supported 00:07:01.960 Vendor Specific: Not Supported 00:07:01.960 Reset Timeout: 7500 ms 00:07:01.960 Doorbell Stride: 4 bytes 00:07:01.960 NVM Subsystem Reset: Not Supported 00:07:01.960 Command Sets Supported 00:07:01.960 NVM Command Set: Supported 00:07:01.960 Boot Partition: Not Supported 00:07:01.960 Memory Page Size Minimum: 4096 bytes 00:07:01.960 Memory Page Size Maximum: 65536 bytes 00:07:01.960 Persistent Memory Region: Not Supported 00:07:01.960 Optional Asynchronous Events Supported 00:07:01.960 Namespace Attribute Notices: Supported 00:07:01.960 Firmware Activation Notices: Not Supported 00:07:01.960 ANA Change Notices: Not Supported 00:07:01.960 PLE Aggregate Log Change Notices: Not Supported 00:07:01.960 LBA Status Info Alert Notices: Not Supported 00:07:01.960 EGE Aggregate Log Change Notices: Not Supported 00:07:01.960 Normal NVM Subsystem Shutdown event: Not Supported 00:07:01.960 Zone Descriptor Change Notices: Not Supported 00:07:01.960 Discovery Log Change Notices: Not Supported 00:07:01.960 Controller Attributes 00:07:01.960 128-bit Host Identifier: Not Supported 00:07:01.960 Non-Operational Permissive Mode: Not Supported 00:07:01.960 NVM Sets: Not Supported 00:07:01.960 Read Recovery Levels: Not Supported 00:07:01.960 Endurance Groups: Supported 00:07:01.960 Predictable Latency Mode: Not Supported 00:07:01.960 Traffic Based Keep Alive: Not Supported 00:07:01.960 Namespace Granularity: Not Supported 00:07:01.960 SQ Associations: Not Supported 00:07:01.960 UUID List: Not Supported 00:07:01.960 Multi-Domain Subsystem: Not Supported 00:07:01.960 Fixed Capacity Management: Not Supported 00:07:01.960 Variable Capacity Management: Not Supported 00:07:01.960 Delete Endurance Group: Not Supported 00:07:01.960 Delete NVM Set: Not Supported 00:07:01.960 Extended LBA Formats Supported: Supported 00:07:01.960 Flexible Data Placement Supported: Supported 00:07:01.961 00:07:01.961 Controller Memory Buffer Support 00:07:01.961 ================================ 00:07:01.961 Supported: No 00:07:01.961 00:07:01.961 Persistent Memory Region Support 00:07:01.961 ================================ 00:07:01.961 Supported: No 00:07:01.961 00:07:01.961 Admin Command Set Attributes 00:07:01.961 ============================ 00:07:01.961 Security Send/Receive: Not Supported 00:07:01.961 Format NVM: Supported 00:07:01.961 Firmware Activate/Download: Not Supported 00:07:01.961 Namespace Management: Supported 00:07:01.961 Device Self-Test: Not Supported 00:07:01.961 Directives: Supported 00:07:01.961 NVMe-MI: Not Supported 00:07:01.961 Virtualization Management: Not Supported 00:07:01.961 Doorbell Buffer Config: Supported 00:07:01.961 Get LBA Status Capability: Not Supported 00:07:01.961 Command & Feature Lockdown Capability: Not Supported 00:07:01.961 Abort Command Limit: 4 00:07:01.961 Async Event Request Limit: 4 00:07:01.961 Number of Firmware Slots: N/A 00:07:01.961 Firmware Slot 1 Read-Only: N/A 00:07:01.961 Firmware Activation Without Reset: N/A 00:07:01.961 Multiple Update Detection Support: N/A 00:07:01.961 Firmware Update Granularity: No Information Provided 00:07:01.961 Per-Namespace SMART Log: Yes 00:07:01.961 Asymmetric Namespace Access Log Page: Not Supported 00:07:01.961 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:07:01.961 Command Effects Log Page: Supported 00:07:01.961 Get Log Page Extended Data: Supported 00:07:01.961 Telemetry Log Pages: Not Supported
00:07:01.961 Persistent Event Log Pages: Not Supported 00:07:01.961 Supported Log Pages Log Page: May Support 00:07:01.961 Commands Supported & Effects Log Page: Not Supported 00:07:01.961 Feature Identifiers & Effects Log Page: May Support 00:07:01.961 NVMe-MI Commands & Effects Log Page: May Support 00:07:01.961 Data Area 4 for Telemetry Log: Not Supported 00:07:01.961 Error Log Page Entries Supported: 1 00:07:01.961 Keep Alive: Not Supported 00:07:01.961 00:07:01.961 NVM Command Set Attributes 00:07:01.961 ========================== 00:07:01.961 Submission Queue Entry Size 00:07:01.961 Max: 64 00:07:01.961 Min: 64 00:07:01.961 Completion Queue Entry Size 00:07:01.961 Max: 16 00:07:01.961 Min: 16 00:07:01.961 Number of Namespaces: 256 00:07:01.961 Compare Command: Supported 00:07:01.961 Write Uncorrectable Command: Not Supported 00:07:01.961 Dataset Management Command: Supported 00:07:01.961 Write Zeroes Command: Supported 00:07:01.961 Set Features Save Field: Supported 00:07:01.961 Reservations: Not Supported 00:07:01.961 Timestamp: Supported 00:07:01.961 Copy: Supported 00:07:01.961 Volatile Write Cache: Present 00:07:01.961 Atomic Write Unit (Normal): 1 00:07:01.961 Atomic Write Unit (PFail): 1 00:07:01.961 Atomic Compare & Write Unit: 1 00:07:01.961 Fused Compare & Write: Not Supported 00:07:01.961 Scatter-Gather List 00:07:01.961 SGL Command Set: Supported 00:07:01.961 SGL Keyed: Not Supported 00:07:01.961 SGL Bit Bucket Descriptor: Not Supported 00:07:01.961 SGL Metadata Pointer: Not Supported 00:07:01.961 Oversized SGL: Not Supported 00:07:01.961 SGL Metadata Address: Not Supported 00:07:01.961 SGL Offset: Not Supported 00:07:01.961 Transport SGL Data Block: Not Supported 00:07:01.961 Replay Protected Memory Block: Not Supported 00:07:01.961 00:07:01.961 Firmware Slot Information 00:07:01.961 ========================= 00:07:01.961 Active slot: 1 00:07:01.961 Slot 1 Firmware Revision: 1.0 00:07:01.961 00:07:01.961 00:07:01.961 Commands Supported and Effects 00:07:01.961 ============================== 00:07:01.961 Admin Commands 00:07:01.961 -------------- 00:07:01.961 Delete I/O Submission Queue (00h): Supported 00:07:01.961 Create I/O Submission Queue (01h): Supported 00:07:01.961 Get Log Page (02h): Supported 00:07:01.961 Delete I/O Completion Queue (04h): Supported 00:07:01.961 Create I/O Completion Queue (05h): Supported 00:07:01.961 Identify (06h): Supported 00:07:01.961 Abort (08h): Supported 00:07:01.961 Set Features (09h): Supported 00:07:01.961 Get Features (0Ah): Supported 00:07:01.961 Asynchronous Event Request (0Ch): Supported 00:07:01.961 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:01.961 Directive Send (19h): Supported 00:07:01.961 Directive Receive (1Ah): Supported 00:07:01.961 Virtualization Management (1Ch): Supported 00:07:01.961 Doorbell Buffer Config (7Ch): Supported 00:07:01.961 Format NVM (80h): Supported LBA-Change 00:07:01.961 I/O Commands 00:07:01.961 ------------ 00:07:01.961 Flush (00h): Supported LBA-Change 00:07:01.961 Write (01h): Supported LBA-Change 00:07:01.961 Read (02h): Supported 00:07:01.961 Compare (05h): Supported 00:07:01.961 Write Zeroes (08h): Supported LBA-Change 00:07:01.961 Dataset Management (09h): Supported LBA-Change 00:07:01.961 Unknown (0Ch): Supported 00:07:01.961 Unknown (12h): Supported 00:07:01.961 Copy (19h): Supported LBA-Change 00:07:01.961 Unknown (1Dh): Supported LBA-Change 00:07:01.961 00:07:01.961 Error Log 00:07:01.961 ========= 00:07:01.961 00:07:01.961 Arbitration 00:07:01.961 ===========
00:07:01.961 Arbitration Burst: no limit 00:07:01.961 00:07:01.961 Power Management 00:07:01.961 ================ 00:07:01.961 Number of Power States: 1 00:07:01.961 Current Power State: Power State #0 00:07:01.961 Power State #0: 00:07:01.961 Max Power: 25.00 W 00:07:01.961 Non-Operational State: Operational 00:07:01.961 Entry Latency: 16 microseconds 00:07:01.961 Exit Latency: 4 microseconds 00:07:01.961 Relative Read Throughput: 0 00:07:01.961 Relative Read Latency: 0 00:07:01.961 Relative Write Throughput: 0 00:07:01.961 Relative Write Latency: 0 00:07:01.961 Idle Power: Not Reported 00:07:01.961 Active Power: Not Reported 00:07:01.961 Non-Operational Permissive Mode: Not Supported 00:07:01.961 00:07:01.961 Health Information 00:07:01.961 ================== 00:07:01.961 Critical Warnings: 00:07:01.961 Available Spare Space: OK 00:07:01.961 Temperature: OK 00:07:01.961 Device Reliability: OK 00:07:01.961 Read Only: No 00:07:01.961 Volatile Memory Backup: OK 00:07:01.961 Current Temperature: 323 Kelvin (50 Celsius) 00:07:01.961 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:01.961 Available Spare: 0% 00:07:01.961 Available Spare Threshold: 0% 00:07:01.961 Life Percentage Used: 0% 00:07:01.961 Data Units Read: 832 00:07:01.961 Data Units Written: 761 00:07:01.961 Host Read Commands: 37237 00:07:01.961 Host Write Commands: 36660 00:07:01.961 Controller Busy Time: 0 minutes 00:07:01.961 Power Cycles: 0 00:07:01.961 Power On Hours: 0 hours 00:07:01.961 Unsafe Shutdowns: 0 00:07:01.961 Unrecoverable Media Errors: 0 00:07:01.961 Lifetime Error Log Entries: 0 00:07:01.961 Warning Temperature Time: 0 minutes 00:07:01.961 Critical Temperature Time: 0 minutes 00:07:01.961 00:07:01.961 Number of Queues 00:07:01.961 ================ 00:07:01.961 Number of I/O Submission Queues: 64 00:07:01.961 Number of I/O Completion Queues: 64 00:07:01.961 00:07:01.961 ZNS Specific Controller Data 00:07:01.961 ============================ 00:07:01.961 Zone Append Size Limit: 0 00:07:01.961 00:07:01.961 00:07:01.961 Active Namespaces 00:07:01.961 ================= 00:07:01.961 Namespace ID:1 00:07:01.961 Error Recovery Timeout: Unlimited 00:07:01.961 Command Set Identifier: NVM (00h) 00:07:01.961 Deallocate: Supported 00:07:01.961 Deallocated/Unwritten Error: Supported 00:07:01.961 Deallocated Read Value: All 0x00 00:07:01.961 Deallocate in Write Zeroes: Not Supported 00:07:01.961 Deallocated Guard Field: 0xFFFF 00:07:01.961 Flush: Supported 00:07:01.961 Reservation: Not Supported 00:07:01.961 Namespace Sharing Capabilities: Multiple Controllers 00:07:01.961 Size (in LBAs): 262144 (1GiB) 00:07:01.961 Capacity (in LBAs): 262144 (1GiB) 00:07:01.961 Utilization (in LBAs): 262144 (1GiB) 00:07:01.961 Thin Provisioning: Not Supported 00:07:01.961 Per-NS Atomic Units: No 00:07:01.961 Maximum Single Source Range Length: 128 00:07:01.961 Maximum Copy Length: 128 00:07:01.961 Maximum Source Range Count: 128 00:07:01.961 NGUID/EUI64 Never Reused: No 00:07:01.961 Namespace Write Protected: No 00:07:01.961 Endurance group ID: 1 00:07:01.961 Number of LBA Formats: 8 00:07:01.961 Current LBA Format: LBA Format #04 00:07:01.961 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:01.961 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:01.961 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:01.961 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:01.961 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:01.961 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:01.961 LBA Format #06: Data Size: 4096 
Metadata Size: 16 00:07:01.961 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:01.961 00:07:01.961 Get Feature FDP: 00:07:01.961 ================ 00:07:01.961 Enabled: Yes 00:07:01.961 FDP configuration index: 0 00:07:01.961 00:07:01.962 FDP configurations log page 00:07:01.962 =========================== 00:07:01.962 Number of FDP configurations: 1 00:07:01.962 Version: 0 00:07:01.962 Size: 112 00:07:01.962 FDP Configuration Descriptor: 0 00:07:01.962 Descriptor Size: 96 00:07:01.962 Reclaim Group Identifier format: 2 00:07:01.962 FDP Volatile Write Cache: Not Present 00:07:01.962 FDP Configuration: Valid 00:07:01.962 Vendor Specific Size: 0 00:07:01.962 Number of Reclaim Groups: 2 00:07:01.962 Number of Reclaim Unit Handles: 8 00:07:01.962 Max Placement Identifiers: 128 00:07:01.962 Number of Namespaces Supported: 256 00:07:01.962 Reclaim unit Nominal Size: 6000000 bytes 00:07:01.962 Estimated Reclaim Unit Time Limit: Not Reported 00:07:01.962 RUH Desc #000: RUH Type: Initially Isolated 00:07:01.962 RUH Desc #001: RUH Type: Initially Isolated 00:07:01.962 RUH Desc #002: RUH Type: Initially Isolated 00:07:01.962 RUH Desc #003: RUH Type: Initially Isolated 00:07:01.962 RUH Desc #004: RUH Type: Initially Isolated 00:07:01.962 RUH Desc #005: RUH Type: Initially Isolated 00:07:01.962 RUH Desc #006: RUH Type: Initially Isolated 00:07:01.962 RUH Desc #007: RUH Type: Initially Isolated 00:07:01.962 00:07:01.962 FDP reclaim unit handle usage log page 00:07:01.962 ====================================== 00:07:01.962 Number of Reclaim Unit Handles: 8 00:07:01.962 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:07:01.962 RUH Usage Desc #001: RUH Attributes: Unused 00:07:01.962 RUH Usage Desc #002: RUH Attributes: Unused 00:07:01.962 RUH Usage Desc #003: RUH Attributes: Unused 00:07:01.962 RUH Usage Desc #004: RUH Attributes: Unused 00:07:01.962 RUH Usage Desc #005: RUH Attributes: Unused 00:07:01.962 RUH Usage Desc #006: RUH Attributes: Unused 00:07:01.962 RUH Usage Desc #007: RUH Attributes: Unused 00:07:01.962 00:07:01.962 FDP statistics log page 00:07:01.962 ======================= 00:07:01.962 Host bytes with metadata written: 495558656 00:07:01.962 Media bytes with metadata written: 495611904 00:07:01.962 Media bytes erased: 0 00:07:01.962 00:07:01.962 FDP events log page 00:07:01.962 =================== 00:07:01.962 Number of FDP events: 0 00:07:01.962 00:07:01.962 NVM Specific Namespace Data 00:07:01.962 =========================== 00:07:01.962 Logical Block Storage Tag Mask: 0 00:07:01.962 Protection Information Capabilities: 00:07:01.962 16b Guard Protection Information Storage Tag Support: No 00:07:01.962 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:01.962 Storage Tag Check Read Support: No 00:07:01.962 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.962 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.962 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.962 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.962 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.962 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.962 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.962 Extended LBA
Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.962 00:07:01.962 real 0m1.113s 00:07:01.962 user 0m0.400s 00:07:01.962 sys 0m0.503s 00:07:01.962 01:19:57 nvme.nvme_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:01.962 01:19:57 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:07:01.962 ************************************ 00:07:01.962 END TEST nvme_identify 00:07:01.962 ************************************ 00:07:01.962 01:19:57 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:07:01.962 01:19:57 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:01.962 01:19:57 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:01.962 01:19:57 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:01.962 ************************************ 00:07:01.962 START TEST nvme_perf 00:07:01.962 ************************************ 00:07:01.962 01:19:57 nvme.nvme_perf -- common/autotest_common.sh@1125 -- # nvme_perf 00:07:01.962 01:19:57 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:07:03.337 Initializing NVMe Controllers 00:07:03.337 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:03.337 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:03.337 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:03.337 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:03.337 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:07:03.337 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:07:03.337 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:07:03.337 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:07:03.337 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:07:03.337 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:07:03.337 Initialization complete. Launching workers. 
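The spdk_nvme_perf invocation above runs a one-second (-t 1) sequential-read workload (-w read) at queue depth 128 (-q 128) with 12288-byte I/Os (-o 12288); the doubled -L flag evidently requests the per-device latency summaries and histograms that follow. As a quick sanity check on the results table below (an illustrative sketch only, not part of the test run; the constants are copied from the 0000:00:10.0 row), the MiB/s column is IOPS * I/O size / 2^20, and Little's law ties IOPS to queue depth over average latency:

# MiB/s column: IOPS * I/O size in bytes / 2^20 -> matches the reported 205.06
awk 'BEGIN { printf "%.2f MiB/s\n", 17498.85 * 12288 / (1024 * 1024) }'
# Little's law: queue depth / average latency (us) -> ~17477 IOPS,
# in line with the 17498.85 IOPS reported for 0000:00:10.0
awk 'BEGIN { printf "%.0f IOPS\n", 128 * 1e6 / 7324.01 }'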
00:07:03.337 ========================================================
00:07:03.337 Latency(us)
00:07:03.337 Device Information : IOPS MiB/s Average min max
00:07:03.337 PCIE (0000:00:10.0) NSID 1 from core 0: 17498.85 205.06 7324.01 5615.43 40361.89
00:07:03.337 PCIE (0000:00:11.0) NSID 1 from core 0: 17498.85 205.06 7314.07 5668.22 38954.09
00:07:03.337 PCIE (0000:00:13.0) NSID 1 from core 0: 17498.85 205.06 7302.94 5652.94 38393.26
00:07:03.337 PCIE (0000:00:12.0) NSID 1 from core 0: 17498.85 205.06 7291.63 5632.43 37999.01
00:07:03.337 PCIE (0000:00:12.0) NSID 2 from core 0: 17498.85 205.06 7280.36 5649.28 36560.60
00:07:03.337 PCIE (0000:00:12.0) NSID 3 from core 0: 17562.71 205.81 7242.29 5681.44 28110.06
00:07:03.337 ========================================================
00:07:03.337 Total : 105056.96 1231.14 7292.52 5615.43 40361.89
00:07:03.337
00:07:03.337 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0:
00:07:03.337 =================================================================================
00:07:03.337 1.00000% : 5747.003us
00:07:03.337 10.00000% : 5923.446us
00:07:03.337 25.00000% : 6125.095us
00:07:03.337 50.00000% : 6503.188us
00:07:03.337 75.00000% : 6956.898us
00:07:03.337 90.00000% : 10183.286us
00:07:03.337 95.00000% : 12149.366us
00:07:03.337 98.00000% : 13913.797us
00:07:03.337 99.00000% : 16434.412us
00:07:03.337 99.50000% : 31860.578us
00:07:03.337 99.90000% : 39926.548us
00:07:03.337 99.99000% : 40329.846us
00:07:03.337 99.99900% : 40531.495us
00:07:03.337 99.99990% : 40531.495us
00:07:03.337 99.99999% : 40531.495us
00:07:03.337
00:07:03.337 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0:
00:07:03.337 =================================================================================
00:07:03.337 1.00000% : 5822.622us
00:07:03.337 10.00000% : 5973.858us
00:07:03.337 25.00000% : 6150.302us
00:07:03.337 50.00000% : 6452.775us
00:07:03.337 75.00000% : 6906.486us
00:07:03.337 90.00000% : 10082.462us
00:07:03.337 95.00000% : 11846.892us
00:07:03.337 98.00000% : 14115.446us
00:07:03.337 99.00000% : 16535.237us
00:07:03.337 99.50000% : 30247.385us
00:07:03.337 99.90000% : 38515.003us
00:07:03.337 99.99000% : 39119.951us
00:07:03.337 99.99900% : 39119.951us
00:07:03.337 99.99990% : 39119.951us
00:07:03.337 99.99999% : 39119.951us
00:07:03.337
00:07:03.337 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0:
00:07:03.337 =================================================================================
00:07:03.337 1.00000% : 5822.622us
00:07:03.337 10.00000% : 5973.858us
00:07:03.337 25.00000% : 6150.302us
00:07:03.337 50.00000% : 6452.775us
00:07:03.337 75.00000% : 6906.486us
00:07:03.338 90.00000% : 10082.462us
00:07:03.338 95.00000% : 11897.305us
00:07:03.338 98.00000% : 14115.446us
00:07:03.338 99.00000% : 16535.237us
00:07:03.338 99.50000% : 29642.437us
00:07:03.338 99.90000% : 37910.055us
00:07:03.338 99.99000% : 38515.003us
00:07:03.338 99.99900% : 38515.003us
00:07:03.338 99.99990% : 38515.003us
00:07:03.338 99.99999% : 38515.003us
00:07:03.338
00:07:03.338 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0:
00:07:03.338 =================================================================================
00:07:03.338 1.00000% : 5822.622us
00:07:03.338 10.00000% : 5973.858us
00:07:03.338 25.00000% : 6150.302us
00:07:03.338 50.00000% : 6452.775us
00:07:03.338 75.00000% : 6856.074us
00:07:03.338 90.00000% : 10032.049us
00:07:03.338 95.00000% : 11796.480us
00:07:03.338 98.00000% : 13812.972us
00:07:03.338
99.00000% : 16837.711us 00:07:03.338 99.50000% : 28835.840us 00:07:03.338 99.90000% : 37506.757us 00:07:03.338 99.99000% : 38111.705us 00:07:03.338 99.99900% : 38111.705us 00:07:03.338 99.99990% : 38111.705us 00:07:03.338 99.99999% : 38111.705us 00:07:03.338 00:07:03.338 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:03.338 ================================================================================= 00:07:03.338 1.00000% : 5822.622us 00:07:03.338 10.00000% : 5973.858us 00:07:03.338 25.00000% : 6150.302us 00:07:03.338 50.00000% : 6452.775us 00:07:03.338 75.00000% : 6906.486us 00:07:03.338 90.00000% : 9931.225us 00:07:03.338 95.00000% : 11947.717us 00:07:03.338 98.00000% : 13812.972us 00:07:03.338 99.00000% : 16434.412us 00:07:03.338 99.50000% : 28029.243us 00:07:03.338 99.90000% : 36095.212us 00:07:03.338 99.99000% : 36700.160us 00:07:03.338 99.99900% : 36700.160us 00:07:03.338 99.99990% : 36700.160us 00:07:03.338 99.99999% : 36700.160us 00:07:03.338 00:07:03.338 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:03.338 ================================================================================= 00:07:03.338 1.00000% : 5822.622us 00:07:03.338 10.00000% : 5973.858us 00:07:03.338 25.00000% : 6150.302us 00:07:03.338 50.00000% : 6452.775us 00:07:03.338 75.00000% : 6956.898us 00:07:03.338 90.00000% : 10132.874us 00:07:03.338 95.00000% : 12199.778us 00:07:03.338 98.00000% : 13812.972us 00:07:03.338 99.00000% : 16232.763us 00:07:03.338 99.50000% : 19963.274us 00:07:03.338 99.90000% : 27625.945us 00:07:03.338 99.99000% : 28230.892us 00:07:03.338 99.99900% : 28230.892us 00:07:03.338 99.99990% : 28230.892us 00:07:03.338 99.99999% : 28230.892us 00:07:03.338 00:07:03.338 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:03.338 ============================================================================== 00:07:03.338 Range in us Cumulative IO count 00:07:03.338 5595.766 - 5620.972: 0.0057% ( 1) 00:07:03.338 5620.972 - 5646.178: 0.0570% ( 9) 00:07:03.338 5646.178 - 5671.385: 0.1939% ( 24) 00:07:03.338 5671.385 - 5696.591: 0.4448% ( 44) 00:07:03.338 5696.591 - 5721.797: 0.8269% ( 67) 00:07:03.338 5721.797 - 5747.003: 1.4998% ( 118) 00:07:03.338 5747.003 - 5772.209: 2.3495% ( 149) 00:07:03.338 5772.209 - 5797.415: 3.6211% ( 223) 00:07:03.338 5797.415 - 5822.622: 5.0240% ( 246) 00:07:03.338 5822.622 - 5847.828: 6.4610% ( 252) 00:07:03.338 5847.828 - 5873.034: 8.0121% ( 272) 00:07:03.338 5873.034 - 5898.240: 9.5404% ( 268) 00:07:03.338 5898.240 - 5923.446: 11.2968% ( 308) 00:07:03.338 5923.446 - 5948.652: 12.9904% ( 297) 00:07:03.338 5948.652 - 5973.858: 14.6841% ( 297) 00:07:03.338 5973.858 - 5999.065: 16.3948% ( 300) 00:07:03.338 5999.065 - 6024.271: 18.3223% ( 338) 00:07:03.338 6024.271 - 6049.477: 20.0844% ( 309) 00:07:03.338 6049.477 - 6074.683: 21.8807% ( 315) 00:07:03.338 6074.683 - 6099.889: 23.6884% ( 317) 00:07:03.338 6099.889 - 6125.095: 25.4790% ( 314) 00:07:03.338 6125.095 - 6150.302: 27.2411% ( 309) 00:07:03.338 6150.302 - 6175.508: 29.2256% ( 348) 00:07:03.338 6175.508 - 6200.714: 30.9307% ( 299) 00:07:03.338 6200.714 - 6225.920: 32.7441% ( 318) 00:07:03.338 6225.920 - 6251.126: 34.6772% ( 339) 00:07:03.338 6251.126 - 6276.332: 36.4507% ( 311) 00:07:03.338 6276.332 - 6301.538: 38.1615% ( 300) 00:07:03.338 6301.538 - 6326.745: 40.1802% ( 354) 00:07:03.338 6326.745 - 6351.951: 41.8625% ( 295) 00:07:03.338 6351.951 - 6377.157: 43.7728% ( 335) 00:07:03.338 6377.157 - 6402.363: 45.6090% ( 322) 00:07:03.338 6402.363 - 6427.569: 
47.3882% ( 312) 00:07:03.338 6427.569 - 6452.775: 49.2359% ( 324) 00:07:03.338 6452.775 - 6503.188: 52.9197% ( 646) 00:07:03.338 6503.188 - 6553.600: 56.7632% ( 674) 00:07:03.338 6553.600 - 6604.012: 60.5383% ( 662) 00:07:03.338 6604.012 - 6654.425: 64.1994% ( 642) 00:07:03.338 6654.425 - 6704.837: 67.6722% ( 609) 00:07:03.338 6704.837 - 6755.249: 70.5064% ( 497) 00:07:03.338 6755.249 - 6805.662: 72.5536% ( 359) 00:07:03.338 6805.662 - 6856.074: 73.7797% ( 215) 00:07:03.338 6856.074 - 6906.486: 74.6293% ( 149) 00:07:03.338 6906.486 - 6956.898: 75.2623% ( 111) 00:07:03.338 6956.898 - 7007.311: 75.8155% ( 97) 00:07:03.338 7007.311 - 7057.723: 76.2203% ( 71) 00:07:03.338 7057.723 - 7108.135: 76.5796% ( 63) 00:07:03.338 7108.135 - 7158.548: 76.9218% ( 60) 00:07:03.338 7158.548 - 7208.960: 77.3152% ( 69) 00:07:03.338 7208.960 - 7259.372: 77.7486% ( 76) 00:07:03.338 7259.372 - 7309.785: 78.1022% ( 62) 00:07:03.338 7309.785 - 7360.197: 78.5185% ( 73) 00:07:03.338 7360.197 - 7410.609: 78.9234% ( 71) 00:07:03.338 7410.609 - 7461.022: 79.3510% ( 75) 00:07:03.338 7461.022 - 7511.434: 79.7445% ( 69) 00:07:03.338 7511.434 - 7561.846: 80.1209% ( 66) 00:07:03.338 7561.846 - 7612.258: 80.4859% ( 64) 00:07:03.338 7612.258 - 7662.671: 80.8337% ( 61) 00:07:03.338 7662.671 - 7713.083: 81.1759% ( 60) 00:07:03.338 7713.083 - 7763.495: 81.5408% ( 64) 00:07:03.338 7763.495 - 7813.908: 81.8944% ( 62) 00:07:03.338 7813.908 - 7864.320: 82.2365% ( 60) 00:07:03.338 7864.320 - 7914.732: 82.5730% ( 59) 00:07:03.338 7914.732 - 7965.145: 82.8752% ( 53) 00:07:03.338 7965.145 - 8015.557: 83.1661% ( 51) 00:07:03.338 8015.557 - 8065.969: 83.4797% ( 55) 00:07:03.338 8065.969 - 8116.382: 83.7876% ( 54) 00:07:03.338 8116.382 - 8166.794: 84.0100% ( 39) 00:07:03.338 8166.794 - 8217.206: 84.2781% ( 47) 00:07:03.338 8217.206 - 8267.618: 84.4776% ( 35) 00:07:03.338 8267.618 - 8318.031: 84.7057% ( 40) 00:07:03.338 8318.031 - 8368.443: 84.8939% ( 33) 00:07:03.338 8368.443 - 8418.855: 85.1277% ( 41) 00:07:03.338 8418.855 - 8469.268: 85.3444% ( 38) 00:07:03.338 8469.268 - 8519.680: 85.5212% ( 31) 00:07:03.338 8519.680 - 8570.092: 85.7037% ( 32) 00:07:03.338 8570.092 - 8620.505: 85.8976% ( 34) 00:07:03.338 8620.505 - 8670.917: 86.0744% ( 31) 00:07:03.338 8670.917 - 8721.329: 86.2283% ( 27) 00:07:03.338 8721.329 - 8771.742: 86.3823% ( 27) 00:07:03.338 8771.742 - 8822.154: 86.5363% ( 27) 00:07:03.338 8822.154 - 8872.566: 86.6788% ( 25) 00:07:03.338 8872.566 - 8922.978: 86.8328% ( 27) 00:07:03.338 8922.978 - 8973.391: 86.9754% ( 25) 00:07:03.338 8973.391 - 9023.803: 87.1179% ( 25) 00:07:03.338 9023.803 - 9074.215: 87.2320% ( 20) 00:07:03.338 9074.215 - 9124.628: 87.3859% ( 27) 00:07:03.338 9124.628 - 9175.040: 87.5513% ( 29) 00:07:03.338 9175.040 - 9225.452: 87.7338% ( 32) 00:07:03.338 9225.452 - 9275.865: 87.8821% ( 26) 00:07:03.338 9275.865 - 9326.277: 88.0246% ( 25) 00:07:03.338 9326.277 - 9376.689: 88.1729% ( 26) 00:07:03.338 9376.689 - 9427.102: 88.2812% ( 19) 00:07:03.338 9427.102 - 9477.514: 88.4067% ( 22) 00:07:03.338 9477.514 - 9527.926: 88.5379% ( 23) 00:07:03.338 9527.926 - 9578.338: 88.6405% ( 18) 00:07:03.338 9578.338 - 9628.751: 88.7318% ( 16) 00:07:03.338 9628.751 - 9679.163: 88.8458% ( 20) 00:07:03.338 9679.163 - 9729.575: 88.9484% ( 18) 00:07:03.338 9729.575 - 9779.988: 89.0568% ( 19) 00:07:03.338 9779.988 - 9830.400: 89.1480% ( 16) 00:07:03.338 9830.400 - 9880.812: 89.2279% ( 14) 00:07:03.338 9880.812 - 9931.225: 89.3191% ( 16) 00:07:03.338 9931.225 - 9981.637: 89.4161% ( 17) 00:07:03.338 9981.637 - 10032.049: 89.5586% ( 
25) 00:07:03.338 10032.049 - 10082.462: 89.7354% ( 31) 00:07:03.338 10082.462 - 10132.874: 89.9236% ( 33) 00:07:03.338 10132.874 - 10183.286: 90.1289% ( 36) 00:07:03.338 10183.286 - 10233.698: 90.2372% ( 19) 00:07:03.338 10233.698 - 10284.111: 90.4596% ( 39) 00:07:03.339 10284.111 - 10334.523: 90.5794% ( 21) 00:07:03.339 10334.523 - 10384.935: 90.7105% ( 23) 00:07:03.339 10384.935 - 10435.348: 90.8987% ( 33) 00:07:03.339 10435.348 - 10485.760: 91.0584% ( 28) 00:07:03.339 10485.760 - 10536.172: 91.2010% ( 25) 00:07:03.339 10536.172 - 10586.585: 91.3606% ( 28) 00:07:03.339 10586.585 - 10636.997: 91.5146% ( 27) 00:07:03.339 10636.997 - 10687.409: 91.7541% ( 42) 00:07:03.339 10687.409 - 10737.822: 91.9024% ( 26) 00:07:03.339 10737.822 - 10788.234: 92.0849% ( 32) 00:07:03.339 10788.234 - 10838.646: 92.2502% ( 29) 00:07:03.339 10838.646 - 10889.058: 92.4555% ( 36) 00:07:03.339 10889.058 - 10939.471: 92.6209% ( 29) 00:07:03.339 10939.471 - 10989.883: 92.7692% ( 26) 00:07:03.339 10989.883 - 11040.295: 92.9459% ( 31) 00:07:03.339 11040.295 - 11090.708: 93.1569% ( 37) 00:07:03.339 11090.708 - 11141.120: 93.2938% ( 24) 00:07:03.339 11141.120 - 11191.532: 93.4250% ( 23) 00:07:03.339 11191.532 - 11241.945: 93.5447% ( 21) 00:07:03.339 11241.945 - 11292.357: 93.6302% ( 15) 00:07:03.339 11292.357 - 11342.769: 93.7044% ( 13) 00:07:03.339 11342.769 - 11393.182: 93.8184% ( 20) 00:07:03.339 11393.182 - 11443.594: 93.9154% ( 17) 00:07:03.339 11443.594 - 11494.006: 94.0066% ( 16) 00:07:03.339 11494.006 - 11544.418: 94.1036% ( 17) 00:07:03.339 11544.418 - 11594.831: 94.1948% ( 16) 00:07:03.339 11594.831 - 11645.243: 94.2632% ( 12) 00:07:03.339 11645.243 - 11695.655: 94.3488% ( 15) 00:07:03.339 11695.655 - 11746.068: 94.4286% ( 14) 00:07:03.339 11746.068 - 11796.480: 94.5198% ( 16) 00:07:03.339 11796.480 - 11846.892: 94.5940% ( 13) 00:07:03.339 11846.892 - 11897.305: 94.6624% ( 12) 00:07:03.339 11897.305 - 11947.717: 94.7194% ( 10) 00:07:03.339 11947.717 - 11998.129: 94.7879% ( 12) 00:07:03.339 11998.129 - 12048.542: 94.8791% ( 16) 00:07:03.339 12048.542 - 12098.954: 94.9818% ( 18) 00:07:03.339 12098.954 - 12149.366: 95.0787% ( 17) 00:07:03.339 12149.366 - 12199.778: 95.1471% ( 12) 00:07:03.339 12199.778 - 12250.191: 95.2156% ( 12) 00:07:03.339 12250.191 - 12300.603: 95.3125% ( 17) 00:07:03.339 12300.603 - 12351.015: 95.4094% ( 17) 00:07:03.339 12351.015 - 12401.428: 95.4722% ( 11) 00:07:03.339 12401.428 - 12451.840: 95.5520% ( 14) 00:07:03.339 12451.840 - 12502.252: 95.6547% ( 18) 00:07:03.339 12502.252 - 12552.665: 95.7345% ( 14) 00:07:03.339 12552.665 - 12603.077: 95.8599% ( 22) 00:07:03.339 12603.077 - 12653.489: 95.9512% ( 16) 00:07:03.339 12653.489 - 12703.902: 96.0538% ( 18) 00:07:03.339 12703.902 - 12754.314: 96.1736% ( 21) 00:07:03.339 12754.314 - 12804.726: 96.2477% ( 13) 00:07:03.339 12804.726 - 12855.138: 96.3333% ( 15) 00:07:03.339 12855.138 - 12905.551: 96.4074% ( 13) 00:07:03.339 12905.551 - 13006.375: 96.5443% ( 24) 00:07:03.339 13006.375 - 13107.200: 96.7609% ( 38) 00:07:03.339 13107.200 - 13208.025: 96.9149% ( 27) 00:07:03.339 13208.025 - 13308.849: 97.1259% ( 37) 00:07:03.339 13308.849 - 13409.674: 97.2913% ( 29) 00:07:03.339 13409.674 - 13510.498: 97.4567% ( 29) 00:07:03.339 13510.498 - 13611.323: 97.6505% ( 34) 00:07:03.339 13611.323 - 13712.148: 97.7760% ( 22) 00:07:03.339 13712.148 - 13812.972: 97.9186% ( 25) 00:07:03.339 13812.972 - 13913.797: 98.0896% ( 30) 00:07:03.339 13913.797 - 14014.622: 98.1752% ( 15) 00:07:03.339 14014.622 - 14115.446: 98.2607% ( 15) 00:07:03.339 14115.446 - 
14216.271: 98.3577% ( 17) 00:07:03.339 14216.271 - 14317.095: 98.4432% ( 15) 00:07:03.339 14317.095 - 14417.920: 98.4831% ( 7) 00:07:03.339 14417.920 - 14518.745: 98.5173% ( 6) 00:07:03.339 14518.745 - 14619.569: 98.5458% ( 5) 00:07:03.339 14619.569 - 14720.394: 98.5687% ( 4) 00:07:03.339 14720.394 - 14821.218: 98.6086% ( 7) 00:07:03.339 14821.218 - 14922.043: 98.6314% ( 4) 00:07:03.339 14922.043 - 15022.868: 98.6656% ( 6) 00:07:03.339 15022.868 - 15123.692: 98.6770% ( 2) 00:07:03.339 15123.692 - 15224.517: 98.7226% ( 8) 00:07:03.339 15224.517 - 15325.342: 98.7397% ( 3) 00:07:03.339 15325.342 - 15426.166: 98.7797% ( 7) 00:07:03.339 15426.166 - 15526.991: 98.7968% ( 3) 00:07:03.339 15526.991 - 15627.815: 98.8139% ( 3) 00:07:03.339 15627.815 - 15728.640: 98.8310% ( 3) 00:07:03.339 15728.640 - 15829.465: 98.8481% ( 3) 00:07:03.339 15829.465 - 15930.289: 98.8652% ( 3) 00:07:03.339 15930.289 - 16031.114: 98.9051% ( 7) 00:07:03.339 16031.114 - 16131.938: 98.9336% ( 5) 00:07:03.339 16131.938 - 16232.763: 98.9678% ( 6) 00:07:03.339 16232.763 - 16333.588: 98.9964% ( 5) 00:07:03.339 16333.588 - 16434.412: 99.0192% ( 4) 00:07:03.339 16434.412 - 16535.237: 99.0363% ( 3) 00:07:03.339 16535.237 - 16636.062: 99.0648% ( 5) 00:07:03.339 16636.062 - 16736.886: 99.0876% ( 4) 00:07:03.339 16736.886 - 16837.711: 99.1104% ( 4) 00:07:03.339 16837.711 - 16938.535: 99.1389% ( 5) 00:07:03.339 16938.535 - 17039.360: 99.1617% ( 4) 00:07:03.339 17039.360 - 17140.185: 99.1902% ( 5) 00:07:03.339 17140.185 - 17241.009: 99.2073% ( 3) 00:07:03.339 17241.009 - 17341.834: 99.2302% ( 4) 00:07:03.339 17341.834 - 17442.658: 99.2530% ( 4) 00:07:03.339 17442.658 - 17543.483: 99.2701% ( 3) 00:07:03.339 30650.683 - 30852.332: 99.3043% ( 6) 00:07:03.339 30852.332 - 31053.982: 99.3442% ( 7) 00:07:03.339 31053.982 - 31255.631: 99.3898% ( 8) 00:07:03.339 31255.631 - 31457.280: 99.4297% ( 7) 00:07:03.339 31457.280 - 31658.929: 99.4811% ( 9) 00:07:03.339 31658.929 - 31860.578: 99.5267% ( 8) 00:07:03.339 31860.578 - 32062.228: 99.5780% ( 9) 00:07:03.339 32062.228 - 32263.877: 99.6179% ( 7) 00:07:03.339 32263.877 - 32465.526: 99.6350% ( 3) 00:07:03.339 38111.705 - 38313.354: 99.6407% ( 1) 00:07:03.339 38313.354 - 38515.003: 99.6693% ( 5) 00:07:03.339 38515.003 - 38716.652: 99.7035% ( 6) 00:07:03.339 38716.652 - 38918.302: 99.7377% ( 6) 00:07:03.339 38918.302 - 39119.951: 99.7776% ( 7) 00:07:03.339 39119.951 - 39321.600: 99.8061% ( 5) 00:07:03.339 39321.600 - 39523.249: 99.8517% ( 8) 00:07:03.339 39523.249 - 39724.898: 99.8745% ( 4) 00:07:03.339 39724.898 - 39926.548: 99.9202% ( 8) 00:07:03.339 39926.548 - 40128.197: 99.9658% ( 8) 00:07:03.339 40128.197 - 40329.846: 99.9943% ( 5) 00:07:03.339 40329.846 - 40531.495: 100.0000% ( 1) 00:07:03.339 00:07:03.339 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:03.339 ============================================================================== 00:07:03.339 Range in us Cumulative IO count 00:07:03.339 5646.178 - 5671.385: 0.0057% ( 1) 00:07:03.339 5671.385 - 5696.591: 0.0342% ( 5) 00:07:03.339 5696.591 - 5721.797: 0.1198% ( 15) 00:07:03.339 5721.797 - 5747.003: 0.2224% ( 18) 00:07:03.339 5747.003 - 5772.209: 0.4619% ( 42) 00:07:03.339 5772.209 - 5797.415: 0.8782% ( 73) 00:07:03.339 5797.415 - 5822.622: 1.4998% ( 109) 00:07:03.339 5822.622 - 5847.828: 2.5091% ( 177) 00:07:03.339 5847.828 - 5873.034: 3.7010% ( 209) 00:07:03.339 5873.034 - 5898.240: 5.2464% ( 271) 00:07:03.339 5898.240 - 5923.446: 6.9913% ( 306) 00:07:03.339 5923.446 - 5948.652: 8.7990% ( 317) 00:07:03.339 5948.652 
- 5973.858: 10.7265% ( 338) 00:07:03.339 5973.858 - 5999.065: 12.6882% ( 344) 00:07:03.339 5999.065 - 6024.271: 14.7183% ( 356) 00:07:03.339 6024.271 - 6049.477: 16.8168% ( 368) 00:07:03.339 6049.477 - 6074.683: 18.8241% ( 352) 00:07:03.339 6074.683 - 6099.889: 20.9170% ( 367) 00:07:03.340 6099.889 - 6125.095: 23.1182% ( 386) 00:07:03.340 6125.095 - 6150.302: 25.3878% ( 398) 00:07:03.340 6150.302 - 6175.508: 27.6175% ( 391) 00:07:03.340 6175.508 - 6200.714: 29.8073% ( 384) 00:07:03.340 6200.714 - 6225.920: 31.9343% ( 373) 00:07:03.340 6225.920 - 6251.126: 34.1412% ( 387) 00:07:03.340 6251.126 - 6276.332: 36.3310% ( 384) 00:07:03.340 6276.332 - 6301.538: 38.5721% ( 393) 00:07:03.340 6301.538 - 6326.745: 40.7048% ( 374) 00:07:03.340 6326.745 - 6351.951: 42.8547% ( 377) 00:07:03.340 6351.951 - 6377.157: 45.0502% ( 385) 00:07:03.340 6377.157 - 6402.363: 47.2172% ( 380) 00:07:03.340 6402.363 - 6427.569: 49.3100% ( 367) 00:07:03.340 6427.569 - 6452.775: 51.4941% ( 383) 00:07:03.340 6452.775 - 6503.188: 55.8223% ( 759) 00:07:03.340 6503.188 - 6553.600: 60.1962% ( 767) 00:07:03.340 6553.600 - 6604.012: 64.3362% ( 726) 00:07:03.340 6604.012 - 6654.425: 67.9688% ( 637) 00:07:03.340 6654.425 - 6704.837: 70.7858% ( 494) 00:07:03.340 6704.837 - 6755.249: 72.5137% ( 303) 00:07:03.340 6755.249 - 6805.662: 73.7169% ( 211) 00:07:03.340 6805.662 - 6856.074: 74.5951% ( 154) 00:07:03.340 6856.074 - 6906.486: 75.2680% ( 118) 00:07:03.340 6906.486 - 6956.898: 75.8212% ( 97) 00:07:03.340 6956.898 - 7007.311: 76.2203% ( 70) 00:07:03.340 7007.311 - 7057.723: 76.6195% ( 70) 00:07:03.340 7057.723 - 7108.135: 77.0130% ( 69) 00:07:03.340 7108.135 - 7158.548: 77.4692% ( 80) 00:07:03.340 7158.548 - 7208.960: 77.8798% ( 72) 00:07:03.340 7208.960 - 7259.372: 78.3246% ( 78) 00:07:03.340 7259.372 - 7309.785: 78.7466% ( 74) 00:07:03.340 7309.785 - 7360.197: 79.1458% ( 70) 00:07:03.340 7360.197 - 7410.609: 79.5449% ( 70) 00:07:03.340 7410.609 - 7461.022: 79.9213% ( 66) 00:07:03.340 7461.022 - 7511.434: 80.2692% ( 61) 00:07:03.340 7511.434 - 7561.846: 80.6284% ( 63) 00:07:03.340 7561.846 - 7612.258: 80.9250% ( 52) 00:07:03.340 7612.258 - 7662.671: 81.2614% ( 59) 00:07:03.340 7662.671 - 7713.083: 81.5351% ( 48) 00:07:03.340 7713.083 - 7763.495: 81.7974% ( 46) 00:07:03.340 7763.495 - 7813.908: 82.0712% ( 48) 00:07:03.340 7813.908 - 7864.320: 82.3506% ( 49) 00:07:03.340 7864.320 - 7914.732: 82.6357% ( 50) 00:07:03.340 7914.732 - 7965.145: 82.9266% ( 51) 00:07:03.340 7965.145 - 8015.557: 83.1775% ( 44) 00:07:03.340 8015.557 - 8065.969: 83.3942% ( 38) 00:07:03.340 8065.969 - 8116.382: 83.6052% ( 37) 00:07:03.340 8116.382 - 8166.794: 83.7876% ( 32) 00:07:03.340 8166.794 - 8217.206: 83.9872% ( 35) 00:07:03.340 8217.206 - 8267.618: 84.1982% ( 37) 00:07:03.340 8267.618 - 8318.031: 84.4149% ( 38) 00:07:03.340 8318.031 - 8368.443: 84.6373% ( 39) 00:07:03.340 8368.443 - 8418.855: 84.8654% ( 40) 00:07:03.340 8418.855 - 8469.268: 85.0878% ( 39) 00:07:03.340 8469.268 - 8519.680: 85.2931% ( 36) 00:07:03.340 8519.680 - 8570.092: 85.5098% ( 38) 00:07:03.340 8570.092 - 8620.505: 85.6752% ( 29) 00:07:03.340 8620.505 - 8670.917: 85.8406% ( 29) 00:07:03.340 8670.917 - 8721.329: 85.9888% ( 26) 00:07:03.340 8721.329 - 8771.742: 86.0915% ( 18) 00:07:03.340 8771.742 - 8822.154: 86.2169% ( 22) 00:07:03.340 8822.154 - 8872.566: 86.3082% ( 16) 00:07:03.340 8872.566 - 8922.978: 86.4165% ( 19) 00:07:03.340 8922.978 - 8973.391: 86.5477% ( 23) 00:07:03.340 8973.391 - 9023.803: 86.6902% ( 25) 00:07:03.340 9023.803 - 9074.215: 86.8328% ( 25) 00:07:03.340 
9074.215 - 9124.628: 86.9697% ( 24) 00:07:03.340 9124.628 - 9175.040: 87.1008% ( 23) 00:07:03.340 9175.040 - 9225.452: 87.2548% ( 27) 00:07:03.340 9225.452 - 9275.865: 87.4601% ( 36) 00:07:03.340 9275.865 - 9326.277: 87.6426% ( 32) 00:07:03.340 9326.277 - 9376.689: 87.7965% ( 27) 00:07:03.340 9376.689 - 9427.102: 87.9676% ( 30) 00:07:03.340 9427.102 - 9477.514: 88.1216% ( 27) 00:07:03.340 9477.514 - 9527.926: 88.2812% ( 28) 00:07:03.340 9527.926 - 9578.338: 88.4979% ( 38) 00:07:03.340 9578.338 - 9628.751: 88.6747% ( 31) 00:07:03.340 9628.751 - 9679.163: 88.8287% ( 27) 00:07:03.340 9679.163 - 9729.575: 89.0112% ( 32) 00:07:03.340 9729.575 - 9779.988: 89.1708% ( 28) 00:07:03.340 9779.988 - 9830.400: 89.3305% ( 28) 00:07:03.340 9830.400 - 9880.812: 89.4845% ( 27) 00:07:03.340 9880.812 - 9931.225: 89.6442% ( 28) 00:07:03.340 9931.225 - 9981.637: 89.7981% ( 27) 00:07:03.340 9981.637 - 10032.049: 89.9635% ( 29) 00:07:03.340 10032.049 - 10082.462: 90.1232% ( 28) 00:07:03.340 10082.462 - 10132.874: 90.2885% ( 29) 00:07:03.340 10132.874 - 10183.286: 90.4311% ( 25) 00:07:03.340 10183.286 - 10233.698: 90.6079% ( 31) 00:07:03.340 10233.698 - 10284.111: 90.7676% ( 28) 00:07:03.340 10284.111 - 10334.523: 90.9215% ( 27) 00:07:03.340 10334.523 - 10384.935: 91.0641% ( 25) 00:07:03.340 10384.935 - 10435.348: 91.2067% ( 25) 00:07:03.340 10435.348 - 10485.760: 91.3720% ( 29) 00:07:03.340 10485.760 - 10536.172: 91.5545% ( 32) 00:07:03.340 10536.172 - 10586.585: 91.7256% ( 30) 00:07:03.340 10586.585 - 10636.997: 91.9024% ( 31) 00:07:03.340 10636.997 - 10687.409: 92.0392% ( 24) 00:07:03.340 10687.409 - 10737.822: 92.1875% ( 26) 00:07:03.340 10737.822 - 10788.234: 92.3301% ( 25) 00:07:03.340 10788.234 - 10838.646: 92.4840% ( 27) 00:07:03.340 10838.646 - 10889.058: 92.6494% ( 29) 00:07:03.340 10889.058 - 10939.471: 92.8148% ( 29) 00:07:03.340 10939.471 - 10989.883: 92.9745% ( 28) 00:07:03.340 10989.883 - 11040.295: 93.1455% ( 30) 00:07:03.340 11040.295 - 11090.708: 93.2995% ( 27) 00:07:03.340 11090.708 - 11141.120: 93.4535% ( 27) 00:07:03.340 11141.120 - 11191.532: 93.5903% ( 24) 00:07:03.340 11191.532 - 11241.945: 93.7329% ( 25) 00:07:03.340 11241.945 - 11292.357: 93.8869% ( 27) 00:07:03.340 11292.357 - 11342.769: 93.9895% ( 18) 00:07:03.340 11342.769 - 11393.182: 94.1036% ( 20) 00:07:03.340 11393.182 - 11443.594: 94.2005% ( 17) 00:07:03.340 11443.594 - 11494.006: 94.3031% ( 18) 00:07:03.340 11494.006 - 11544.418: 94.4229% ( 21) 00:07:03.340 11544.418 - 11594.831: 94.5255% ( 18) 00:07:03.340 11594.831 - 11645.243: 94.6282% ( 18) 00:07:03.340 11645.243 - 11695.655: 94.7479% ( 21) 00:07:03.340 11695.655 - 11746.068: 94.8506% ( 18) 00:07:03.340 11746.068 - 11796.480: 94.9418% ( 16) 00:07:03.340 11796.480 - 11846.892: 95.0160% ( 13) 00:07:03.340 11846.892 - 11897.305: 95.0901% ( 13) 00:07:03.340 11897.305 - 11947.717: 95.1699% ( 14) 00:07:03.340 11947.717 - 11998.129: 95.2213% ( 9) 00:07:03.340 11998.129 - 12048.542: 95.2498% ( 5) 00:07:03.340 12048.542 - 12098.954: 95.2783% ( 5) 00:07:03.340 12098.954 - 12149.366: 95.3125% ( 6) 00:07:03.340 12149.366 - 12199.778: 95.3353% ( 4) 00:07:03.340 12199.778 - 12250.191: 95.3638% ( 5) 00:07:03.340 12250.191 - 12300.603: 95.3923% ( 5) 00:07:03.340 12300.603 - 12351.015: 95.4380% ( 8) 00:07:03.340 12351.015 - 12401.428: 95.4836% ( 8) 00:07:03.340 12401.428 - 12451.840: 95.5235% ( 7) 00:07:03.340 12451.840 - 12502.252: 95.5634% ( 7) 00:07:03.340 12502.252 - 12552.665: 95.6147% ( 9) 00:07:03.340 12552.665 - 12603.077: 95.7231% ( 19) 00:07:03.340 12603.077 - 12653.489: 95.8029% ( 
14) 00:07:03.340 12653.489 - 12703.902: 95.8656% ( 11) 00:07:03.340 12703.902 - 12754.314: 95.9398% ( 13) 00:07:03.340 12754.314 - 12804.726: 95.9911% ( 9) 00:07:03.340 12804.726 - 12855.138: 96.0481% ( 10) 00:07:03.340 12855.138 - 12905.551: 96.1052% ( 10) 00:07:03.340 12905.551 - 13006.375: 96.2363% ( 23) 00:07:03.340 13006.375 - 13107.200: 96.4074% ( 30) 00:07:03.340 13107.200 - 13208.025: 96.5956% ( 33) 00:07:03.340 13208.025 - 13308.849: 96.7952% ( 35) 00:07:03.340 13308.849 - 13409.674: 96.9833% ( 33) 00:07:03.340 13409.674 - 13510.498: 97.1886% ( 36) 00:07:03.340 13510.498 - 13611.323: 97.3540% ( 29) 00:07:03.340 13611.323 - 13712.148: 97.5479% ( 34) 00:07:03.340 13712.148 - 13812.972: 97.6905% ( 25) 00:07:03.340 13812.972 - 13913.797: 97.8216% ( 23) 00:07:03.340 13913.797 - 14014.622: 97.9414% ( 21) 00:07:03.340 14014.622 - 14115.446: 98.0269% ( 15) 00:07:03.340 14115.446 - 14216.271: 98.1239% ( 17) 00:07:03.340 14216.271 - 14317.095: 98.1923% ( 12) 00:07:03.340 14317.095 - 14417.920: 98.2835% ( 16) 00:07:03.340 14417.920 - 14518.745: 98.3748% ( 16) 00:07:03.340 14518.745 - 14619.569: 98.4603% ( 15) 00:07:03.340 14619.569 - 14720.394: 98.5401% ( 14) 00:07:03.340 14720.394 - 14821.218: 98.5972% ( 10) 00:07:03.340 14821.218 - 14922.043: 98.6143% ( 3) 00:07:03.340 14922.043 - 15022.868: 98.6200% ( 1) 00:07:03.341 15022.868 - 15123.692: 98.6371% ( 3) 00:07:03.341 15123.692 - 15224.517: 98.6542% ( 3) 00:07:03.341 15224.517 - 15325.342: 98.6713% ( 3) 00:07:03.341 15325.342 - 15426.166: 98.6941% ( 4) 00:07:03.341 15426.166 - 15526.991: 98.7055% ( 2) 00:07:03.341 15526.991 - 15627.815: 98.7226% ( 3) 00:07:03.341 15627.815 - 15728.640: 98.7397% ( 3) 00:07:03.341 15728.640 - 15829.465: 98.7625% ( 4) 00:07:03.341 15829.465 - 15930.289: 98.7740% ( 2) 00:07:03.341 15930.289 - 16031.114: 98.7911% ( 3) 00:07:03.341 16031.114 - 16131.938: 98.8367% ( 8) 00:07:03.341 16131.938 - 16232.763: 98.8880% ( 9) 00:07:03.341 16232.763 - 16333.588: 98.9336% ( 8) 00:07:03.341 16333.588 - 16434.412: 98.9792% ( 8) 00:07:03.341 16434.412 - 16535.237: 99.0135% ( 6) 00:07:03.341 16535.237 - 16636.062: 99.0648% ( 9) 00:07:03.341 16636.062 - 16736.886: 99.0990% ( 6) 00:07:03.341 16736.886 - 16837.711: 99.1275% ( 5) 00:07:03.341 16837.711 - 16938.535: 99.1560% ( 5) 00:07:03.341 16938.535 - 17039.360: 99.1845% ( 5) 00:07:03.341 17039.360 - 17140.185: 99.2073% ( 4) 00:07:03.341 17140.185 - 17241.009: 99.2359% ( 5) 00:07:03.341 17241.009 - 17341.834: 99.2587% ( 4) 00:07:03.341 17341.834 - 17442.658: 99.2701% ( 2) 00:07:03.341 29037.489 - 29239.138: 99.3043% ( 6) 00:07:03.341 29239.138 - 29440.788: 99.3499% ( 8) 00:07:03.341 29440.788 - 29642.437: 99.3955% ( 8) 00:07:03.341 29642.437 - 29844.086: 99.4469% ( 9) 00:07:03.341 29844.086 - 30045.735: 99.4925% ( 8) 00:07:03.341 30045.735 - 30247.385: 99.5438% ( 9) 00:07:03.341 30247.385 - 30449.034: 99.5894% ( 8) 00:07:03.341 30449.034 - 30650.683: 99.6350% ( 8) 00:07:03.341 36901.809 - 37103.458: 99.6407% ( 1) 00:07:03.341 37103.458 - 37305.108: 99.6750% ( 6) 00:07:03.341 37305.108 - 37506.757: 99.7149% ( 7) 00:07:03.341 37506.757 - 37708.406: 99.7548% ( 7) 00:07:03.341 37708.406 - 37910.055: 99.7947% ( 7) 00:07:03.341 37910.055 - 38111.705: 99.8346% ( 7) 00:07:03.341 38111.705 - 38313.354: 99.8688% ( 6) 00:07:03.341 38313.354 - 38515.003: 99.9088% ( 7) 00:07:03.341 38515.003 - 38716.652: 99.9487% ( 7) 00:07:03.341 38716.652 - 38918.302: 99.9886% ( 7) 00:07:03.341 38918.302 - 39119.951: 100.0000% ( 2) 00:07:03.341 00:07:03.341 Latency histogram for PCIE (0000:00:13.0) NSID 1 
from core 0: 00:07:03.341 ============================================================================== 00:07:03.341 Range in us Cumulative IO count 00:07:03.341 5646.178 - 5671.385: 0.0228% ( 4) 00:07:03.341 5671.385 - 5696.591: 0.0456% ( 4) 00:07:03.341 5696.591 - 5721.797: 0.0798% ( 6) 00:07:03.341 5721.797 - 5747.003: 0.1825% ( 18) 00:07:03.341 5747.003 - 5772.209: 0.3764% ( 34) 00:07:03.341 5772.209 - 5797.415: 0.7755% ( 70) 00:07:03.341 5797.415 - 5822.622: 1.4656% ( 121) 00:07:03.341 5822.622 - 5847.828: 2.5719% ( 194) 00:07:03.341 5847.828 - 5873.034: 3.8834% ( 230) 00:07:03.341 5873.034 - 5898.240: 5.4802% ( 280) 00:07:03.341 5898.240 - 5923.446: 7.0769% ( 280) 00:07:03.341 5923.446 - 5948.652: 8.7762% ( 298) 00:07:03.341 5948.652 - 5973.858: 10.7208% ( 341) 00:07:03.341 5973.858 - 5999.065: 12.7110% ( 349) 00:07:03.341 5999.065 - 6024.271: 14.7639% ( 360) 00:07:03.341 6024.271 - 6049.477: 16.8682% ( 369) 00:07:03.341 6049.477 - 6074.683: 19.0351% ( 380) 00:07:03.341 6074.683 - 6099.889: 21.1109% ( 364) 00:07:03.341 6099.889 - 6125.095: 23.3805% ( 398) 00:07:03.341 6125.095 - 6150.302: 25.5760% ( 385) 00:07:03.341 6150.302 - 6175.508: 27.7657% ( 384) 00:07:03.341 6175.508 - 6200.714: 29.9441% ( 382) 00:07:03.341 6200.714 - 6225.920: 32.0769% ( 374) 00:07:03.341 6225.920 - 6251.126: 34.2153% ( 375) 00:07:03.341 6251.126 - 6276.332: 36.3823% ( 380) 00:07:03.341 6276.332 - 6301.538: 38.5664% ( 383) 00:07:03.341 6301.538 - 6326.745: 40.7448% ( 382) 00:07:03.341 6326.745 - 6351.951: 42.9516% ( 387) 00:07:03.341 6351.951 - 6377.157: 45.1129% ( 379) 00:07:03.341 6377.157 - 6402.363: 47.3825% ( 398) 00:07:03.341 6402.363 - 6427.569: 49.6179% ( 392) 00:07:03.341 6427.569 - 6452.775: 51.8077% ( 384) 00:07:03.341 6452.775 - 6503.188: 56.2443% ( 778) 00:07:03.341 6503.188 - 6553.600: 60.5839% ( 761) 00:07:03.341 6553.600 - 6604.012: 64.7012% ( 722) 00:07:03.341 6604.012 - 6654.425: 68.3736% ( 644) 00:07:03.341 6654.425 - 6704.837: 71.1166% ( 481) 00:07:03.341 6704.837 - 6755.249: 72.8844% ( 310) 00:07:03.341 6755.249 - 6805.662: 74.0420% ( 203) 00:07:03.341 6805.662 - 6856.074: 74.9145% ( 153) 00:07:03.341 6856.074 - 6906.486: 75.5531% ( 112) 00:07:03.341 6906.486 - 6956.898: 76.1519% ( 105) 00:07:03.341 6956.898 - 7007.311: 76.6138% ( 81) 00:07:03.341 7007.311 - 7057.723: 77.0586% ( 78) 00:07:03.341 7057.723 - 7108.135: 77.4692% ( 72) 00:07:03.341 7108.135 - 7158.548: 77.9425% ( 83) 00:07:03.341 7158.548 - 7208.960: 78.3873% ( 78) 00:07:03.341 7208.960 - 7259.372: 78.7466% ( 63) 00:07:03.341 7259.372 - 7309.785: 79.1286% ( 67) 00:07:03.341 7309.785 - 7360.197: 79.4879% ( 63) 00:07:03.341 7360.197 - 7410.609: 79.9156% ( 75) 00:07:03.341 7410.609 - 7461.022: 80.2977% ( 67) 00:07:03.341 7461.022 - 7511.434: 80.6569% ( 63) 00:07:03.341 7511.434 - 7561.846: 80.9991% ( 60) 00:07:03.341 7561.846 - 7612.258: 81.3355% ( 59) 00:07:03.341 7612.258 - 7662.671: 81.6549% ( 56) 00:07:03.341 7662.671 - 7713.083: 81.9457% ( 51) 00:07:03.341 7713.083 - 7763.495: 82.2137% ( 47) 00:07:03.341 7763.495 - 7813.908: 82.4532% ( 42) 00:07:03.341 7813.908 - 7864.320: 82.6813% ( 40) 00:07:03.341 7864.320 - 7914.732: 82.8581% ( 31) 00:07:03.341 7914.732 - 7965.145: 83.0235% ( 29) 00:07:03.341 7965.145 - 8015.557: 83.1661% ( 25) 00:07:03.341 8015.557 - 8065.969: 83.3029% ( 24) 00:07:03.341 8065.969 - 8116.382: 83.4626% ( 28) 00:07:03.341 8116.382 - 8166.794: 83.5995% ( 24) 00:07:03.341 8166.794 - 8217.206: 83.7477% ( 26) 00:07:03.341 8217.206 - 8267.618: 83.8732% ( 22) 00:07:03.341 8267.618 - 8318.031: 84.0100% ( 24) 
00:07:03.341 8318.031 - 8368.443: 84.1526% ( 25) 00:07:03.341 8368.443 - 8418.855: 84.3465% ( 34) 00:07:03.341 8418.855 - 8469.268: 84.5461% ( 35) 00:07:03.341 8469.268 - 8519.680: 84.7286% ( 32) 00:07:03.341 8519.680 - 8570.092: 84.9167% ( 33) 00:07:03.341 8570.092 - 8620.505: 85.1106% ( 34) 00:07:03.341 8620.505 - 8670.917: 85.2760% ( 29) 00:07:03.341 8670.917 - 8721.329: 85.4756% ( 35) 00:07:03.341 8721.329 - 8771.742: 85.6980% ( 39) 00:07:03.341 8771.742 - 8822.154: 85.9204% ( 39) 00:07:03.341 8822.154 - 8872.566: 86.1257% ( 36) 00:07:03.341 8872.566 - 8922.978: 86.3310% ( 36) 00:07:03.341 8922.978 - 8973.391: 86.5192% ( 33) 00:07:03.341 8973.391 - 9023.803: 86.7016% ( 32) 00:07:03.341 9023.803 - 9074.215: 86.9012% ( 35) 00:07:03.341 9074.215 - 9124.628: 87.1065% ( 36) 00:07:03.341 9124.628 - 9175.040: 87.3232% ( 38) 00:07:03.341 9175.040 - 9225.452: 87.5228% ( 35) 00:07:03.341 9225.452 - 9275.865: 87.7110% ( 33) 00:07:03.341 9275.865 - 9326.277: 87.9163% ( 36) 00:07:03.341 9326.277 - 9376.689: 88.1273% ( 37) 00:07:03.341 9376.689 - 9427.102: 88.3212% ( 34) 00:07:03.341 9427.102 - 9477.514: 88.5094% ( 33) 00:07:03.341 9477.514 - 9527.926: 88.6747% ( 29) 00:07:03.341 9527.926 - 9578.338: 88.8116% ( 24) 00:07:03.341 9578.338 - 9628.751: 88.9370% ( 22) 00:07:03.341 9628.751 - 9679.163: 89.0568% ( 21) 00:07:03.341 9679.163 - 9729.575: 89.1766% ( 21) 00:07:03.341 9729.575 - 9779.988: 89.2792% ( 18) 00:07:03.341 9779.988 - 9830.400: 89.3932% ( 20) 00:07:03.341 9830.400 - 9880.812: 89.5244% ( 23) 00:07:03.341 9880.812 - 9931.225: 89.6613% ( 24) 00:07:03.341 9931.225 - 9981.637: 89.7867% ( 22) 00:07:03.341 9981.637 - 10032.049: 89.9350% ( 26) 00:07:03.341 10032.049 - 10082.462: 90.0719% ( 24) 00:07:03.341 10082.462 - 10132.874: 90.2144% ( 25) 00:07:03.341 10132.874 - 10183.286: 90.3456% ( 23) 00:07:03.341 10183.286 - 10233.698: 90.4767% ( 23) 00:07:03.341 10233.698 - 10284.111: 90.5908% ( 20) 00:07:03.341 10284.111 - 10334.523: 90.7333% ( 25) 00:07:03.341 10334.523 - 10384.935: 90.8816% ( 26) 00:07:03.341 10384.935 - 10435.348: 91.0869% ( 36) 00:07:03.341 10435.348 - 10485.760: 91.2922% ( 36) 00:07:03.341 10485.760 - 10536.172: 91.4747% ( 32) 00:07:03.341 10536.172 - 10586.585: 91.6515% ( 31) 00:07:03.341 10586.585 - 10636.997: 91.8282% ( 31) 00:07:03.341 10636.997 - 10687.409: 91.9936% ( 29) 00:07:03.341 10687.409 - 10737.822: 92.1818% ( 33) 00:07:03.341 10737.822 - 10788.234: 92.3358% ( 27) 00:07:03.341 10788.234 - 10838.646: 92.4954% ( 28) 00:07:03.341 10838.646 - 10889.058: 92.6437% ( 26) 00:07:03.341 10889.058 - 10939.471: 92.8034% ( 28) 00:07:03.341 10939.471 - 10989.883: 92.9688% ( 29) 00:07:03.341 10989.883 - 11040.295: 93.1113% ( 25) 00:07:03.341 11040.295 - 11090.708: 93.2653% ( 27) 00:07:03.341 11090.708 - 11141.120: 93.4364% ( 30) 00:07:03.341 11141.120 - 11191.532: 93.5732% ( 24) 00:07:03.341 11191.532 - 11241.945: 93.7158% ( 25) 00:07:03.342 11241.945 - 11292.357: 93.8355% ( 21) 00:07:03.342 11292.357 - 11342.769: 93.9667% ( 23) 00:07:03.342 11342.769 - 11393.182: 94.0865% ( 21) 00:07:03.342 11393.182 - 11443.594: 94.2176% ( 23) 00:07:03.342 11443.594 - 11494.006: 94.3431% ( 22) 00:07:03.342 11494.006 - 11544.418: 94.4742% ( 23) 00:07:03.342 11544.418 - 11594.831: 94.5712% ( 17) 00:07:03.342 11594.831 - 11645.243: 94.6681% ( 17) 00:07:03.342 11645.243 - 11695.655: 94.7594% ( 16) 00:07:03.342 11695.655 - 11746.068: 94.8335% ( 13) 00:07:03.342 11746.068 - 11796.480: 94.8962% ( 11) 00:07:03.342 11796.480 - 11846.892: 94.9475% ( 9) 00:07:03.342 11846.892 - 11897.305: 95.0331% ( 15) 
00:07:03.342 11897.305 - 11947.717: 95.1414% ( 19) 00:07:03.342 11947.717 - 11998.129: 95.2384% ( 17) 00:07:03.342 11998.129 - 12048.542: 95.3182% ( 14) 00:07:03.342 12048.542 - 12098.954: 95.3923% ( 13) 00:07:03.342 12098.954 - 12149.366: 95.4608% ( 12) 00:07:03.342 12149.366 - 12199.778: 95.5292% ( 12) 00:07:03.342 12199.778 - 12250.191: 95.5862% ( 10) 00:07:03.342 12250.191 - 12300.603: 95.6375% ( 9) 00:07:03.342 12300.603 - 12351.015: 95.7060% ( 12) 00:07:03.342 12351.015 - 12401.428: 95.7858% ( 14) 00:07:03.342 12401.428 - 12451.840: 95.8428% ( 10) 00:07:03.342 12451.840 - 12502.252: 95.8999% ( 10) 00:07:03.342 12502.252 - 12552.665: 95.9797% ( 14) 00:07:03.342 12552.665 - 12603.077: 96.0538% ( 13) 00:07:03.342 12603.077 - 12653.489: 96.1337% ( 14) 00:07:03.342 12653.489 - 12703.902: 96.2192% ( 15) 00:07:03.342 12703.902 - 12754.314: 96.2762% ( 10) 00:07:03.342 12754.314 - 12804.726: 96.3276% ( 9) 00:07:03.342 12804.726 - 12855.138: 96.3846% ( 10) 00:07:03.342 12855.138 - 12905.551: 96.4245% ( 7) 00:07:03.342 12905.551 - 13006.375: 96.5328% ( 19) 00:07:03.342 13006.375 - 13107.200: 96.6526% ( 21) 00:07:03.342 13107.200 - 13208.025: 96.7895% ( 24) 00:07:03.342 13208.025 - 13308.849: 96.9491% ( 28) 00:07:03.342 13308.849 - 13409.674: 97.1031% ( 27) 00:07:03.342 13409.674 - 13510.498: 97.2571% ( 27) 00:07:03.342 13510.498 - 13611.323: 97.4110% ( 27) 00:07:03.342 13611.323 - 13712.148: 97.5650% ( 27) 00:07:03.342 13712.148 - 13812.972: 97.6905% ( 22) 00:07:03.342 13812.972 - 13913.797: 97.8273% ( 24) 00:07:03.342 13913.797 - 14014.622: 97.9528% ( 22) 00:07:03.342 14014.622 - 14115.446: 98.0383% ( 15) 00:07:03.342 14115.446 - 14216.271: 98.1182% ( 14) 00:07:03.342 14216.271 - 14317.095: 98.1923% ( 13) 00:07:03.342 14317.095 - 14417.920: 98.2664% ( 13) 00:07:03.342 14417.920 - 14518.745: 98.3463% ( 14) 00:07:03.342 14518.745 - 14619.569: 98.4375% ( 16) 00:07:03.342 14619.569 - 14720.394: 98.4831% ( 8) 00:07:03.342 14720.394 - 14821.218: 98.5002% ( 3) 00:07:03.342 14821.218 - 14922.043: 98.5173% ( 3) 00:07:03.342 14922.043 - 15022.868: 98.5344% ( 3) 00:07:03.342 15022.868 - 15123.692: 98.5401% ( 1) 00:07:03.342 15123.692 - 15224.517: 98.5516% ( 2) 00:07:03.342 15224.517 - 15325.342: 98.5858% ( 6) 00:07:03.342 15325.342 - 15426.166: 98.6257% ( 7) 00:07:03.342 15426.166 - 15526.991: 98.6485% ( 4) 00:07:03.342 15526.991 - 15627.815: 98.6770% ( 5) 00:07:03.342 15627.815 - 15728.640: 98.7169% ( 7) 00:07:03.342 15728.640 - 15829.465: 98.7511% ( 6) 00:07:03.342 15829.465 - 15930.289: 98.7911% ( 7) 00:07:03.342 15930.289 - 16031.114: 98.8253% ( 6) 00:07:03.342 16031.114 - 16131.938: 98.8538% ( 5) 00:07:03.342 16131.938 - 16232.763: 98.8994% ( 8) 00:07:03.342 16232.763 - 16333.588: 98.9393% ( 7) 00:07:03.342 16333.588 - 16434.412: 98.9792% ( 7) 00:07:03.342 16434.412 - 16535.237: 99.0192% ( 7) 00:07:03.342 16535.237 - 16636.062: 99.0477% ( 5) 00:07:03.342 16636.062 - 16736.886: 99.0933% ( 8) 00:07:03.342 16736.886 - 16837.711: 99.1275% ( 6) 00:07:03.342 16837.711 - 16938.535: 99.1674% ( 7) 00:07:03.342 16938.535 - 17039.360: 99.2016% ( 6) 00:07:03.342 17039.360 - 17140.185: 99.2359% ( 6) 00:07:03.342 17140.185 - 17241.009: 99.2473% ( 2) 00:07:03.342 17241.009 - 17341.834: 99.2587% ( 2) 00:07:03.342 17341.834 - 17442.658: 99.2701% ( 2) 00:07:03.342 28230.892 - 28432.542: 99.2758% ( 1) 00:07:03.342 28432.542 - 28634.191: 99.3043% ( 5) 00:07:03.342 28634.191 - 28835.840: 99.3442% ( 7) 00:07:03.342 28835.840 - 29037.489: 99.3898% ( 8) 00:07:03.342 29037.489 - 29239.138: 99.4297% ( 7) 00:07:03.342 
29239.138 - 29440.788: 99.4754% ( 8) 00:07:03.342 29440.788 - 29642.437: 99.5210% ( 8) 00:07:03.342 29642.437 - 29844.086: 99.5609% ( 7) 00:07:03.342 29844.086 - 30045.735: 99.6065% ( 8) 00:07:03.342 30045.735 - 30247.385: 99.6350% ( 5) 00:07:03.342 36498.511 - 36700.160: 99.6693% ( 6) 00:07:03.342 36700.160 - 36901.809: 99.7092% ( 7) 00:07:03.342 36901.809 - 37103.458: 99.7491% ( 7) 00:07:03.342 37103.458 - 37305.108: 99.7890% ( 7) 00:07:03.342 37305.108 - 37506.757: 99.8289% ( 7) 00:07:03.342 37506.757 - 37708.406: 99.8631% ( 6) 00:07:03.342 37708.406 - 37910.055: 99.9031% ( 7) 00:07:03.342 37910.055 - 38111.705: 99.9430% ( 7) 00:07:03.342 38111.705 - 38313.354: 99.9829% ( 7) 00:07:03.342 38313.354 - 38515.003: 100.0000% ( 3) 00:07:03.342 00:07:03.342 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:03.342 ============================================================================== 00:07:03.342 Range in us Cumulative IO count 00:07:03.342 5620.972 - 5646.178: 0.0342% ( 6) 00:07:03.342 5646.178 - 5671.385: 0.0456% ( 2) 00:07:03.342 5671.385 - 5696.591: 0.0798% ( 6) 00:07:03.342 5696.591 - 5721.797: 0.1426% ( 11) 00:07:03.342 5721.797 - 5747.003: 0.2395% ( 17) 00:07:03.342 5747.003 - 5772.209: 0.4904% ( 44) 00:07:03.342 5772.209 - 5797.415: 0.8269% ( 59) 00:07:03.342 5797.415 - 5822.622: 1.5055% ( 119) 00:07:03.342 5822.622 - 5847.828: 2.5319% ( 180) 00:07:03.342 5847.828 - 5873.034: 3.9062% ( 241) 00:07:03.342 5873.034 - 5898.240: 5.3832% ( 259) 00:07:03.342 5898.240 - 5923.446: 6.9400% ( 273) 00:07:03.342 5923.446 - 5948.652: 8.7078% ( 310) 00:07:03.342 5948.652 - 5973.858: 10.6809% ( 346) 00:07:03.342 5973.858 - 5999.065: 12.7623% ( 365) 00:07:03.342 5999.065 - 6024.271: 14.9122% ( 377) 00:07:03.342 6024.271 - 6049.477: 16.9879% ( 364) 00:07:03.342 6049.477 - 6074.683: 19.1606% ( 381) 00:07:03.342 6074.683 - 6099.889: 21.3390% ( 382) 00:07:03.342 6099.889 - 6125.095: 23.4204% ( 365) 00:07:03.342 6125.095 - 6150.302: 25.6045% ( 383) 00:07:03.342 6150.302 - 6175.508: 27.8171% ( 388) 00:07:03.342 6175.508 - 6200.714: 30.0240% ( 387) 00:07:03.342 6200.714 - 6225.920: 32.2251% ( 386) 00:07:03.342 6225.920 - 6251.126: 34.4092% ( 383) 00:07:03.342 6251.126 - 6276.332: 36.5762% ( 380) 00:07:03.342 6276.332 - 6301.538: 38.7318% ( 378) 00:07:03.342 6301.538 - 6326.745: 40.9158% ( 383) 00:07:03.342 6326.745 - 6351.951: 43.1512% ( 392) 00:07:03.342 6351.951 - 6377.157: 45.3182% ( 380) 00:07:03.342 6377.157 - 6402.363: 47.5137% ( 385) 00:07:03.342 6402.363 - 6427.569: 49.6807% ( 380) 00:07:03.342 6427.569 - 6452.775: 51.8362% ( 378) 00:07:03.342 6452.775 - 6503.188: 56.1930% ( 764) 00:07:03.342 6503.188 - 6553.600: 60.5554% ( 765) 00:07:03.342 6553.600 - 6604.012: 64.8209% ( 748) 00:07:03.342 6604.012 - 6654.425: 68.5846% ( 660) 00:07:03.342 6654.425 - 6704.837: 71.2819% ( 473) 00:07:03.342 6704.837 - 6755.249: 73.0782% ( 315) 00:07:03.342 6755.249 - 6805.662: 74.3499% ( 223) 00:07:03.342 6805.662 - 6856.074: 75.1369% ( 138) 00:07:03.342 6856.074 - 6906.486: 75.7413% ( 106) 00:07:03.342 6906.486 - 6956.898: 76.2660% ( 92) 00:07:03.342 6956.898 - 7007.311: 76.6880% ( 74) 00:07:03.342 7007.311 - 7057.723: 77.1556% ( 82) 00:07:03.342 7057.723 - 7108.135: 77.6004% ( 78) 00:07:03.342 7108.135 - 7158.548: 78.0338% ( 76) 00:07:03.342 7158.548 - 7208.960: 78.4158% ( 67) 00:07:03.342 7208.960 - 7259.372: 78.7808% ( 64) 00:07:03.342 7259.372 - 7309.785: 79.1515% ( 65) 00:07:03.342 7309.785 - 7360.197: 79.5164% ( 64) 00:07:03.342 7360.197 - 7410.609: 79.8928% ( 66) 00:07:03.342 7410.609 - 
7461.022: 80.2521% ( 63) 00:07:03.342 7461.022 - 7511.434: 80.5372% ( 50) 00:07:03.342 7511.434 - 7561.846: 80.8280% ( 51) 00:07:03.342 7561.846 - 7612.258: 81.1074% ( 49) 00:07:03.342 7612.258 - 7662.671: 81.3983% ( 51) 00:07:03.342 7662.671 - 7713.083: 81.7062% ( 54) 00:07:03.342 7713.083 - 7763.495: 81.9400% ( 41) 00:07:03.342 7763.495 - 7813.908: 82.1624% ( 39) 00:07:03.342 7813.908 - 7864.320: 82.4019% ( 42) 00:07:03.342 7864.320 - 7914.732: 82.5958% ( 34) 00:07:03.342 7914.732 - 7965.145: 82.7726% ( 31) 00:07:03.342 7965.145 - 8015.557: 82.9208% ( 26) 00:07:03.343 8015.557 - 8065.969: 83.1204% ( 35) 00:07:03.343 8065.969 - 8116.382: 83.2915% ( 30) 00:07:03.343 8116.382 - 8166.794: 83.4683% ( 31) 00:07:03.343 8166.794 - 8217.206: 83.6793% ( 37) 00:07:03.343 8217.206 - 8267.618: 83.9074% ( 40) 00:07:03.343 8267.618 - 8318.031: 84.1241% ( 38) 00:07:03.343 8318.031 - 8368.443: 84.3351% ( 37) 00:07:03.343 8368.443 - 8418.855: 84.5233% ( 33) 00:07:03.343 8418.855 - 8469.268: 84.7172% ( 34) 00:07:03.343 8469.268 - 8519.680: 84.9339% ( 38) 00:07:03.343 8519.680 - 8570.092: 85.1391% ( 36) 00:07:03.343 8570.092 - 8620.505: 85.2988% ( 28) 00:07:03.343 8620.505 - 8670.917: 85.4870% ( 33) 00:07:03.343 8670.917 - 8721.329: 85.6752% ( 33) 00:07:03.343 8721.329 - 8771.742: 85.8520% ( 31) 00:07:03.343 8771.742 - 8822.154: 86.0344% ( 32) 00:07:03.343 8822.154 - 8872.566: 86.2283% ( 34) 00:07:03.343 8872.566 - 8922.978: 86.4165% ( 33) 00:07:03.343 8922.978 - 8973.391: 86.5819% ( 29) 00:07:03.343 8973.391 - 9023.803: 86.7701% ( 33) 00:07:03.343 9023.803 - 9074.215: 86.9126% ( 25) 00:07:03.343 9074.215 - 9124.628: 87.0438% ( 23) 00:07:03.343 9124.628 - 9175.040: 87.1921% ( 26) 00:07:03.343 9175.040 - 9225.452: 87.3289% ( 24) 00:07:03.343 9225.452 - 9275.865: 87.5114% ( 32) 00:07:03.343 9275.865 - 9326.277: 87.6768% ( 29) 00:07:03.343 9326.277 - 9376.689: 87.8536% ( 31) 00:07:03.343 9376.689 - 9427.102: 88.0303% ( 31) 00:07:03.343 9427.102 - 9477.514: 88.2356% ( 36) 00:07:03.343 9477.514 - 9527.926: 88.4238% ( 33) 00:07:03.343 9527.926 - 9578.338: 88.6006% ( 31) 00:07:03.343 9578.338 - 9628.751: 88.7888% ( 33) 00:07:03.343 9628.751 - 9679.163: 88.9656% ( 31) 00:07:03.343 9679.163 - 9729.575: 89.1195% ( 27) 00:07:03.343 9729.575 - 9779.988: 89.2792% ( 28) 00:07:03.343 9779.988 - 9830.400: 89.4560% ( 31) 00:07:03.343 9830.400 - 9880.812: 89.5928% ( 24) 00:07:03.343 9880.812 - 9931.225: 89.7525% ( 28) 00:07:03.343 9931.225 - 9981.637: 89.9008% ( 26) 00:07:03.343 9981.637 - 10032.049: 90.0547% ( 27) 00:07:03.343 10032.049 - 10082.462: 90.1973% ( 25) 00:07:03.343 10082.462 - 10132.874: 90.3627% ( 29) 00:07:03.343 10132.874 - 10183.286: 90.5109% ( 26) 00:07:03.343 10183.286 - 10233.698: 90.6592% ( 26) 00:07:03.343 10233.698 - 10284.111: 90.7847% ( 22) 00:07:03.343 10284.111 - 10334.523: 90.9158% ( 23) 00:07:03.343 10334.523 - 10384.935: 91.0356% ( 21) 00:07:03.343 10384.935 - 10435.348: 91.2067% ( 30) 00:07:03.343 10435.348 - 10485.760: 91.3435% ( 24) 00:07:03.343 10485.760 - 10536.172: 91.5146% ( 30) 00:07:03.343 10536.172 - 10586.585: 91.6629% ( 26) 00:07:03.343 10586.585 - 10636.997: 91.8225% ( 28) 00:07:03.343 10636.997 - 10687.409: 91.9936% ( 30) 00:07:03.343 10687.409 - 10737.822: 92.1818% ( 33) 00:07:03.343 10737.822 - 10788.234: 92.3700% ( 33) 00:07:03.343 10788.234 - 10838.646: 92.5639% ( 34) 00:07:03.343 10838.646 - 10889.058: 92.7235% ( 28) 00:07:03.343 10889.058 - 10939.471: 92.9117% ( 33) 00:07:03.343 10939.471 - 10989.883: 93.0999% ( 33) 00:07:03.343 10989.883 - 11040.295: 93.2824% ( 32) 
00:07:03.343 11040.295 - 11090.708: 93.4592% ( 31) 00:07:03.343 11090.708 - 11141.120: 93.6131% ( 27) 00:07:03.343 11141.120 - 11191.532: 93.7671% ( 27) 00:07:03.343 11191.532 - 11241.945: 93.9211% ( 27) 00:07:03.343 11241.945 - 11292.357: 94.0750% ( 27) 00:07:03.343 11292.357 - 11342.769: 94.2290% ( 27) 00:07:03.343 11342.769 - 11393.182: 94.3488% ( 21) 00:07:03.343 11393.182 - 11443.594: 94.4685% ( 21) 00:07:03.343 11443.594 - 11494.006: 94.5997% ( 23) 00:07:03.343 11494.006 - 11544.418: 94.7137% ( 20) 00:07:03.343 11544.418 - 11594.831: 94.8164% ( 18) 00:07:03.343 11594.831 - 11645.243: 94.8905% ( 13) 00:07:03.343 11645.243 - 11695.655: 94.9418% ( 9) 00:07:03.343 11695.655 - 11746.068: 94.9989% ( 10) 00:07:03.343 11746.068 - 11796.480: 95.0730% ( 13) 00:07:03.343 11796.480 - 11846.892: 95.1528% ( 14) 00:07:03.343 11846.892 - 11897.305: 95.2156% ( 11) 00:07:03.343 11897.305 - 11947.717: 95.2726% ( 10) 00:07:03.343 11947.717 - 11998.129: 95.3182% ( 8) 00:07:03.343 11998.129 - 12048.542: 95.3809% ( 11) 00:07:03.343 12048.542 - 12098.954: 95.4437% ( 11) 00:07:03.343 12098.954 - 12149.366: 95.5178% ( 13) 00:07:03.343 12149.366 - 12199.778: 95.5748% ( 10) 00:07:03.343 12199.778 - 12250.191: 95.6261% ( 9) 00:07:03.343 12250.191 - 12300.603: 95.6718% ( 8) 00:07:03.343 12300.603 - 12351.015: 95.7288% ( 10) 00:07:03.343 12351.015 - 12401.428: 95.7744% ( 8) 00:07:03.343 12401.428 - 12451.840: 95.8143% ( 7) 00:07:03.343 12451.840 - 12502.252: 95.8771% ( 11) 00:07:03.343 12502.252 - 12552.665: 95.9455% ( 12) 00:07:03.343 12552.665 - 12603.077: 96.0481% ( 18) 00:07:03.343 12603.077 - 12653.489: 96.1109% ( 11) 00:07:03.343 12653.489 - 12703.902: 96.1679% ( 10) 00:07:03.343 12703.902 - 12754.314: 96.2306% ( 11) 00:07:03.343 12754.314 - 12804.726: 96.2876% ( 10) 00:07:03.343 12804.726 - 12855.138: 96.3561% ( 12) 00:07:03.343 12855.138 - 12905.551: 96.4188% ( 11) 00:07:03.343 12905.551 - 13006.375: 96.5557% ( 24) 00:07:03.343 13006.375 - 13107.200: 96.7781% ( 39) 00:07:03.343 13107.200 - 13208.025: 96.9833% ( 36) 00:07:03.343 13208.025 - 13308.849: 97.2229% ( 42) 00:07:03.343 13308.849 - 13409.674: 97.4396% ( 38) 00:07:03.343 13409.674 - 13510.498: 97.6220% ( 32) 00:07:03.343 13510.498 - 13611.323: 97.7760% ( 27) 00:07:03.343 13611.323 - 13712.148: 97.9300% ( 27) 00:07:03.343 13712.148 - 13812.972: 98.0668% ( 24) 00:07:03.343 13812.972 - 13913.797: 98.1866% ( 21) 00:07:03.343 13913.797 - 14014.622: 98.2892% ( 18) 00:07:03.343 14014.622 - 14115.446: 98.3862% ( 17) 00:07:03.343 14115.446 - 14216.271: 98.4375% ( 9) 00:07:03.343 14216.271 - 14317.095: 98.4603% ( 4) 00:07:03.343 14317.095 - 14417.920: 98.4831% ( 4) 00:07:03.343 14417.920 - 14518.745: 98.5059% ( 4) 00:07:03.343 14518.745 - 14619.569: 98.5287% ( 4) 00:07:03.343 14619.569 - 14720.394: 98.5401% ( 2) 00:07:03.343 14821.218 - 14922.043: 98.5801% ( 7) 00:07:03.343 14922.043 - 15022.868: 98.5972% ( 3) 00:07:03.343 15022.868 - 15123.692: 98.6029% ( 1) 00:07:03.343 15123.692 - 15224.517: 98.6143% ( 2) 00:07:03.343 15224.517 - 15325.342: 98.6257% ( 2) 00:07:03.343 15325.342 - 15426.166: 98.6485% ( 4) 00:07:03.343 15426.166 - 15526.991: 98.6713% ( 4) 00:07:03.343 15526.991 - 15627.815: 98.6884% ( 3) 00:07:03.343 15627.815 - 15728.640: 98.7055% ( 3) 00:07:03.343 15728.640 - 15829.465: 98.7226% ( 3) 00:07:03.343 15829.465 - 15930.289: 98.7397% ( 3) 00:07:03.343 15930.289 - 16031.114: 98.7568% ( 3) 00:07:03.343 16031.114 - 16131.938: 98.7911% ( 6) 00:07:03.343 16131.938 - 16232.763: 98.8310% ( 7) 00:07:03.343 16232.763 - 16333.588: 98.8652% ( 6) 
00:07:03.343 16333.588 - 16434.412: 98.9051% ( 7) 00:07:03.343 16434.412 - 16535.237: 98.9279% ( 4) 00:07:03.344 16535.237 - 16636.062: 98.9621% ( 6) 00:07:03.344 16636.062 - 16736.886: 98.9964% ( 6) 00:07:03.344 16736.886 - 16837.711: 99.0363% ( 7) 00:07:03.344 16837.711 - 16938.535: 99.0762% ( 7) 00:07:03.344 16938.535 - 17039.360: 99.1104% ( 6) 00:07:03.344 17039.360 - 17140.185: 99.1389% ( 5) 00:07:03.344 17140.185 - 17241.009: 99.1674% ( 5) 00:07:03.344 17241.009 - 17341.834: 99.2016% ( 6) 00:07:03.344 17341.834 - 17442.658: 99.2302% ( 5) 00:07:03.344 17442.658 - 17543.483: 99.2587% ( 5) 00:07:03.344 17543.483 - 17644.308: 99.2701% ( 2) 00:07:03.344 27222.646 - 27424.295: 99.2872% ( 3) 00:07:03.344 27424.295 - 27625.945: 99.3157% ( 5) 00:07:03.344 27625.945 - 27827.594: 99.3442% ( 5) 00:07:03.344 27827.594 - 28029.243: 99.3784% ( 6) 00:07:03.344 28029.243 - 28230.892: 99.4126% ( 6) 00:07:03.344 28230.892 - 28432.542: 99.4469% ( 6) 00:07:03.344 28432.542 - 28634.191: 99.4754% ( 5) 00:07:03.344 28634.191 - 28835.840: 99.5039% ( 5) 00:07:03.344 28835.840 - 29037.489: 99.5381% ( 6) 00:07:03.344 29037.489 - 29239.138: 99.5723% ( 6) 00:07:03.344 29239.138 - 29440.788: 99.6065% ( 6) 00:07:03.344 29440.788 - 29642.437: 99.6350% ( 5) 00:07:03.344 36095.212 - 36296.862: 99.6693% ( 6) 00:07:03.344 36296.862 - 36498.511: 99.7035% ( 6) 00:07:03.344 36498.511 - 36700.160: 99.7434% ( 7) 00:07:03.344 36700.160 - 36901.809: 99.7833% ( 7) 00:07:03.344 36901.809 - 37103.458: 99.8232% ( 7) 00:07:03.344 37103.458 - 37305.108: 99.8631% ( 7) 00:07:03.344 37305.108 - 37506.757: 99.9031% ( 7) 00:07:03.344 37506.757 - 37708.406: 99.9373% ( 6) 00:07:03.344 37708.406 - 37910.055: 99.9772% ( 7) 00:07:03.344 37910.055 - 38111.705: 100.0000% ( 4) 00:07:03.344 00:07:03.344 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:03.344 ============================================================================== 00:07:03.344 Range in us Cumulative IO count 00:07:03.344 5646.178 - 5671.385: 0.0171% ( 3) 00:07:03.344 5671.385 - 5696.591: 0.0228% ( 1) 00:07:03.344 5696.591 - 5721.797: 0.0399% ( 3) 00:07:03.344 5721.797 - 5747.003: 0.1540% ( 20) 00:07:03.344 5747.003 - 5772.209: 0.3935% ( 42) 00:07:03.344 5772.209 - 5797.415: 0.7584% ( 64) 00:07:03.344 5797.415 - 5822.622: 1.4313% ( 118) 00:07:03.344 5822.622 - 5847.828: 2.4464% ( 178) 00:07:03.344 5847.828 - 5873.034: 3.7922% ( 236) 00:07:03.344 5873.034 - 5898.240: 5.2464% ( 255) 00:07:03.344 5898.240 - 5923.446: 6.8203% ( 276) 00:07:03.344 5923.446 - 5948.652: 8.5880% ( 310) 00:07:03.344 5948.652 - 5973.858: 10.5269% ( 340) 00:07:03.344 5973.858 - 5999.065: 12.5114% ( 348) 00:07:03.344 5999.065 - 6024.271: 14.6156% ( 369) 00:07:03.344 6024.271 - 6049.477: 16.7313% ( 371) 00:07:03.344 6049.477 - 6074.683: 18.9439% ( 388) 00:07:03.344 6074.683 - 6099.889: 21.0538% ( 370) 00:07:03.344 6099.889 - 6125.095: 23.2322% ( 382) 00:07:03.344 6125.095 - 6150.302: 25.4106% ( 382) 00:07:03.344 6150.302 - 6175.508: 27.5719% ( 379) 00:07:03.344 6175.508 - 6200.714: 29.7331% ( 379) 00:07:03.344 6200.714 - 6225.920: 31.9457% ( 388) 00:07:03.344 6225.920 - 6251.126: 34.1013% ( 378) 00:07:03.344 6251.126 - 6276.332: 36.3310% ( 391) 00:07:03.344 6276.332 - 6301.538: 38.5151% ( 383) 00:07:03.344 6301.538 - 6326.745: 40.6934% ( 382) 00:07:03.344 6326.745 - 6351.951: 42.9060% ( 388) 00:07:03.344 6351.951 - 6377.157: 45.1471% ( 393) 00:07:03.344 6377.157 - 6402.363: 47.3654% ( 389) 00:07:03.344 6402.363 - 6427.569: 49.5609% ( 385) 00:07:03.344 6427.569 - 6452.775: 51.7963% ( 392) 
00:07:03.344 6452.775 - 6503.188: 56.1417% ( 762) 00:07:03.344 6503.188 - 6553.600: 60.5155% ( 767) 00:07:03.344 6553.600 - 6604.012: 64.5928% ( 715) 00:07:03.344 6604.012 - 6654.425: 68.2539% ( 642) 00:07:03.344 6654.425 - 6704.837: 70.9569% ( 474) 00:07:03.344 6704.837 - 6755.249: 72.6905% ( 304) 00:07:03.344 6755.249 - 6805.662: 73.8766% ( 208) 00:07:03.344 6805.662 - 6856.074: 74.6521% ( 136) 00:07:03.344 6856.074 - 6906.486: 75.2794% ( 110) 00:07:03.344 6906.486 - 6956.898: 75.7470% ( 82) 00:07:03.344 6956.898 - 7007.311: 76.1462% ( 70) 00:07:03.344 7007.311 - 7057.723: 76.5283% ( 67) 00:07:03.344 7057.723 - 7108.135: 76.9731% ( 78) 00:07:03.344 7108.135 - 7158.548: 77.3666% ( 69) 00:07:03.344 7158.548 - 7208.960: 77.7258% ( 63) 00:07:03.344 7208.960 - 7259.372: 78.1307% ( 71) 00:07:03.344 7259.372 - 7309.785: 78.5014% ( 65) 00:07:03.344 7309.785 - 7360.197: 78.8777% ( 66) 00:07:03.344 7360.197 - 7410.609: 79.2370% ( 63) 00:07:03.344 7410.609 - 7461.022: 79.6191% ( 67) 00:07:03.344 7461.022 - 7511.434: 79.9783% ( 63) 00:07:03.344 7511.434 - 7561.846: 80.3091% ( 58) 00:07:03.344 7561.846 - 7612.258: 80.6512% ( 60) 00:07:03.344 7612.258 - 7662.671: 80.9877% ( 59) 00:07:03.344 7662.671 - 7713.083: 81.3013% ( 55) 00:07:03.344 7713.083 - 7763.495: 81.6435% ( 60) 00:07:03.344 7763.495 - 7813.908: 81.9457% ( 53) 00:07:03.344 7813.908 - 7864.320: 82.3107% ( 64) 00:07:03.344 7864.320 - 7914.732: 82.5901% ( 49) 00:07:03.344 7914.732 - 7965.145: 82.8581% ( 47) 00:07:03.344 7965.145 - 8015.557: 83.0862% ( 40) 00:07:03.344 8015.557 - 8065.969: 83.3314% ( 43) 00:07:03.344 8065.969 - 8116.382: 83.5709% ( 42) 00:07:03.344 8116.382 - 8166.794: 83.8447% ( 48) 00:07:03.344 8166.794 - 8217.206: 84.0842% ( 42) 00:07:03.344 8217.206 - 8267.618: 84.3009% ( 38) 00:07:03.344 8267.618 - 8318.031: 84.5005% ( 35) 00:07:03.344 8318.031 - 8368.443: 84.6943% ( 34) 00:07:03.344 8368.443 - 8418.855: 84.9339% ( 42) 00:07:03.344 8418.855 - 8469.268: 85.1334% ( 35) 00:07:03.344 8469.268 - 8519.680: 85.3273% ( 34) 00:07:03.344 8519.680 - 8570.092: 85.5383% ( 37) 00:07:03.344 8570.092 - 8620.505: 85.7094% ( 30) 00:07:03.344 8620.505 - 8670.917: 85.8805% ( 30) 00:07:03.344 8670.917 - 8721.329: 86.0858% ( 36) 00:07:03.344 8721.329 - 8771.742: 86.2283% ( 25) 00:07:03.344 8771.742 - 8822.154: 86.3538% ( 22) 00:07:03.344 8822.154 - 8872.566: 86.4735% ( 21) 00:07:03.344 8872.566 - 8922.978: 86.5990% ( 22) 00:07:03.344 8922.978 - 8973.391: 86.7188% ( 21) 00:07:03.344 8973.391 - 9023.803: 86.8385% ( 21) 00:07:03.344 9023.803 - 9074.215: 86.9411% ( 18) 00:07:03.344 9074.215 - 9124.628: 87.0723% ( 23) 00:07:03.344 9124.628 - 9175.040: 87.2035% ( 23) 00:07:03.344 9175.040 - 9225.452: 87.3232% ( 21) 00:07:03.344 9225.452 - 9275.865: 87.4772% ( 27) 00:07:03.344 9275.865 - 9326.277: 87.6996% ( 39) 00:07:03.344 9326.277 - 9376.689: 87.8821% ( 32) 00:07:03.344 9376.689 - 9427.102: 88.0531% ( 30) 00:07:03.344 9427.102 - 9477.514: 88.2527% ( 35) 00:07:03.344 9477.514 - 9527.926: 88.4181% ( 29) 00:07:03.344 9527.926 - 9578.338: 88.6234% ( 36) 00:07:03.344 9578.338 - 9628.751: 88.8230% ( 35) 00:07:03.344 9628.751 - 9679.163: 89.0397% ( 38) 00:07:03.344 9679.163 - 9729.575: 89.2735% ( 41) 00:07:03.344 9729.575 - 9779.988: 89.5244% ( 44) 00:07:03.344 9779.988 - 9830.400: 89.7297% ( 36) 00:07:03.344 9830.400 - 9880.812: 89.9407% ( 37) 00:07:03.344 9880.812 - 9931.225: 90.1517% ( 37) 00:07:03.344 9931.225 - 9981.637: 90.3684% ( 38) 00:07:03.344 9981.637 - 10032.049: 90.5566% ( 33) 00:07:03.344 10032.049 - 10082.462: 90.7219% ( 29) 00:07:03.344 
10082.462 - 10132.874: 90.8645% ( 25) 00:07:03.344 10132.874 - 10183.286: 90.9729% ( 19) 00:07:03.344 10183.286 - 10233.698: 91.0641% ( 16) 00:07:03.344 10233.698 - 10284.111: 91.1553% ( 16) 00:07:03.344 10284.111 - 10334.523: 91.2637% ( 19) 00:07:03.344 10334.523 - 10384.935: 91.3948% ( 23) 00:07:03.344 10384.935 - 10435.348: 91.5032% ( 19) 00:07:03.344 10435.348 - 10485.760: 91.6229% ( 21) 00:07:03.344 10485.760 - 10536.172: 91.7256% ( 18) 00:07:03.344 10536.172 - 10586.585: 91.8453% ( 21) 00:07:03.344 10586.585 - 10636.997: 92.0050% ( 28) 00:07:03.344 10636.997 - 10687.409: 92.1476% ( 25) 00:07:03.344 10687.409 - 10737.822: 92.3187% ( 30) 00:07:03.345 10737.822 - 10788.234: 92.5068% ( 33) 00:07:03.345 10788.234 - 10838.646: 92.6722% ( 29) 00:07:03.345 10838.646 - 10889.058: 92.8547% ( 32) 00:07:03.345 10889.058 - 10939.471: 93.0087% ( 27) 00:07:03.345 10939.471 - 10989.883: 93.1740% ( 29) 00:07:03.345 10989.883 - 11040.295: 93.3337% ( 28) 00:07:03.345 11040.295 - 11090.708: 93.4934% ( 28) 00:07:03.345 11090.708 - 11141.120: 93.6417% ( 26) 00:07:03.345 11141.120 - 11191.532: 93.7671% ( 22) 00:07:03.345 11191.532 - 11241.945: 93.8926% ( 22) 00:07:03.345 11241.945 - 11292.357: 94.0123% ( 21) 00:07:03.345 11292.357 - 11342.769: 94.1264% ( 20) 00:07:03.345 11342.769 - 11393.182: 94.2404% ( 20) 00:07:03.345 11393.182 - 11443.594: 94.3260% ( 15) 00:07:03.345 11443.594 - 11494.006: 94.3944% ( 12) 00:07:03.345 11494.006 - 11544.418: 94.4514% ( 10) 00:07:03.345 11544.418 - 11594.831: 94.5198% ( 12) 00:07:03.345 11594.831 - 11645.243: 94.5997% ( 14) 00:07:03.345 11645.243 - 11695.655: 94.6396% ( 7) 00:07:03.345 11695.655 - 11746.068: 94.6909% ( 9) 00:07:03.345 11746.068 - 11796.480: 94.7536% ( 11) 00:07:03.345 11796.480 - 11846.892: 94.8620% ( 19) 00:07:03.345 11846.892 - 11897.305: 94.9361% ( 13) 00:07:03.345 11897.305 - 11947.717: 95.0217% ( 15) 00:07:03.345 11947.717 - 11998.129: 95.1129% ( 16) 00:07:03.345 11998.129 - 12048.542: 95.1927% ( 14) 00:07:03.345 12048.542 - 12098.954: 95.2954% ( 18) 00:07:03.345 12098.954 - 12149.366: 95.3866% ( 16) 00:07:03.345 12149.366 - 12199.778: 95.4722% ( 15) 00:07:03.345 12199.778 - 12250.191: 95.5691% ( 17) 00:07:03.345 12250.191 - 12300.603: 95.6490% ( 14) 00:07:03.345 12300.603 - 12351.015: 95.7288% ( 14) 00:07:03.345 12351.015 - 12401.428: 95.8086% ( 14) 00:07:03.345 12401.428 - 12451.840: 95.8999% ( 16) 00:07:03.345 12451.840 - 12502.252: 95.9911% ( 16) 00:07:03.345 12502.252 - 12552.665: 96.0766% ( 15) 00:07:03.345 12552.665 - 12603.077: 96.1736% ( 17) 00:07:03.345 12603.077 - 12653.489: 96.2534% ( 14) 00:07:03.345 12653.489 - 12703.902: 96.3390% ( 15) 00:07:03.345 12703.902 - 12754.314: 96.4131% ( 13) 00:07:03.345 12754.314 - 12804.726: 96.4872% ( 13) 00:07:03.345 12804.726 - 12855.138: 96.5671% ( 14) 00:07:03.345 12855.138 - 12905.551: 96.6583% ( 16) 00:07:03.345 12905.551 - 13006.375: 96.8408% ( 32) 00:07:03.345 13006.375 - 13107.200: 96.9833% ( 25) 00:07:03.345 13107.200 - 13208.025: 97.1943% ( 37) 00:07:03.345 13208.025 - 13308.849: 97.3654% ( 30) 00:07:03.345 13308.849 - 13409.674: 97.5422% ( 31) 00:07:03.345 13409.674 - 13510.498: 97.7019% ( 28) 00:07:03.345 13510.498 - 13611.323: 97.8387% ( 24) 00:07:03.345 13611.323 - 13712.148: 97.9585% ( 21) 00:07:03.345 13712.148 - 13812.972: 98.1068% ( 26) 00:07:03.345 13812.972 - 13913.797: 98.2607% ( 27) 00:07:03.345 13913.797 - 14014.622: 98.3691% ( 19) 00:07:03.345 14014.622 - 14115.446: 98.4204% ( 9) 00:07:03.345 14115.446 - 14216.271: 98.4375% ( 3) 00:07:03.345 14216.271 - 14317.095: 98.4945% ( 10) 
00:07:03.345 14317.095 - 14417.920: 98.5173% ( 4) 00:07:03.345 14417.920 - 14518.745: 98.5458% ( 5) 00:07:03.345 14518.745 - 14619.569: 98.5744% ( 5) 00:07:03.345 14619.569 - 14720.394: 98.6029% ( 5) 00:07:03.345 14720.394 - 14821.218: 98.6314% ( 5) 00:07:03.345 14821.218 - 14922.043: 98.6485% ( 3) 00:07:03.345 14922.043 - 15022.868: 98.6713% ( 4) 00:07:03.345 15022.868 - 15123.692: 98.6941% ( 4) 00:07:03.345 15123.692 - 15224.517: 98.7169% ( 4) 00:07:03.345 15224.517 - 15325.342: 98.7283% ( 2) 00:07:03.345 15325.342 - 15426.166: 98.7454% ( 3) 00:07:03.345 15426.166 - 15526.991: 98.7625% ( 3) 00:07:03.345 15526.991 - 15627.815: 98.7797% ( 3) 00:07:03.345 15627.815 - 15728.640: 98.7968% ( 3) 00:07:03.345 15728.640 - 15829.465: 98.8139% ( 3) 00:07:03.345 15829.465 - 15930.289: 98.8310% ( 3) 00:07:03.345 15930.289 - 16031.114: 98.8481% ( 3) 00:07:03.345 16031.114 - 16131.938: 98.8937% ( 8) 00:07:03.345 16131.938 - 16232.763: 98.9279% ( 6) 00:07:03.345 16232.763 - 16333.588: 98.9678% ( 7) 00:07:03.345 16333.588 - 16434.412: 99.0192% ( 9) 00:07:03.345 16434.412 - 16535.237: 99.0477% ( 5) 00:07:03.345 16535.237 - 16636.062: 99.0705% ( 4) 00:07:03.345 16636.062 - 16736.886: 99.0933% ( 4) 00:07:03.345 16736.886 - 16837.711: 99.1218% ( 5) 00:07:03.345 16837.711 - 16938.535: 99.1560% ( 6) 00:07:03.345 16938.535 - 17039.360: 99.1845% ( 5) 00:07:03.345 17039.360 - 17140.185: 99.2130% ( 5) 00:07:03.345 17140.185 - 17241.009: 99.2416% ( 5) 00:07:03.345 17241.009 - 17341.834: 99.2644% ( 4) 00:07:03.345 17341.834 - 17442.658: 99.2701% ( 1) 00:07:03.345 26214.400 - 26416.049: 99.2986% ( 5) 00:07:03.345 26416.049 - 26617.698: 99.3271% ( 5) 00:07:03.345 26617.698 - 26819.348: 99.3442% ( 3) 00:07:03.345 26819.348 - 27020.997: 99.3727% ( 5) 00:07:03.345 27020.997 - 27222.646: 99.3955% ( 4) 00:07:03.345 27222.646 - 27424.295: 99.4240% ( 5) 00:07:03.345 27424.295 - 27625.945: 99.4583% ( 6) 00:07:03.345 27625.945 - 27827.594: 99.4982% ( 7) 00:07:03.345 27827.594 - 28029.243: 99.5381% ( 7) 00:07:03.345 28029.243 - 28230.892: 99.5780% ( 7) 00:07:03.345 28230.892 - 28432.542: 99.6179% ( 7) 00:07:03.345 28432.542 - 28634.191: 99.6350% ( 3) 00:07:03.345 34482.018 - 34683.668: 99.6407% ( 1) 00:07:03.345 34683.668 - 34885.317: 99.6750% ( 6) 00:07:03.345 34885.317 - 35086.966: 99.7092% ( 6) 00:07:03.345 35086.966 - 35288.615: 99.7491% ( 7) 00:07:03.345 35288.615 - 35490.265: 99.7890% ( 7) 00:07:03.345 35490.265 - 35691.914: 99.8232% ( 6) 00:07:03.345 35691.914 - 35893.563: 99.8631% ( 7) 00:07:03.345 35893.563 - 36095.212: 99.9031% ( 7) 00:07:03.345 36095.212 - 36296.862: 99.9430% ( 7) 00:07:03.345 36296.862 - 36498.511: 99.9829% ( 7) 00:07:03.345 36498.511 - 36700.160: 100.0000% ( 3) 00:07:03.345 00:07:03.345 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:03.345 ============================================================================== 00:07:03.345 Range in us Cumulative IO count 00:07:03.345 5671.385 - 5696.591: 0.0455% ( 8) 00:07:03.345 5696.591 - 5721.797: 0.0909% ( 8) 00:07:03.345 5721.797 - 5747.003: 0.2159% ( 22) 00:07:03.345 5747.003 - 5772.209: 0.4773% ( 46) 00:07:03.345 5772.209 - 5797.415: 0.9205% ( 78) 00:07:03.345 5797.415 - 5822.622: 1.5284% ( 107) 00:07:03.345 5822.622 - 5847.828: 2.5398% ( 178) 00:07:03.345 5847.828 - 5873.034: 3.8125% ( 224) 00:07:03.345 5873.034 - 5898.240: 5.3352% ( 268) 00:07:03.345 5898.240 - 5923.446: 7.1023% ( 311) 00:07:03.345 5923.446 - 5948.652: 8.8636% ( 310) 00:07:03.345 5948.652 - 5973.858: 10.7386% ( 330) 00:07:03.345 5973.858 - 5999.065: 12.6364% ( 
334) 00:07:03.345 5999.065 - 6024.271: 14.6818% ( 360) 00:07:03.345 6024.271 - 6049.477: 16.7784% ( 369) 00:07:03.345 6049.477 - 6074.683: 18.9773% ( 387) 00:07:03.345 6074.683 - 6099.889: 21.1136% ( 376) 00:07:03.345 6099.889 - 6125.095: 23.2955% ( 384) 00:07:03.345 6125.095 - 6150.302: 25.5455% ( 396) 00:07:03.345 6150.302 - 6175.508: 27.7159% ( 382) 00:07:03.345 6175.508 - 6200.714: 29.8182% ( 370) 00:07:03.345 6200.714 - 6225.920: 31.9432% ( 374) 00:07:03.345 6225.920 - 6251.126: 34.1307% ( 385) 00:07:03.345 6251.126 - 6276.332: 36.2955% ( 381) 00:07:03.345 6276.332 - 6301.538: 38.4773% ( 384) 00:07:03.345 6301.538 - 6326.745: 40.6307% ( 379) 00:07:03.345 6326.745 - 6351.951: 42.8125% ( 384) 00:07:03.345 6351.951 - 6377.157: 45.0227% ( 389) 00:07:03.345 6377.157 - 6402.363: 47.1761% ( 379) 00:07:03.345 6402.363 - 6427.569: 49.3068% ( 375) 00:07:03.345 6427.569 - 6452.775: 51.5114% ( 388) 00:07:03.345 6452.775 - 6503.188: 55.8864% ( 770) 00:07:03.345 6503.188 - 6553.600: 60.1761% ( 755) 00:07:03.345 6553.600 - 6604.012: 64.2784% ( 722) 00:07:03.345 6604.012 - 6654.425: 67.9261% ( 642) 00:07:03.345 6654.425 - 6704.837: 70.5625% ( 464) 00:07:03.345 6704.837 - 6755.249: 72.1705% ( 283) 00:07:03.345 6755.249 - 6805.662: 73.3523% ( 208) 00:07:03.345 6805.662 - 6856.074: 74.1023% ( 132) 00:07:03.345 6856.074 - 6906.486: 74.6534% ( 97) 00:07:03.345 6906.486 - 6956.898: 75.0682% ( 73) 00:07:03.345 6956.898 - 7007.311: 75.4034% ( 59) 00:07:03.345 7007.311 - 7057.723: 75.6818% ( 49) 00:07:03.345 7057.723 - 7108.135: 76.0568% ( 66) 00:07:03.345 7108.135 - 7158.548: 76.4602% ( 71) 00:07:03.345 7158.548 - 7208.960: 76.8352% ( 66) 00:07:03.346 7208.960 - 7259.372: 77.2443% ( 72) 00:07:03.346 7259.372 - 7309.785: 77.6364% ( 69) 00:07:03.346 7309.785 - 7360.197: 78.0966% ( 81) 00:07:03.346 7360.197 - 7410.609: 78.5625% ( 82) 00:07:03.346 7410.609 - 7461.022: 79.0000% ( 77) 00:07:03.346 7461.022 - 7511.434: 79.3750% ( 66) 00:07:03.346 7511.434 - 7561.846: 79.7557% ( 67) 00:07:03.346 7561.846 - 7612.258: 80.1364% ( 67) 00:07:03.346 7612.258 - 7662.671: 80.5000% ( 64) 00:07:03.346 7662.671 - 7713.083: 80.8523% ( 62) 00:07:03.346 7713.083 - 7763.495: 81.2159% ( 64) 00:07:03.346 7763.495 - 7813.908: 81.5511% ( 59) 00:07:03.346 7813.908 - 7864.320: 81.8920% ( 60) 00:07:03.346 7864.320 - 7914.732: 82.2045% ( 55) 00:07:03.346 7914.732 - 7965.145: 82.5455% ( 60) 00:07:03.346 7965.145 - 8015.557: 82.9148% ( 65) 00:07:03.346 8015.557 - 8065.969: 83.1932% ( 49) 00:07:03.346 8065.969 - 8116.382: 83.4830% ( 51) 00:07:03.346 8116.382 - 8166.794: 83.7500% ( 47) 00:07:03.346 8166.794 - 8217.206: 83.9830% ( 41) 00:07:03.346 8217.206 - 8267.618: 84.2102% ( 40) 00:07:03.346 8267.618 - 8318.031: 84.4148% ( 36) 00:07:03.346 8318.031 - 8368.443: 84.6364% ( 39) 00:07:03.346 8368.443 - 8418.855: 84.8580% ( 39) 00:07:03.346 8418.855 - 8469.268: 85.1023% ( 43) 00:07:03.346 8469.268 - 8519.680: 85.3409% ( 42) 00:07:03.346 8519.680 - 8570.092: 85.5284% ( 33) 00:07:03.346 8570.092 - 8620.505: 85.7159% ( 33) 00:07:03.346 8620.505 - 8670.917: 85.8920% ( 31) 00:07:03.346 8670.917 - 8721.329: 86.0682% ( 31) 00:07:03.346 8721.329 - 8771.742: 86.2386% ( 30) 00:07:03.346 8771.742 - 8822.154: 86.4205% ( 32) 00:07:03.346 8822.154 - 8872.566: 86.6136% ( 34) 00:07:03.346 8872.566 - 8922.978: 86.7784% ( 29) 00:07:03.346 8922.978 - 8973.391: 86.9261% ( 26) 00:07:03.346 8973.391 - 9023.803: 87.1023% ( 31) 00:07:03.346 9023.803 - 9074.215: 87.2386% ( 24) 00:07:03.346 9074.215 - 9124.628: 87.3920% ( 27) 00:07:03.346 9124.628 - 9175.040: 87.5284% ( 
24) 00:07:03.346 9175.040 - 9225.452: 87.7159% ( 33) 00:07:03.346 9225.452 - 9275.865: 87.8807% ( 29) 00:07:03.346 9275.865 - 9326.277: 87.9943% ( 20) 00:07:03.346 9326.277 - 9376.689: 88.1080% ( 20) 00:07:03.346 9376.689 - 9427.102: 88.2216% ( 20) 00:07:03.346 9427.102 - 9477.514: 88.3295% ( 19) 00:07:03.346 9477.514 - 9527.926: 88.4318% ( 18) 00:07:03.346 9527.926 - 9578.338: 88.5284% ( 17) 00:07:03.346 9578.338 - 9628.751: 88.6307% ( 18) 00:07:03.346 9628.751 - 9679.163: 88.7500% ( 21) 00:07:03.346 9679.163 - 9729.575: 88.8580% ( 19) 00:07:03.346 9729.575 - 9779.988: 89.0000% ( 25) 00:07:03.346 9779.988 - 9830.400: 89.1534% ( 27) 00:07:03.346 9830.400 - 9880.812: 89.3011% ( 26) 00:07:03.346 9880.812 - 9931.225: 89.4659% ( 29) 00:07:03.346 9931.225 - 9981.637: 89.5852% ( 21) 00:07:03.346 9981.637 - 10032.049: 89.7273% ( 25) 00:07:03.346 10032.049 - 10082.462: 89.8523% ( 22) 00:07:03.346 10082.462 - 10132.874: 90.0000% ( 26) 00:07:03.346 10132.874 - 10183.286: 90.1989% ( 35) 00:07:03.346 10183.286 - 10233.698: 90.3295% ( 23) 00:07:03.346 10233.698 - 10284.111: 90.4943% ( 29) 00:07:03.346 10284.111 - 10334.523: 90.6705% ( 31) 00:07:03.346 10334.523 - 10384.935: 90.8523% ( 32) 00:07:03.346 10384.935 - 10435.348: 91.0455% ( 34) 00:07:03.346 10435.348 - 10485.760: 91.2500% ( 36) 00:07:03.346 10485.760 - 10536.172: 91.4545% ( 36) 00:07:03.346 10536.172 - 10586.585: 91.6534% ( 35) 00:07:03.346 10586.585 - 10636.997: 91.8466% ( 34) 00:07:03.346 10636.997 - 10687.409: 92.0398% ( 34) 00:07:03.346 10687.409 - 10737.822: 92.1989% ( 28) 00:07:03.346 10737.822 - 10788.234: 92.3580% ( 28) 00:07:03.346 10788.234 - 10838.646: 92.5227% ( 29) 00:07:03.346 10838.646 - 10889.058: 92.6761% ( 27) 00:07:03.346 10889.058 - 10939.471: 92.7955% ( 21) 00:07:03.346 10939.471 - 10989.883: 92.8977% ( 18) 00:07:03.346 10989.883 - 11040.295: 92.9943% ( 17) 00:07:03.346 11040.295 - 11090.708: 93.0966% ( 18) 00:07:03.346 11090.708 - 11141.120: 93.1591% ( 11) 00:07:03.346 11141.120 - 11191.532: 93.2386% ( 14) 00:07:03.346 11191.532 - 11241.945: 93.3352% ( 17) 00:07:03.346 11241.945 - 11292.357: 93.4148% ( 14) 00:07:03.346 11292.357 - 11342.769: 93.4659% ( 9) 00:07:03.346 11342.769 - 11393.182: 93.5114% ( 8) 00:07:03.346 11393.182 - 11443.594: 93.5852% ( 13) 00:07:03.346 11443.594 - 11494.006: 93.6534% ( 12) 00:07:03.346 11494.006 - 11544.418: 93.7216% ( 12) 00:07:03.346 11544.418 - 11594.831: 93.7670% ( 8) 00:07:03.346 11594.831 - 11645.243: 93.8239% ( 10) 00:07:03.346 11645.243 - 11695.655: 93.8977% ( 13) 00:07:03.346 11695.655 - 11746.068: 93.9602% ( 11) 00:07:03.346 11746.068 - 11796.480: 94.0682% ( 19) 00:07:03.346 11796.480 - 11846.892: 94.2045% ( 24) 00:07:03.346 11846.892 - 11897.305: 94.3125% ( 19) 00:07:03.346 11897.305 - 11947.717: 94.4318% ( 21) 00:07:03.346 11947.717 - 11998.129: 94.5852% ( 27) 00:07:03.346 11998.129 - 12048.542: 94.7102% ( 22) 00:07:03.346 12048.542 - 12098.954: 94.8409% ( 23) 00:07:03.346 12098.954 - 12149.366: 94.9659% ( 22) 00:07:03.346 12149.366 - 12199.778: 95.0909% ( 22) 00:07:03.346 12199.778 - 12250.191: 95.2159% ( 22) 00:07:03.346 12250.191 - 12300.603: 95.3295% ( 20) 00:07:03.346 12300.603 - 12351.015: 95.4432% ( 20) 00:07:03.346 12351.015 - 12401.428: 95.5739% ( 23) 00:07:03.346 12401.428 - 12451.840: 95.6875% ( 20) 00:07:03.346 12451.840 - 12502.252: 95.7784% ( 16) 00:07:03.346 12502.252 - 12552.665: 95.8807% ( 18) 00:07:03.346 12552.665 - 12603.077: 95.9716% ( 16) 00:07:03.346 12603.077 - 12653.489: 96.0511% ( 14) 00:07:03.346 12653.489 - 12703.902: 96.1477% ( 17) 00:07:03.346 
12703.902 - 12754.314: 96.2500% ( 18) 00:07:03.346 12754.314 - 12804.726: 96.3295% ( 14) 00:07:03.346 12804.726 - 12855.138: 96.4205% ( 16) 00:07:03.346 12855.138 - 12905.551: 96.5227% ( 18) 00:07:03.346 12905.551 - 13006.375: 96.7102% ( 33) 00:07:03.346 13006.375 - 13107.200: 96.8864% ( 31) 00:07:03.346 13107.200 - 13208.025: 97.1250% ( 42) 00:07:03.346 13208.025 - 13308.849: 97.3636% ( 42) 00:07:03.346 13308.849 - 13409.674: 97.5682% ( 36) 00:07:03.346 13409.674 - 13510.498: 97.7045% ( 24) 00:07:03.346 13510.498 - 13611.323: 97.8466% ( 25) 00:07:03.346 13611.323 - 13712.148: 97.9773% ( 23) 00:07:03.346 13712.148 - 13812.972: 98.1420% ( 29) 00:07:03.346 13812.972 - 13913.797: 98.2273% ( 15) 00:07:03.346 13913.797 - 14014.622: 98.3011% ( 13) 00:07:03.346 14014.622 - 14115.446: 98.3523% ( 9) 00:07:03.346 14115.446 - 14216.271: 98.4148% ( 11) 00:07:03.346 14216.271 - 14317.095: 98.4659% ( 9) 00:07:03.346 14317.095 - 14417.920: 98.5000% ( 6) 00:07:03.346 14417.920 - 14518.745: 98.5341% ( 6) 00:07:03.346 14518.745 - 14619.569: 98.5682% ( 6) 00:07:03.346 14619.569 - 14720.394: 98.6023% ( 6) 00:07:03.346 14720.394 - 14821.218: 98.6364% ( 6) 00:07:03.346 14821.218 - 14922.043: 98.6705% ( 6) 00:07:03.346 14922.043 - 15022.868: 98.7102% ( 7) 00:07:03.346 15022.868 - 15123.692: 98.7443% ( 6) 00:07:03.346 15123.692 - 15224.517: 98.7784% ( 6) 00:07:03.346 15224.517 - 15325.342: 98.8182% ( 7) 00:07:03.346 15325.342 - 15426.166: 98.8409% ( 4) 00:07:03.346 15426.166 - 15526.991: 98.8580% ( 3) 00:07:03.346 15526.991 - 15627.815: 98.8750% ( 3) 00:07:03.346 15627.815 - 15728.640: 98.8864% ( 2) 00:07:03.346 15728.640 - 15829.465: 98.9034% ( 3) 00:07:03.346 15829.465 - 15930.289: 98.9148% ( 2) 00:07:03.346 15930.289 - 16031.114: 98.9432% ( 5) 00:07:03.346 16031.114 - 16131.938: 98.9773% ( 6) 00:07:03.346 16131.938 - 16232.763: 99.0170% ( 7) 00:07:03.346 16232.763 - 16333.588: 99.0398% ( 4) 00:07:03.346 16333.588 - 16434.412: 99.0625% ( 4) 00:07:03.346 16434.412 - 16535.237: 99.0909% ( 5) 00:07:03.346 16535.237 - 16636.062: 99.1250% ( 6) 00:07:03.347 16636.062 - 16736.886: 99.1534% ( 5) 00:07:03.347 16736.886 - 16837.711: 99.1818% ( 5) 00:07:03.347 16837.711 - 16938.535: 99.2102% ( 5) 00:07:03.347 16938.535 - 17039.360: 99.2386% ( 5) 00:07:03.347 17039.360 - 17140.185: 99.2670% ( 5) 00:07:03.347 17140.185 - 17241.009: 99.2727% ( 1) 00:07:03.347 18854.203 - 18955.028: 99.2784% ( 1) 00:07:03.347 18955.028 - 19055.852: 99.3011% ( 4) 00:07:03.347 19055.852 - 19156.677: 99.3239% ( 4) 00:07:03.347 19156.677 - 19257.502: 99.3466% ( 4) 00:07:03.347 19257.502 - 19358.326: 99.3693% ( 4) 00:07:03.347 19358.326 - 19459.151: 99.3977% ( 5) 00:07:03.347 19459.151 - 19559.975: 99.4205% ( 4) 00:07:03.347 19559.975 - 19660.800: 99.4432% ( 4) 00:07:03.347 19660.800 - 19761.625: 99.4659% ( 4) 00:07:03.347 19761.625 - 19862.449: 99.4886% ( 4) 00:07:03.347 19862.449 - 19963.274: 99.5114% ( 4) 00:07:03.347 19963.274 - 20064.098: 99.5398% ( 5) 00:07:03.347 20064.098 - 20164.923: 99.5625% ( 4) 00:07:03.347 20164.923 - 20265.748: 99.5852% ( 4) 00:07:03.347 20265.748 - 20366.572: 99.6080% ( 4) 00:07:03.347 20366.572 - 20467.397: 99.6307% ( 4) 00:07:03.347 20467.397 - 20568.222: 99.6364% ( 1) 00:07:03.347 26214.400 - 26416.049: 99.6705% ( 6) 00:07:03.347 26416.049 - 26617.698: 99.7102% ( 7) 00:07:03.347 26617.698 - 26819.348: 99.7500% ( 7) 00:07:03.347 26819.348 - 27020.997: 99.7841% ( 6) 00:07:03.347 27020.997 - 27222.646: 99.8239% ( 7) 00:07:03.347 27222.646 - 27424.295: 99.8636% ( 7) 00:07:03.347 27424.295 - 27625.945: 99.9034% ( 7) 
00:07:03.347 27625.945 - 27827.594: 99.9432% ( 7)
00:07:03.347 27827.594 - 28029.243: 99.9830% ( 7)
00:07:03.347 28029.243 - 28230.892: 100.0000% ( 3)
00:07:03.347
00:07:03.347 01:19:59 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0
00:07:04.725 Initializing NVMe Controllers
00:07:04.725 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:07:04.725 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:07:04.725 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:07:04.725 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:07:04.725 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:07:04.725 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:07:04.725 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:07:04.725 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:07:04.725 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:07:04.725 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:07:04.725 Initialization complete. Launching workers.
00:07:04.725 ========================================================
00:07:04.725 Latency(us)
00:07:04.725 Device Information : IOPS MiB/s Average min max
00:07:04.725 PCIE (0000:00:10.0) NSID 1 from core 0: 17371.71 203.57 7381.46 5665.89 32446.90
00:07:04.725 PCIE (0000:00:11.0) NSID 1 from core 0: 17371.71 203.57 7371.66 5725.34 30623.05
00:07:04.725 PCIE (0000:00:13.0) NSID 1 from core 0: 17371.71 203.57 7361.63 5733.01 29371.52
00:07:04.725 PCIE (0000:00:12.0) NSID 1 from core 0: 17371.71 203.57 7350.64 5694.73 27636.13
00:07:04.725 PCIE (0000:00:12.0) NSID 2 from core 0: 17371.71 203.57 7339.62 5777.69 25886.28
00:07:04.725 PCIE (0000:00:12.0) NSID 3 from core 0: 17435.58 204.32 7301.62 5666.63 20643.93
00:07:04.725 ========================================================
00:07:04.725 Total : 104294.13 1222.20 7351.08 5665.89 32446.90
00:07:04.726
00:07:04.726 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0:
00:07:04.726 =================================================================================
00:07:04.726 1.00000% : 6024.271us
00:07:04.726 10.00000% : 6452.775us
00:07:04.726 25.00000% : 6654.425us
00:07:04.726 50.00000% : 6956.898us
00:07:04.726 75.00000% : 7410.609us
00:07:04.726 90.00000% : 8318.031us
00:07:04.726 95.00000% : 9729.575us
00:07:04.726 98.00000% : 12098.954us
00:07:04.726 99.00000% : 13913.797us
00:07:04.726 99.50000% : 26617.698us
00:07:04.726 99.90000% : 32062.228us
00:07:04.726 99.99000% : 32465.526us
00:07:04.726 99.99900% : 32465.526us
00:07:04.726 99.99990% : 32465.526us
00:07:04.726 99.99999% : 32465.526us
00:07:04.726
00:07:04.726 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0:
00:07:04.726 =================================================================================
00:07:04.726 1.00000% : 6074.683us
00:07:04.726 10.00000% : 6553.600us
00:07:04.726 25.00000% : 6704.837us
00:07:04.726 50.00000% : 6906.486us
00:07:04.726 75.00000% : 7360.197us
00:07:04.726 90.00000% : 8318.031us
00:07:04.726 95.00000% : 9527.926us
00:07:04.726 98.00000% : 11998.129us
00:07:04.726 99.00000% : 14317.095us
00:07:04.726 99.50000% : 24903.680us
00:07:04.726 99.90000% : 30247.385us
00:07:04.726 99.99000% : 30650.683us
00:07:04.726 99.99900% : 30650.683us
00:07:04.726 99.99990% : 30650.683us
00:07:04.726 99.99999% : 30650.683us
00:07:04.726
00:07:04.726 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0:
00:07:04.726 =================================================================================
00:07:04.726 1.00000% : 6099.889us
00:07:04.726 10.00000% : 6503.188us
00:07:04.726 25.00000% : 6704.837us
00:07:04.726 50.00000% : 6906.486us
00:07:04.726 75.00000% : 7410.609us
00:07:04.726 90.00000% : 8418.855us
00:07:04.726 95.00000% : 9527.926us
00:07:04.726 98.00000% : 11695.655us
00:07:04.726 99.00000% : 13913.797us
00:07:04.726 99.50000% : 23895.434us
00:07:04.726 99.90000% : 29037.489us
00:07:04.726 99.99000% : 29440.788us
00:07:04.726 99.99900% : 29440.788us
00:07:04.726 99.99990% : 29440.788us
00:07:04.726 99.99999% : 29440.788us
00:07:04.726
00:07:04.726 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0:
00:07:04.726 =================================================================================
00:07:04.726 1.00000% : 6099.889us
00:07:04.726 10.00000% : 6553.600us
00:07:04.726 25.00000% : 6704.837us
00:07:04.726 50.00000% : 6906.486us
00:07:04.726 75.00000% : 7410.609us
00:07:04.726 90.00000% : 8368.443us
00:07:04.726 95.00000% : 9477.514us
00:07:04.726 98.00000% : 11897.305us
00:07:04.726 99.00000% : 14216.271us
00:07:04.726 99.50000% : 22181.415us
00:07:04.726 99.90000% : 27222.646us
00:07:04.726 99.99000% : 27625.945us
00:07:04.726 99.99900% : 27827.594us
00:07:04.726 99.99990% : 27827.594us
00:07:04.726 99.99999% : 27827.594us
00:07:04.726
00:07:04.726 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0:
00:07:04.726 =================================================================================
00:07:04.726 1.00000% : 6074.683us
00:07:04.726 10.00000% : 6553.600us
00:07:04.726 25.00000% : 6704.837us
00:07:04.726 50.00000% : 6906.486us
00:07:04.726 75.00000% : 7360.197us
00:07:04.726 90.00000% : 8318.031us
00:07:04.726 95.00000% : 9628.751us
00:07:04.726 98.00000% : 11796.480us
00:07:04.726 99.00000% : 14216.271us
00:07:04.726 99.50000% : 20568.222us
00:07:04.726 99.90000% : 25508.628us
00:07:04.726 99.99000% : 26012.751us
00:07:04.726 99.99900% : 26012.751us
00:07:04.726 99.99990% : 26012.751us
00:07:04.726 99.99999% : 26012.751us
00:07:04.726
00:07:04.726 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0:
00:07:04.726 =================================================================================
00:07:04.726 1.00000% : 6074.683us
00:07:04.726 10.00000% : 6553.600us
00:07:04.726 25.00000% : 6704.837us
00:07:04.726 50.00000% : 6906.486us
00:07:04.726 75.00000% : 7410.609us
00:07:04.726 90.00000% : 8318.031us
00:07:04.726 95.00000% : 9628.751us
00:07:04.726 98.00000% : 11846.892us
00:07:04.726 99.00000% : 13812.972us
00:07:04.726 99.50000% : 14619.569us
00:07:04.726 99.90000% : 20265.748us
00:07:04.726 99.99000% : 20669.046us
00:07:04.726 99.99900% : 20669.046us
00:07:04.726 99.99990% : 20669.046us
00:07:04.726 99.99999% : 20669.046us
00:07:04.726
00:07:04.726 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0:
00:07:04.726 ==============================================================================
00:07:04.726 Range in us Cumulative IO count
00:07:04.726 5646.178 - 5671.385: 0.0057% ( 1)
00:07:04.726 5721.797 - 5747.003: 0.0230% ( 3)
00:07:04.726 5747.003 - 5772.209: 0.0287% ( 1)
00:07:04.726 5797.415 - 5822.622: 0.0460% ( 3)
00:07:04.726 5822.622 - 5847.828: 0.0689% ( 4)
00:07:04.726 5847.828 - 5873.034: 0.2183% ( 26)
00:07:04.726 5873.034 - 5898.240: 0.3102% ( 16)
00:07:04.726 5898.240 - 5923.446: 0.4021% ( 16)
00:07:04.726 5923.446 - 5948.652: 0.5113% ( 19)
00:07:04.726 5948.652 - 5973.858: 0.6491% ( 24)
00:07:04.726 5973.858 -
5999.065: 0.8215% ( 30) 00:07:04.726 5999.065 - 6024.271: 1.0512% ( 40) 00:07:04.726 6024.271 - 6049.477: 1.3212% ( 47) 00:07:04.726 6049.477 - 6074.683: 1.6027% ( 49) 00:07:04.726 6074.683 - 6099.889: 1.8842% ( 49) 00:07:04.726 6099.889 - 6125.095: 2.1140% ( 40) 00:07:04.726 6125.095 - 6150.302: 2.3150% ( 35) 00:07:04.726 6150.302 - 6175.508: 2.5046% ( 33) 00:07:04.726 6175.508 - 6200.714: 2.7344% ( 40) 00:07:04.726 6200.714 - 6225.920: 3.0044% ( 47) 00:07:04.726 6225.920 - 6251.126: 3.2801% ( 48) 00:07:04.726 6251.126 - 6276.332: 3.6592% ( 66) 00:07:04.726 6276.332 - 6301.538: 4.1360% ( 83) 00:07:04.726 6301.538 - 6326.745: 4.7679% ( 110) 00:07:04.726 6326.745 - 6351.951: 5.7330% ( 168) 00:07:04.726 6351.951 - 6377.157: 6.8302% ( 191) 00:07:04.726 6377.157 - 6402.363: 7.9733% ( 199) 00:07:04.726 6402.363 - 6427.569: 9.3118% ( 233) 00:07:04.726 6427.569 - 6452.775: 10.9720% ( 289) 00:07:04.726 6452.775 - 6503.188: 14.6829% ( 646) 00:07:04.726 6503.188 - 6553.600: 18.6926% ( 698) 00:07:04.726 6553.600 - 6604.012: 23.4260% ( 824) 00:07:04.726 6604.012 - 6654.425: 28.2227% ( 835) 00:07:04.726 6654.425 - 6704.837: 33.2204% ( 870) 00:07:04.726 6704.837 - 6755.249: 37.5402% ( 752) 00:07:04.726 6755.249 - 6805.662: 41.5326% ( 695) 00:07:04.726 6805.662 - 6856.074: 45.3757% ( 669) 00:07:04.726 6856.074 - 6906.486: 48.9947% ( 630) 00:07:04.726 6906.486 - 6956.898: 52.4701% ( 605) 00:07:04.726 6956.898 - 7007.311: 55.8077% ( 581) 00:07:04.726 7007.311 - 7057.723: 59.0591% ( 566) 00:07:04.726 7057.723 - 7108.135: 61.7934% ( 476) 00:07:04.726 7108.135 - 7158.548: 64.5106% ( 473) 00:07:04.726 7158.548 - 7208.960: 66.9922% ( 432) 00:07:04.726 7208.960 - 7259.372: 69.6978% ( 471) 00:07:04.726 7259.372 - 7309.785: 71.7084% ( 350) 00:07:04.726 7309.785 - 7360.197: 73.7132% ( 349) 00:07:04.726 7360.197 - 7410.609: 75.3274% ( 281) 00:07:04.726 7410.609 - 7461.022: 76.7004% ( 239) 00:07:04.726 7461.022 - 7511.434: 78.0905% ( 242) 00:07:04.726 7511.434 - 7561.846: 79.4118% ( 230) 00:07:04.726 7561.846 - 7612.258: 80.4400% ( 179) 00:07:04.726 7612.258 - 7662.671: 81.3477% ( 158) 00:07:04.726 7662.671 - 7713.083: 82.3300% ( 171) 00:07:04.726 7713.083 - 7763.495: 83.3525% ( 178) 00:07:04.726 7763.495 - 7813.908: 84.3176% ( 168) 00:07:04.726 7813.908 - 7864.320: 85.0758% ( 132) 00:07:04.726 7864.320 - 7914.732: 85.8513% ( 135) 00:07:04.726 7914.732 - 7965.145: 86.5522% ( 122) 00:07:04.726 7965.145 - 8015.557: 87.2300% ( 118) 00:07:04.726 8015.557 - 8065.969: 87.8676% ( 111) 00:07:04.726 8065.969 - 8116.382: 88.3904% ( 91) 00:07:04.727 8116.382 - 8166.794: 88.9821% ( 103) 00:07:04.727 8166.794 - 8217.206: 89.3497% ( 64) 00:07:04.727 8217.206 - 8267.618: 89.7633% ( 72) 00:07:04.727 8267.618 - 8318.031: 90.1654% ( 70) 00:07:04.727 8318.031 - 8368.443: 90.6020% ( 76) 00:07:04.727 8368.443 - 8418.855: 90.9582% ( 62) 00:07:04.727 8418.855 - 8469.268: 91.2799% ( 56) 00:07:04.727 8469.268 - 8519.680: 91.4982% ( 38) 00:07:04.727 8519.680 - 8570.092: 91.6418% ( 25) 00:07:04.727 8570.092 - 8620.505: 91.8543% ( 37) 00:07:04.727 8620.505 - 8670.917: 92.0324% ( 31) 00:07:04.727 8670.917 - 8721.329: 92.2277% ( 34) 00:07:04.727 8721.329 - 8771.742: 92.3828% ( 27) 00:07:04.727 8771.742 - 8822.154: 92.5494% ( 29) 00:07:04.727 8822.154 - 8872.566: 92.6643% ( 20) 00:07:04.727 8872.566 - 8922.978: 92.8366% ( 30) 00:07:04.727 8922.978 - 8973.391: 92.9975% ( 28) 00:07:04.727 8973.391 - 9023.803: 93.1296% ( 23) 00:07:04.727 9023.803 - 9074.215: 93.2790% ( 26) 00:07:04.727 9074.215 - 9124.628: 93.4513% ( 30) 00:07:04.727 9124.628 - 
9175.040: 93.5834% ( 23) 00:07:04.727 9175.040 - 9225.452: 93.8017% ( 38) 00:07:04.727 9225.452 - 9275.865: 93.8936% ( 16) 00:07:04.727 9275.865 - 9326.277: 93.9625% ( 12) 00:07:04.727 9326.277 - 9376.689: 94.0889% ( 22) 00:07:04.727 9376.689 - 9427.102: 94.2613% ( 30) 00:07:04.727 9427.102 - 9477.514: 94.4106% ( 26) 00:07:04.727 9477.514 - 9527.926: 94.5255% ( 20) 00:07:04.727 9527.926 - 9578.338: 94.6576% ( 23) 00:07:04.727 9578.338 - 9628.751: 94.8012% ( 25) 00:07:04.727 9628.751 - 9679.163: 94.8932% ( 16) 00:07:04.727 9679.163 - 9729.575: 95.1287% ( 41) 00:07:04.727 9729.575 - 9779.988: 95.3182% ( 33) 00:07:04.727 9779.988 - 9830.400: 95.5021% ( 32) 00:07:04.727 9830.400 - 9880.812: 95.6801% ( 31) 00:07:04.727 9880.812 - 9931.225: 95.8123% ( 23) 00:07:04.727 9931.225 - 9981.637: 95.9789% ( 29) 00:07:04.727 9981.637 - 10032.049: 96.1225% ( 25) 00:07:04.727 10032.049 - 10082.462: 96.2374% ( 20) 00:07:04.727 10082.462 - 10132.874: 96.3120% ( 13) 00:07:04.727 10132.874 - 10183.286: 96.3637% ( 9) 00:07:04.727 10183.286 - 10233.698: 96.4040% ( 7) 00:07:04.727 10233.698 - 10284.111: 96.4786% ( 13) 00:07:04.727 10284.111 - 10334.523: 96.5820% ( 18) 00:07:04.727 10334.523 - 10384.935: 96.6452% ( 11) 00:07:04.727 10384.935 - 10435.348: 96.7142% ( 12) 00:07:04.727 10435.348 - 10485.760: 96.8118% ( 17) 00:07:04.727 10485.760 - 10536.172: 96.8750% ( 11) 00:07:04.727 10536.172 - 10586.585: 96.9497% ( 13) 00:07:04.727 10586.585 - 10636.997: 97.0186% ( 12) 00:07:04.727 10636.997 - 10687.409: 97.0703% ( 9) 00:07:04.727 10687.409 - 10737.822: 97.1105% ( 7) 00:07:04.727 10737.822 - 10788.234: 97.1450% ( 6) 00:07:04.727 10788.234 - 10838.646: 97.1909% ( 8) 00:07:04.727 10838.646 - 10889.058: 97.2369% ( 8) 00:07:04.727 10889.058 - 10939.471: 97.2943% ( 10) 00:07:04.727 10939.471 - 10989.883: 97.3460% ( 9) 00:07:04.727 10989.883 - 11040.295: 97.3920% ( 8) 00:07:04.727 11040.295 - 11090.708: 97.4494% ( 10) 00:07:04.727 11090.708 - 11141.120: 97.4782% ( 5) 00:07:04.727 11141.120 - 11191.532: 97.5069% ( 5) 00:07:04.727 11191.532 - 11241.945: 97.5356% ( 5) 00:07:04.727 11241.945 - 11292.357: 97.5701% ( 6) 00:07:04.727 11292.357 - 11342.769: 97.5931% ( 4) 00:07:04.727 11342.769 - 11393.182: 97.6160% ( 4) 00:07:04.727 11393.182 - 11443.594: 97.6562% ( 7) 00:07:04.727 11443.594 - 11494.006: 97.6965% ( 7) 00:07:04.727 11494.006 - 11544.418: 97.7252% ( 5) 00:07:04.727 11544.418 - 11594.831: 97.7654% ( 7) 00:07:04.727 11594.831 - 11645.243: 97.7999% ( 6) 00:07:04.727 11645.243 - 11695.655: 97.8286% ( 5) 00:07:04.727 11695.655 - 11746.068: 97.8631% ( 6) 00:07:04.727 11746.068 - 11796.480: 97.8975% ( 6) 00:07:04.727 11796.480 - 11846.892: 97.9262% ( 5) 00:07:04.727 11846.892 - 11897.305: 97.9435% ( 3) 00:07:04.727 11897.305 - 11947.717: 97.9607% ( 3) 00:07:04.727 11947.717 - 11998.129: 97.9779% ( 3) 00:07:04.727 11998.129 - 12048.542: 97.9952% ( 3) 00:07:04.727 12048.542 - 12098.954: 98.0124% ( 3) 00:07:04.727 12098.954 - 12149.366: 98.1273% ( 20) 00:07:04.727 12149.366 - 12199.778: 98.1675% ( 7) 00:07:04.727 12199.778 - 12250.191: 98.1962% ( 5) 00:07:04.727 12250.191 - 12300.603: 98.2250% ( 5) 00:07:04.727 12300.603 - 12351.015: 98.2537% ( 5) 00:07:04.727 12351.015 - 12401.428: 98.2881% ( 6) 00:07:04.727 12401.428 - 12451.840: 98.3054% ( 3) 00:07:04.727 12451.840 - 12502.252: 98.3341% ( 5) 00:07:04.727 12502.252 - 12552.665: 98.3743% ( 7) 00:07:04.727 12552.665 - 12603.077: 98.3973% ( 4) 00:07:04.727 12603.077 - 12653.489: 98.4145% ( 3) 00:07:04.727 12653.489 - 12703.902: 98.4260% ( 2) 00:07:04.727 12703.902 - 
12754.314: 98.4490% ( 4) 00:07:04.727 12754.314 - 12804.726: 98.4720% ( 4) 00:07:04.727 12804.726 - 12855.138: 98.4835% ( 2) 00:07:04.727 12855.138 - 12905.551: 98.5007% ( 3) 00:07:04.727 12905.551 - 13006.375: 98.5294% ( 5) 00:07:04.727 13006.375 - 13107.200: 98.5352% ( 1) 00:07:04.727 13107.200 - 13208.025: 98.6041% ( 12) 00:07:04.727 13208.025 - 13308.849: 98.7017% ( 17) 00:07:04.727 13308.849 - 13409.674: 98.8109% ( 19) 00:07:04.727 13409.674 - 13510.498: 98.8741% ( 11) 00:07:04.727 13510.498 - 13611.323: 98.9143% ( 7) 00:07:04.727 13611.323 - 13712.148: 98.9488% ( 6) 00:07:04.727 13712.148 - 13812.972: 98.9832% ( 6) 00:07:04.727 13812.972 - 13913.797: 99.0119% ( 5) 00:07:04.727 13913.797 - 14014.622: 99.0349% ( 4) 00:07:04.727 14014.622 - 14115.446: 99.0522% ( 3) 00:07:04.727 14115.446 - 14216.271: 99.0694% ( 3) 00:07:04.727 14216.271 - 14317.095: 99.0924% ( 4) 00:07:04.727 14317.095 - 14417.920: 99.1039% ( 2) 00:07:04.727 14417.920 - 14518.745: 99.1211% ( 3) 00:07:04.727 14518.745 - 14619.569: 99.1383% ( 3) 00:07:04.727 14619.569 - 14720.394: 99.1613% ( 4) 00:07:04.727 14720.394 - 14821.218: 99.1843% ( 4) 00:07:04.727 14821.218 - 14922.043: 99.2015% ( 3) 00:07:04.727 14922.043 - 15022.868: 99.2188% ( 3) 00:07:04.727 15022.868 - 15123.692: 99.2360% ( 3) 00:07:04.727 15123.692 - 15224.517: 99.2532% ( 3) 00:07:04.727 15224.517 - 15325.342: 99.2647% ( 2) 00:07:04.727 25710.277 - 25811.102: 99.3049% ( 7) 00:07:04.727 25811.102 - 26012.751: 99.3796% ( 13) 00:07:04.727 26012.751 - 26214.400: 99.4083% ( 5) 00:07:04.727 26214.400 - 26416.049: 99.4600% ( 9) 00:07:04.727 26416.049 - 26617.698: 99.5002% ( 7) 00:07:04.727 26617.698 - 26819.348: 99.5404% ( 7) 00:07:04.727 26819.348 - 27020.997: 99.5864% ( 8) 00:07:04.727 27020.997 - 27222.646: 99.6266% ( 7) 00:07:04.727 27222.646 - 27424.295: 99.6324% ( 1) 00:07:04.727 30650.683 - 30852.332: 99.6496% ( 3) 00:07:04.727 30852.332 - 31053.982: 99.6955% ( 8) 00:07:04.727 31053.982 - 31255.631: 99.7358% ( 7) 00:07:04.727 31255.631 - 31457.280: 99.7817% ( 8) 00:07:04.727 31457.280 - 31658.929: 99.8277% ( 8) 00:07:04.727 31658.929 - 31860.578: 99.8736% ( 8) 00:07:04.727 31860.578 - 32062.228: 99.9196% ( 8) 00:07:04.727 32062.228 - 32263.877: 99.9598% ( 7) 00:07:04.727 32263.877 - 32465.526: 100.0000% ( 7) 00:07:04.727 00:07:04.727 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:04.727 ============================================================================== 00:07:04.727 Range in us Cumulative IO count 00:07:04.727 5721.797 - 5747.003: 0.0057% ( 1) 00:07:04.727 5747.003 - 5772.209: 0.0115% ( 1) 00:07:04.727 5772.209 - 5797.415: 0.0172% ( 1) 00:07:04.727 5797.415 - 5822.622: 0.0230% ( 1) 00:07:04.727 5822.622 - 5847.828: 0.0287% ( 1) 00:07:04.727 5898.240 - 5923.446: 0.0632% ( 6) 00:07:04.727 5923.446 - 5948.652: 0.1264% ( 11) 00:07:04.727 5948.652 - 5973.858: 0.2298% ( 18) 00:07:04.728 5973.858 - 5999.065: 0.3447% ( 20) 00:07:04.728 5999.065 - 6024.271: 0.4998% ( 27) 00:07:04.728 6024.271 - 6049.477: 0.8042% ( 53) 00:07:04.728 6049.477 - 6074.683: 1.0225% ( 38) 00:07:04.728 6074.683 - 6099.889: 1.2695% ( 43) 00:07:04.728 6099.889 - 6125.095: 1.6716% ( 70) 00:07:04.728 6125.095 - 6150.302: 1.9187% ( 43) 00:07:04.728 6150.302 - 6175.508: 2.2289% ( 54) 00:07:04.728 6175.508 - 6200.714: 2.6252% ( 69) 00:07:04.728 6200.714 - 6225.920: 2.8952% ( 47) 00:07:04.728 6225.920 - 6251.126: 3.1997% ( 53) 00:07:04.728 6251.126 - 6276.332: 3.4409% ( 42) 00:07:04.728 6276.332 - 6301.538: 3.9062% ( 81) 00:07:04.728 6301.538 - 6326.745: 4.3371% ( 
75) 00:07:04.728 6326.745 - 6351.951: 4.6186% ( 49) 00:07:04.728 6351.951 - 6377.157: 4.9345% ( 55) 00:07:04.728 6377.157 - 6402.363: 5.3653% ( 75) 00:07:04.728 6402.363 - 6427.569: 6.0317% ( 116) 00:07:04.728 6427.569 - 6452.775: 6.8647% ( 145) 00:07:04.728 6452.775 - 6503.188: 9.0935% ( 388) 00:07:04.728 6503.188 - 6553.600: 12.5460% ( 601) 00:07:04.728 6553.600 - 6604.012: 17.5379% ( 869) 00:07:04.728 6604.012 - 6654.425: 22.6562% ( 891) 00:07:04.728 6654.425 - 6704.837: 28.2514% ( 974) 00:07:04.728 6704.837 - 6755.249: 34.4554% ( 1080) 00:07:04.728 6755.249 - 6805.662: 40.1999% ( 1000) 00:07:04.728 6805.662 - 6856.074: 46.2776% ( 1058) 00:07:04.728 6856.074 - 6906.486: 51.6027% ( 927) 00:07:04.728 6906.486 - 6956.898: 56.2500% ( 809) 00:07:04.728 6956.898 - 7007.311: 59.9035% ( 636) 00:07:04.728 7007.311 - 7057.723: 63.0572% ( 549) 00:07:04.728 7057.723 - 7108.135: 65.9869% ( 510) 00:07:04.728 7108.135 - 7158.548: 68.0836% ( 365) 00:07:04.728 7158.548 - 7208.960: 70.1517% ( 360) 00:07:04.728 7208.960 - 7259.372: 71.6337% ( 258) 00:07:04.728 7259.372 - 7309.785: 73.5926% ( 341) 00:07:04.728 7309.785 - 7360.197: 75.1091% ( 264) 00:07:04.728 7360.197 - 7410.609: 76.3729% ( 220) 00:07:04.728 7410.609 - 7461.022: 77.5276% ( 201) 00:07:04.728 7461.022 - 7511.434: 78.5271% ( 174) 00:07:04.728 7511.434 - 7561.846: 80.0896% ( 272) 00:07:04.728 7561.846 - 7612.258: 81.3017% ( 211) 00:07:04.728 7612.258 - 7662.671: 82.1002% ( 139) 00:07:04.728 7662.671 - 7713.083: 82.8470% ( 130) 00:07:04.728 7713.083 - 7763.495: 83.5248% ( 118) 00:07:04.728 7763.495 - 7813.908: 84.0878% ( 98) 00:07:04.728 7813.908 - 7864.320: 84.9265% ( 146) 00:07:04.728 7864.320 - 7914.732: 85.6675% ( 129) 00:07:04.728 7914.732 - 7965.145: 86.2822% ( 107) 00:07:04.728 7965.145 - 8015.557: 86.9830% ( 122) 00:07:04.728 8015.557 - 8065.969: 87.6608% ( 118) 00:07:04.728 8065.969 - 8116.382: 88.4995% ( 146) 00:07:04.728 8116.382 - 8166.794: 89.0051% ( 88) 00:07:04.728 8166.794 - 8217.206: 89.3382% ( 58) 00:07:04.728 8217.206 - 8267.618: 89.6369% ( 52) 00:07:04.728 8267.618 - 8318.031: 90.1195% ( 84) 00:07:04.728 8318.031 - 8368.443: 90.4010% ( 49) 00:07:04.728 8368.443 - 8418.855: 90.6939% ( 51) 00:07:04.728 8418.855 - 8469.268: 90.9524% ( 45) 00:07:04.728 8469.268 - 8519.680: 91.1477% ( 34) 00:07:04.728 8519.680 - 8570.092: 91.4465% ( 52) 00:07:04.728 8570.092 - 8620.505: 91.6820% ( 41) 00:07:04.728 8620.505 - 8670.917: 91.9290% ( 43) 00:07:04.728 8670.917 - 8721.329: 92.2564% ( 57) 00:07:04.728 8721.329 - 8771.742: 92.4862% ( 40) 00:07:04.728 8771.742 - 8822.154: 92.7160% ( 40) 00:07:04.728 8822.154 - 8872.566: 92.9228% ( 36) 00:07:04.728 8872.566 - 8922.978: 93.1928% ( 47) 00:07:04.728 8922.978 - 8973.391: 93.4455% ( 44) 00:07:04.728 8973.391 - 9023.803: 93.5777% ( 23) 00:07:04.728 9023.803 - 9074.215: 93.7040% ( 22) 00:07:04.728 9074.215 - 9124.628: 93.8821% ( 31) 00:07:04.728 9124.628 - 9175.040: 94.0545% ( 30) 00:07:04.728 9175.040 - 9225.452: 94.2325% ( 31) 00:07:04.728 9225.452 - 9275.865: 94.3417% ( 19) 00:07:04.728 9275.865 - 9326.277: 94.4566% ( 20) 00:07:04.728 9326.277 - 9376.689: 94.6978% ( 42) 00:07:04.728 9376.689 - 9427.102: 94.8932% ( 34) 00:07:04.728 9427.102 - 9477.514: 94.9908% ( 17) 00:07:04.728 9477.514 - 9527.926: 95.2091% ( 38) 00:07:04.728 9527.926 - 9578.338: 95.3182% ( 19) 00:07:04.728 9578.338 - 9628.751: 95.3757% ( 10) 00:07:04.728 9628.751 - 9679.163: 95.4216% ( 8) 00:07:04.728 9679.163 - 9729.575: 95.4791% ( 10) 00:07:04.728 9729.575 - 9779.988: 95.5365% ( 10) 00:07:04.728 9779.988 - 9830.400: 
00:07:04.728 [latency histogram continues; per-bucket lines from 9830.400 us upward elided, cumulative IO count reaching 100.0000% ( 7) in the 30449.034 - 30650.683 us bucket]
00:07:04.729
00:07:04.729 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0:
00:07:04.729 ==============================================================================
00:07:04.729        Range in us     Cumulative    IO count
00:07:04.729 [per-bucket lines elided; first bucket 5721.797 - 5747.003: 0.0172% ( 3), cumulative IO count reaching 100.0000% ( 6) in the 29239.138 - 29440.788 us bucket]
00:07:04.729
00:07:04.729 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0:
00:07:04.730 ==============================================================================
00:07:04.730        Range in us     Cumulative    IO count
00:07:04.730 [per-bucket lines elided; first bucket 5671.385 - 5696.591: 0.0057% ( 1), cumulative IO count reaching 100.0000% ( 1) in the 27625.945 - 27827.594 us bucket]
00:07:04.732
00:07:04.732 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:07:04.732 ==============================================================================
00:07:04.732        Range in us     Cumulative    IO count
00:07:04.732 [per-bucket lines elided; first bucket 5772.209 - 5797.415: 0.0115% ( 2), cumulative IO count reaching 100.0000% ( 4) in the 25811.102 - 26012.751 us bucket]
00:07:04.733
00:07:04.733 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0:
00:07:04.733 ==============================================================================
00:07:04.733        Range in us     Cumulative    IO count
00:07:04.733 [per-bucket lines elided; first bucket 5646.178 - 5671.385: 0.0057% ( 1), cumulative IO count reaching 100.0000% ( 4) in the 20568.222 - 20669.046 us bucket]
00:07:04.734
01:20:00 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']'
00:07:04.734
00:07:04.734 real    0m2.498s
00:07:04.734 user    0m2.205s
00:07:04.734 sys     0m0.194s
01:20:00 nvme.nvme_perf -- common/autotest_common.sh@1126 -- # xtrace_disable
01:20:00 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x
************************************
END TEST nvme_perf
************************************
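[editor's note] The latency tables above ("Range in us ... Cumulative IO count") are perf's bucketed completion-latency report. What follows is a minimal, self-contained sketch of that reporting scheme only, not SPDK's implementation: the real tool bins raw TSC ticks on a log2 scale (via its histogram facility), whereas the fixed 250 us bucket width and the sample latencies here are assumptions chosen for brevity.

#include <stdio.h>
#include <stdint.h>

#define NBUCKETS  64
#define WIDTH_US  250.0   /* assumed fixed bucket width; the real tool uses log2 TSC buckets */

static uint64_t bucket[NBUCKETS];
static uint64_t total;

/* Bin one completion latency, clamping outliers into the last bucket. */
static void record_us(double lat_us)
{
	unsigned i = (unsigned)(lat_us / WIDTH_US);

	if (i >= NBUCKETS) {
		i = NBUCKETS - 1;
	}
	bucket[i]++;
	total++;
}

/* Walk the buckets, skipping empty ones, printing a running cumulative %. */
static void print_cumulative(void)
{
	uint64_t cum = 0;

	printf("       Range in us     Cumulative    IO count\n");
	for (unsigned i = 0; i < NBUCKETS; i++) {
		if (bucket[i] == 0) {
			continue;   /* perf's report also skips empty buckets */
		}
		cum += bucket[i];
		printf("%10.3f - %10.3f: %8.4f%% (%5ju)\n",
		       i * WIDTH_US, (i + 1) * WIDTH_US,
		       100.0 * (double)cum / (double)total, (uintmax_t)bucket[i]);
	}
}

int main(void)
{
	/* synthetic completion latencies in microseconds (made-up values) */
	double samples[] = { 120.0, 430.5, 433.2, 6100.0, 6950.0, 9831.0 };

	for (unsigned i = 0; i < sizeof(samples) / sizeof(samples[0]); i++) {
		record_us(samples[i]);
	}
	print_cumulative();
	return 0;
}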
01:20:00 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
01:20:00 nvme -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
01:20:00 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
01:20:00 nvme -- common/autotest_common.sh@10 -- # set +x
************************************
START TEST nvme_hello_world
************************************
01:20:00 nvme.nvme_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:07:04.734 Initializing NVMe Controllers
00:07:04.734 Attached to 0000:00:10.0
00:07:04.734   Namespace ID: 1 size: 6GB
00:07:04.734 Attached to 0000:00:11.0
00:07:04.734   Namespace ID: 1 size: 5GB
00:07:04.734 Attached to 0000:00:13.0
00:07:04.734   Namespace ID: 1 size: 1GB
00:07:04.734 Attached to 0000:00:12.0
00:07:04.734   Namespace ID: 1 size: 4GB
00:07:04.734   Namespace ID: 2 size: 4GB
00:07:04.734   Namespace ID: 3 size: 4GB
00:07:04.734 Initialization complete.
00:07:04.734 INFO: using host memory buffer for IO
00:07:04.734 Hello world!
00:07:04.734 INFO: using host memory buffer for IO
00:07:04.734 Hello world!
00:07:04.734 INFO: using host memory buffer for IO
00:07:04.734 Hello world!
00:07:04.734 INFO: using host memory buffer for IO
00:07:04.734 Hello world!
00:07:04.734 INFO: using host memory buffer for IO
00:07:04.734 Hello world!
00:07:04.734 INFO: using host memory buffer for IO
00:07:04.734 Hello world!
00:07:04.734
00:07:04.734 real    0m0.202s
00:07:04.734 user    0m0.073s
00:07:04.734 sys     0m0.091s
01:20:00 nvme.nvme_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable
01:20:00 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x
************************************
END TEST nvme_hello_world
************************************
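[editor's note] The "Attached to" and "Namespace ID: n size: NGB" lines above come from SPDK's standard probe/attach flow. Below is a trimmed sketch of that flow, not the example's actual source: it assumes the SPDK headers and env library of the release under test, keeps only controller attach and namespace enumeration (the hello-world write/read itself is omitted), and the program name string is made up.

#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

static bool
probe_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
	 struct spdk_nvme_ctrlr_opts *opts)
{
	return true;	/* attach to every controller the probe finds */
}

static void
attach_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
	  struct spdk_nvme_ctrlr *ctrlr, const struct spdk_nvme_ctrlr_opts *opts)
{
	printf("Attached to %s\n", trid->traddr);
	/* Enumerate active namespaces, as the example's size lines do. */
	for (uint32_t nsid = spdk_nvme_ctrlr_get_first_active_ns(ctrlr);
	     nsid != 0;
	     nsid = spdk_nvme_ctrlr_get_next_active_ns(ctrlr, nsid)) {
		struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, nsid);
		printf("  Namespace ID: %u size: %juGB\n", nsid,
		       (uintmax_t)(spdk_nvme_ns_get_size(ns) / 1000000000));
	}
}

int main(void)
{
	struct spdk_env_opts opts;

	spdk_env_opts_init(&opts);
	opts.name = "hello_world_sketch";	/* hypothetical app name */
	if (spdk_env_init(&opts) < 0) {
		return 1;
	}
	printf("Initializing NVMe Controllers\n");
	/* NULL transport id probes all local PCIe controllers. */
	if (spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL) != 0) {
		return 1;
	}
	printf("Initialization complete.\n");
	return 0;
}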
01:20:00 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
01:20:00 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
01:20:00 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
01:20:00 nvme -- common/autotest_common.sh@10 -- # set +x
************************************
START TEST nvme_sgl
************************************
01:20:00 nvme.nvme_sgl -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:07:04.992 0000:00:10.0: build_io_request_0 Invalid IO length parameter
00:07:04.992 0000:00:10.0: build_io_request_1 Invalid IO length parameter
00:07:04.992 0000:00:10.0: build_io_request_3 Invalid IO length parameter
00:07:04.992 0000:00:10.0: build_io_request_8 Invalid IO length parameter
00:07:04.992 0000:00:10.0: build_io_request_9 Invalid IO length parameter
00:07:04.992 0000:00:10.0: build_io_request_11 Invalid IO length parameter
00:07:04.992 0000:00:11.0: build_io_request_0 Invalid IO length parameter
00:07:04.992 0000:00:11.0: build_io_request_1 Invalid IO length parameter
00:07:04.992 0000:00:11.0: build_io_request_3 Invalid IO length parameter
00:07:04.992 0000:00:11.0: build_io_request_8 Invalid IO length parameter
00:07:04.992 0000:00:11.0: build_io_request_9 Invalid IO length parameter
00:07:04.992 0000:00:11.0: build_io_request_11 Invalid IO length parameter
00:07:04.992 0000:00:13.0: build_io_request_0 Invalid IO length parameter
00:07:04.992 0000:00:13.0: build_io_request_1 Invalid IO length parameter
00:07:04.992 0000:00:13.0: build_io_request_2 Invalid IO length parameter
00:07:04.992 0000:00:13.0: build_io_request_3 Invalid IO length parameter
00:07:04.992 0000:00:13.0: build_io_request_4 Invalid IO length parameter
00:07:04.992 0000:00:13.0: build_io_request_5 Invalid IO length parameter
00:07:04.992 0000:00:13.0: build_io_request_6 Invalid IO length parameter
00:07:04.992 0000:00:13.0: build_io_request_7 Invalid IO length parameter
00:07:04.992 0000:00:13.0: build_io_request_8 Invalid IO length parameter
00:07:04.992 0000:00:13.0: build_io_request_9 Invalid IO length parameter
00:07:04.992 0000:00:13.0: build_io_request_10 Invalid IO length parameter
00:07:04.992 0000:00:13.0: build_io_request_11 Invalid IO length parameter
00:07:04.992 0000:00:12.0: build_io_request_0 Invalid IO length parameter
00:07:04.993 0000:00:12.0: build_io_request_1 Invalid IO length parameter
00:07:04.993 0000:00:12.0: build_io_request_2 Invalid IO length parameter
00:07:04.993 0000:00:12.0: build_io_request_3 Invalid IO length parameter
00:07:04.993 0000:00:12.0: build_io_request_4 Invalid IO length parameter
00:07:04.993 0000:00:12.0: build_io_request_5 Invalid IO length parameter
00:07:04.993 0000:00:12.0: build_io_request_6 Invalid IO length parameter
00:07:04.993 0000:00:12.0: build_io_request_7 Invalid IO length parameter
00:07:04.993 0000:00:12.0: build_io_request_8 Invalid IO length parameter
00:07:04.993 0000:00:12.0: build_io_request_9 Invalid IO length parameter
00:07:04.993 0000:00:12.0: build_io_request_10 Invalid IO length parameter
00:07:04.993 0000:00:12.0: build_io_request_11 Invalid IO length parameter
00:07:04.993 NVMe Readv/Writev Request test
00:07:04.993 Attached to 0000:00:10.0
00:07:04.993 Attached to 0000:00:11.0
00:07:04.993 Attached to 0000:00:13.0
00:07:04.993 Attached to 0000:00:12.0
00:07:04.993 0000:00:10.0: build_io_request_2 test passed
00:07:04.993 0000:00:10.0: build_io_request_4 test passed
00:07:04.993 0000:00:10.0: build_io_request_5 test passed
00:07:04.993 0000:00:10.0: build_io_request_6 test passed
00:07:04.993 0000:00:10.0: build_io_request_7 test passed
00:07:04.993 0000:00:10.0: build_io_request_10 test passed
00:07:04.993 0000:00:11.0: build_io_request_2 test passed
00:07:04.993 0000:00:11.0: build_io_request_4 test passed
00:07:04.993 0000:00:11.0: build_io_request_5 test passed
00:07:04.993 0000:00:11.0: build_io_request_6 test passed
00:07:04.993 0000:00:11.0: build_io_request_7 test passed
00:07:04.993 0000:00:11.0: build_io_request_10 test passed
00:07:04.993 Cleaning up...
00:07:04.993
00:07:04.993 real    0m0.270s
00:07:04.993 user    0m0.132s
00:07:04.993 sys     0m0.098s
01:20:00 nvme.nvme_sgl -- common/autotest_common.sh@1126 -- # xtrace_disable
01:20:00 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x
************************************
END TEST nvme_sgl
************************************
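[editor's note] Each build_io_request_N case above assembles a scatter-gather payload and submits it through the readv/writev path; the "Invalid IO length parameter" lines are the expected rejections when the SGL's total byte count does not divide into the requested LBA count. A self-contained sketch of that length check follows; the iovec layout and the 512-byte sector size are illustrative assumptions, not the test's actual buffers.

#include <stdio.h>
#include <stdint.h>
#include <sys/uio.h>

#define SECTOR_SIZE 512u	/* assumed LBA format */

/* Mirror of the driver-side check: an SGL is only valid for an IO of
 * lba_count blocks if its segments add up to exactly that many bytes. */
static int
sgl_length_valid(const struct iovec *iov, int iovcnt, uint32_t lba_count)
{
	uint64_t total = 0;

	for (int i = 0; i < iovcnt; i++) {
		total += iov[i].iov_len;
	}
	return total == (uint64_t)lba_count * SECTOR_SIZE;
}

int main(void)
{
	char a[1024], b[512], c[100];
	struct iovec ok[]  = { { a, sizeof(a) }, { b, sizeof(b) } }; /* 1536 B = 3 LBAs */
	struct iovec bad[] = { { a, sizeof(a) }, { c, sizeof(c) } }; /* 1124 B: not a whole LBA count */

	printf("request 0: %s\n", sgl_length_valid(ok, 2, 3)
	       ? "test passed" : "Invalid IO length parameter");
	printf("request 1: %s\n", sgl_length_valid(bad, 2, 3)
	       ? "test passed" : "Invalid IO length parameter");
	return 0;
}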
01:20:00 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
01:20:00 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
01:20:00 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
01:20:00 nvme -- common/autotest_common.sh@10 -- # set +x
************************************
START TEST nvme_e2edp
************************************
01:20:00 nvme.nvme_e2edp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:07:05.251 NVMe Write/Read with End-to-End data protection test
00:07:05.251 Attached to 0000:00:10.0
00:07:05.251 Attached to 0000:00:11.0
00:07:05.251 Attached to 0000:00:13.0
00:07:05.251 Attached to 0000:00:12.0
00:07:05.251 Cleaning up...
00:07:05.251
00:07:05.251 real    0m0.198s
00:07:05.251 user    0m0.069s
00:07:05.251 sys     0m0.087s
01:20:01 nvme.nvme_e2edp -- common/autotest_common.sh@1126 -- # xtrace_disable
01:20:01 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x
************************************
END TEST nvme_e2edp
************************************
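[editor's note] nvme_dp exercises writes and reads with protection information enabled, where each data block carries an 8-byte PI tuple that the controller checks end to end. The sketch below is the generic T10 DIF guard-tag computation (CRC-16, polynomial 0x8BB7), not the test's code; the 512-byte block size, fill pattern, and reference tag value are assumptions.

#include <stdio.h>
#include <stdint.h>
#include <string.h>

/* T10 DIF guard tag: CRC-16 over the block's data, polynomial 0x8BB7,
 * initial value 0x0000, no bit reflection (bitwise reference form). */
static uint16_t
crc16_t10dif(const uint8_t *buf, size_t len)
{
	uint16_t crc = 0;

	for (size_t i = 0; i < len; i++) {
		crc ^= (uint16_t)buf[i] << 8;
		for (int bit = 0; bit < 8; bit++) {
			crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x8BB7)
					     : (uint16_t)(crc << 1);
		}
	}
	return crc;
}

/* The 8-byte protection information appended to each data block. */
struct t10_pi_tuple {
	uint16_t guard;		/* CRC of the block's data */
	uint16_t app_tag;	/* application-defined */
	uint32_t ref_tag;	/* typically the low 32 bits of the LBA */
};

int main(void)
{
	uint8_t block[512];
	struct t10_pi_tuple pi;

	memset(block, 0xA5, sizeof(block));	/* made-up data pattern */
	pi.guard = crc16_t10dif(block, sizeof(block));
	pi.app_tag = 0;
	pi.ref_tag = 42;	/* hypothetical LBA of this block */
	printf("guard=0x%04x app=0x%04x ref=%u\n", pi.guard, pi.app_tag, pi.ref_tag);
	return 0;
}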
01:20:01 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
01:20:01 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
01:20:01 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
01:20:01 nvme -- common/autotest_common.sh@10 -- # set +x
************************************
START TEST nvme_reserve
************************************
01:20:01 nvme.nvme_reserve -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:07:05.510 =====================================================
00:07:05.510 NVMe Controller at PCI bus 0, device 16, function 0
00:07:05.510 =====================================================
00:07:05.510 Reservations:                Not Supported
00:07:05.510 =====================================================
00:07:05.510 NVMe Controller at PCI bus 0, device 17, function 0
00:07:05.510 =====================================================
00:07:05.510 Reservations:                Not Supported
00:07:05.510 =====================================================
00:07:05.510 NVMe Controller at PCI bus 0, device 19, function 0
00:07:05.510 =====================================================
00:07:05.510 Reservations:                Not Supported
00:07:05.510 =====================================================
00:07:05.510 NVMe Controller at PCI bus 0, device 18, function 0
00:07:05.510 =====================================================
00:07:05.510 Reservations:                Not Supported
00:07:05.510 Reservation test passed
00:07:05.510
00:07:05.510 real    0m0.187s
00:07:05.510 user    0m0.063s
00:07:05.510 sys     0m0.085s
01:20:01 nvme.nvme_reserve -- common/autotest_common.sh@1126 -- # xtrace_disable
01:20:01 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x
************************************
END TEST nvme_reserve
************************************
01:20:01 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
01:20:01 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
01:20:01 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
01:20:01 nvme -- common/autotest_common.sh@10 -- # set +x
************************************
START TEST nvme_err_injection
************************************
01:20:01 nvme.nvme_err_injection -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:07:05.769 NVMe Error Injection test
00:07:05.769 Attached to 0000:00:10.0
00:07:05.769 Attached to 0000:00:11.0
00:07:05.769 Attached to 0000:00:13.0
00:07:05.769 Attached to 0000:00:12.0
00:07:05.769 0000:00:10.0: get features failed as expected
00:07:05.769 0000:00:11.0: get features failed as expected
00:07:05.769 0000:00:13.0: get features failed as expected
00:07:05.769 0000:00:12.0: get features failed as expected
00:07:05.769 0000:00:10.0: get features successfully as expected
00:07:05.769 0000:00:11.0: get features successfully as expected
00:07:05.769 0000:00:13.0: get features successfully as expected
00:07:05.769 0000:00:12.0: get features successfully as expected
00:07:05.769 0000:00:10.0: read failed as expected
00:07:05.769 0000:00:11.0: read failed as expected
00:07:05.769 0000:00:13.0: read failed as expected
00:07:05.769 0000:00:12.0: read failed as expected
00:07:05.769 0000:00:12.0: read successfully as expected
00:07:05.769 0000:00:10.0: read successfully as expected
00:07:05.769 0000:00:11.0: read successfully as expected
00:07:05.769 0000:00:13.0: read successfully as expected
00:07:05.769 Cleaning up...
00:07:05.769
00:07:05.769 real    0m0.225s
00:07:05.769 user    0m0.084s
00:07:05.769 sys     0m0.093s
01:20:01 nvme.nvme_err_injection -- common/autotest_common.sh@1126 -- # xtrace_disable
01:20:01 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x
************************************
END TEST nvme_err_injection
************************************
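[editor's note] The "failed as expected / successfully as expected" pairs above come from arming an injected fault, issuing the command and asserting failure, then re-issuing it after the injected error count is exhausted and asserting success. A fragment sketching that pattern with SPDK's public error-injection hook is below; it assumes an already-attached ctrlr/qpair and the hook's signature as published in SPDK's nvme.h, and the opcode and status codes chosen are illustrative, not the test's exact values.

#include <stdio.h>
#include "spdk/nvme.h"

/* Arm one injected failure on READ: the next matching completion must fail
 * ("read failed as expected"); once the injection count is used up, a retry
 * must succeed ("read successfully as expected"). */
static int
expect_one_read_failure(struct spdk_nvme_ctrlr *ctrlr, struct spdk_nvme_qpair *qpair)
{
	int rc;

	rc = spdk_nvme_qpair_add_cmd_error_injection(ctrlr, qpair,
						     SPDK_NVME_OPC_READ,
						     false,	/* still submit to the device */
						     0,		/* no injected timeout */
						     1,		/* inject exactly one error */
						     SPDK_NVME_SCT_GENERIC,
						     SPDK_NVME_SC_INVALID_FIELD);
	if (rc != 0) {
		return rc;
	}
	/* ... submit a read, poll completions, assert the first attempt fails,
	 * then submit again and assert success (omitted here) ... */
	return 0;
}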
01:20:01 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
01:20:01 nvme -- common/autotest_common.sh@1101 -- # '[' 9 -le 1 ']'
01:20:01 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
01:20:01 nvme -- common/autotest_common.sh@10 -- # set +x
************************************
START TEST nvme_overhead
************************************
01:20:01 nvme.nvme_overhead -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:07:07.141 Initializing NVMe Controllers
00:07:07.141 Attached to 0000:00:10.0
00:07:07.141 Attached to 0000:00:11.0
00:07:07.141 Attached to 0000:00:13.0
00:07:07.141 Attached to 0000:00:12.0
00:07:07.141 Initialization complete. Launching workers.
00:07:07.141 submit (in ns)   avg, min, max =  11192.3,   9751.5, 205676.2
00:07:07.141 complete (in ns) avg, min, max =   7530.8,   7197.7, 194034.6
00:07:07.141
00:07:07.141 Submit histogram
00:07:07.141 ================
00:07:07.141        Range in us     Cumulative     Count
00:07:07.141 [per-bucket lines elided; first bucket 9.748 - 9.797: 0.0056% ( 1), cumulative count reaching 100.0000% ( 1) in the 204.800 - 206.375 us bucket]
00:07:07.142
00:07:07.142 Complete histogram
00:07:07.142 ==================
00:07:07.142        Range in us     Cumulative     Count
00:07:07.142 [per-bucket lines elided; first bucket 7.188 - 7.237: 0.2998% ( 54), cumulative count reaching 100.0000% ( 1) in the 193.772 - 194.560 us bucket]
00:07:07.143
00:07:07.143 real    0m1.211s
00:07:07.143 user    0m1.068s
00:07:07.143 sys     0m0.097s
01:20:02 nvme.nvme_overhead -- common/autotest_common.sh@1126 -- # xtrace_disable
01:20:02 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x
************************************
END TEST nvme_overhead
************************************
99.7390% ( 1) 00:07:07.143 15.754 - 15.852: 99.7502% ( 2) 00:07:07.143 15.951 - 16.049: 99.7613% ( 2) 00:07:07.143 16.049 - 16.148: 99.7668% ( 1) 00:07:07.143 16.246 - 16.345: 99.7724% ( 1) 00:07:07.143 16.542 - 16.640: 99.7779% ( 1) 00:07:07.143 16.640 - 16.738: 99.7835% ( 1) 00:07:07.143 16.738 - 16.837: 99.7890% ( 1) 00:07:07.143 16.837 - 16.935: 99.7946% ( 1) 00:07:07.143 17.034 - 17.132: 99.8001% ( 1) 00:07:07.143 17.132 - 17.231: 99.8057% ( 1) 00:07:07.143 17.231 - 17.329: 99.8112% ( 1) 00:07:07.143 17.329 - 17.428: 99.8168% ( 1) 00:07:07.143 17.625 - 17.723: 99.8334% ( 3) 00:07:07.143 17.723 - 17.822: 99.8445% ( 2) 00:07:07.143 17.822 - 17.920: 99.8501% ( 1) 00:07:07.143 18.117 - 18.215: 99.8556% ( 1) 00:07:07.143 18.215 - 18.314: 99.8612% ( 1) 00:07:07.143 18.314 - 18.412: 99.8667% ( 1) 00:07:07.143 18.806 - 18.905: 99.8723% ( 1) 00:07:07.143 19.003 - 19.102: 99.8779% ( 1) 00:07:07.143 19.102 - 19.200: 99.8834% ( 1) 00:07:07.143 19.200 - 19.298: 99.8890% ( 1) 00:07:07.143 19.298 - 19.397: 99.8945% ( 1) 00:07:07.143 19.692 - 19.791: 99.9001% ( 1) 00:07:07.143 19.988 - 20.086: 99.9056% ( 1) 00:07:07.143 20.283 - 20.382: 99.9223% ( 3) 00:07:07.143 20.480 - 20.578: 99.9278% ( 1) 00:07:07.143 20.874 - 20.972: 99.9334% ( 1) 00:07:07.143 21.169 - 21.268: 99.9389% ( 1) 00:07:07.143 22.449 - 22.548: 99.9445% ( 1) 00:07:07.143 22.548 - 22.646: 99.9500% ( 1) 00:07:07.143 22.843 - 22.942: 99.9556% ( 1) 00:07:07.143 23.040 - 23.138: 99.9611% ( 1) 00:07:07.143 23.335 - 23.434: 99.9667% ( 1) 00:07:07.143 25.403 - 25.600: 99.9722% ( 1) 00:07:07.143 28.357 - 28.554: 99.9778% ( 1) 00:07:07.143 35.249 - 35.446: 99.9833% ( 1) 00:07:07.143 42.929 - 43.126: 99.9889% ( 1) 00:07:07.143 58.683 - 59.077: 99.9944% ( 1) 00:07:07.143 193.772 - 194.560: 100.0000% ( 1) 00:07:07.143 00:07:07.143 00:07:07.143 real 0m1.211s 00:07:07.143 user 0m1.068s 00:07:07.143 sys 0m0.097s 00:07:07.143 01:20:02 nvme.nvme_overhead -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:07.143 01:20:02 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:07:07.143 ************************************ 00:07:07.143 END TEST nvme_overhead 00:07:07.143 ************************************ 00:07:07.143 01:20:02 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:07:07.143 01:20:02 nvme -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:07:07.143 01:20:02 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:07.143 01:20:02 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:07.143 ************************************ 00:07:07.143 START TEST nvme_arbitration 00:07:07.143 ************************************ 00:07:07.143 01:20:02 nvme.nvme_arbitration -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:07:10.438 Initializing NVMe Controllers 00:07:10.438 Attached to 0000:00:10.0 00:07:10.438 Attached to 0000:00:11.0 00:07:10.438 Attached to 0000:00:13.0 00:07:10.438 Attached to 0000:00:12.0 00:07:10.438 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:07:10.438 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:07:10.439 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:07:10.439 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:07:10.439 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:07:10.439 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:07:10.439 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:07:10.439 
/home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:07:10.439 Initialization complete. Launching workers. 00:07:10.439 Starting thread on core 2 with urgent priority queue 00:07:10.439 Starting thread on core 3 with urgent priority queue 00:07:10.439 Starting thread on core 0 with urgent priority queue 00:07:10.439 Starting thread on core 1 with urgent priority queue 00:07:10.439 QEMU NVMe Ctrl (12340 ) core 0: 938.67 IO/s 106.53 secs/100000 ios 00:07:10.439 QEMU NVMe Ctrl (12342 ) core 0: 938.67 IO/s 106.53 secs/100000 ios 00:07:10.439 QEMU NVMe Ctrl (12341 ) core 1: 938.67 IO/s 106.53 secs/100000 ios 00:07:10.439 QEMU NVMe Ctrl (12342 ) core 1: 938.67 IO/s 106.53 secs/100000 ios 00:07:10.439 QEMU NVMe Ctrl (12343 ) core 2: 938.67 IO/s 106.53 secs/100000 ios 00:07:10.439 QEMU NVMe Ctrl (12342 ) core 3: 981.33 IO/s 101.90 secs/100000 ios 00:07:10.439 ======================================================== 00:07:10.439 00:07:10.439 00:07:10.439 real 0m3.277s 00:07:10.439 user 0m9.181s 00:07:10.439 sys 0m0.106s 00:07:10.439 01:20:06 nvme.nvme_arbitration -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:10.439 01:20:06 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:07:10.439 ************************************ 00:07:10.439 END TEST nvme_arbitration 00:07:10.439 ************************************ 00:07:10.439 01:20:06 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:07:10.439 01:20:06 nvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:10.439 01:20:06 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:10.439 01:20:06 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:10.439 ************************************ 00:07:10.439 START TEST nvme_single_aen 00:07:10.439 ************************************ 00:07:10.439 01:20:06 nvme.nvme_single_aen -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:07:10.700 Asynchronous Event Request test 00:07:10.700 Attached to 0000:00:10.0 00:07:10.700 Attached to 0000:00:11.0 00:07:10.700 Attached to 0000:00:13.0 00:07:10.700 Attached to 0000:00:12.0 00:07:10.700 Reset controller to setup AER completions for this process 00:07:10.700 Registering asynchronous event callbacks... 
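The nvme_single_aen test running here verifies temperature-threshold AERs: it reads each controller's threshold (343 Kelvin), drops it below the reported composite temperature (323 Kelvin) so the controller fires an Asynchronous Event, waits for aer_cb to reset the threshold, and cleans up. The aer binary drives this through admin Get/Set Features on feature ID 0x04 (Temperature Threshold); on a kernel-managed controller the same poke could be sketched with nvme-cli, where the device path and values are illustrative, not taken from this run:

    # hypothetical nvme-cli equivalent of the aer test's internal Set Features calls
    nvme get-feature /dev/nvme0 -f 0x04           # read the current temperature threshold
    nvme set-feature /dev/nvme0 -f 0x04 -v 300    # drop it below the live temperature -> AER fires
    nvme set-feature /dev/nvme0 -f 0x04 -v 343    # restore the original threshold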
00:07:10.700 Getting orig temperature thresholds of all controllers 00:07:10.700 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:10.700 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:10.700 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:10.700 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:10.700 Setting all controllers temperature threshold low to trigger AER 00:07:10.700 Waiting for all controllers temperature threshold to be set lower 00:07:10.700 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:10.700 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:07:10.700 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:10.700 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:07:10.700 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:10.700 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:07:10.700 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:10.700 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:07:10.700 Waiting for all controllers to trigger AER and reset threshold 00:07:10.700 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:10.700 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:10.700 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:10.700 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:10.700 Cleaning up... 00:07:10.700 00:07:10.700 real 0m0.205s 00:07:10.700 user 0m0.057s 00:07:10.700 sys 0m0.101s 00:07:10.700 01:20:06 nvme.nvme_single_aen -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:10.700 01:20:06 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:07:10.700 ************************************ 00:07:10.700 END TEST nvme_single_aen 00:07:10.700 ************************************ 00:07:10.700 01:20:06 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:07:10.700 01:20:06 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:10.700 01:20:06 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:10.700 01:20:06 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:10.700 ************************************ 00:07:10.700 START TEST nvme_doorbell_aers 00:07:10.700 ************************************ 00:07:10.700 01:20:06 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1125 -- # nvme_doorbell_aers 00:07:10.700 01:20:06 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:07:10.700 01:20:06 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:07:10.700 01:20:06 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:07:10.700 01:20:06 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:07:10.700 01:20:06 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1496 -- # bdfs=() 00:07:10.700 01:20:06 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1496 -- # local bdfs 00:07:10.700 01:20:06 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:10.700 01:20:06 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:10.700 01:20:06 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 
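The get_nvme_bdfs helper traced here builds the bdfs array by asking gen_nvme.sh for a bdev configuration and pulling each PCI address (traddr) out with jq. Stitched together from the xtrace lines, it amounts to roughly the following sketch; the real function lives in autotest_common.sh and may carry extra guards:

    # reconstructed from the trace: enumerate NVMe PCI addresses
    get_nvme_bdfs() {
        local bdfs
        bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
        ((${#bdfs[@]} == 0)) && return 1          # no NVMe devices found
        printf '%s\n' "${bdfs[@]}"                # here: 0000:00:10.0 through 0000:00:13.0
    }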
00:07:10.700 01:20:06 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:07:10.700 01:20:06 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:07:10.700 01:20:06 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:07:10.700 01:20:06 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:07:10.960 [2024-09-28 01:20:06.748875] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63570) is not found. Dropping the request. 00:07:20.950 Executing: test_write_invalid_db 00:07:20.950 Waiting for AER completion... 00:07:20.950 Failure: test_write_invalid_db 00:07:20.950 00:07:20.950 Executing: test_invalid_db_write_overflow_sq 00:07:20.950 Waiting for AER completion... 00:07:20.950 Failure: test_invalid_db_write_overflow_sq 00:07:20.950 00:07:20.950 Executing: test_invalid_db_write_overflow_cq 00:07:20.950 Waiting for AER completion... 00:07:20.950 Failure: test_invalid_db_write_overflow_cq 00:07:20.950 00:07:20.950 01:20:16 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:07:20.950 01:20:16 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:07:20.950 [2024-09-28 01:20:16.750596] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63570) is not found. Dropping the request. 00:07:30.952 Executing: test_write_invalid_db 00:07:30.952 Waiting for AER completion... 00:07:30.952 Failure: test_write_invalid_db 00:07:30.952 00:07:30.952 Executing: test_invalid_db_write_overflow_sq 00:07:30.952 Waiting for AER completion... 00:07:30.952 Failure: test_invalid_db_write_overflow_sq 00:07:30.952 00:07:30.952 Executing: test_invalid_db_write_overflow_cq 00:07:30.952 Waiting for AER completion... 00:07:30.952 Failure: test_invalid_db_write_overflow_cq 00:07:30.952 00:07:30.952 01:20:26 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:07:30.952 01:20:26 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:07:30.952 [2024-09-28 01:20:26.784114] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63570) is not found. Dropping the request. 00:07:40.924 Executing: test_write_invalid_db 00:07:40.924 Waiting for AER completion... 00:07:40.924 Failure: test_write_invalid_db 00:07:40.924 00:07:40.924 Executing: test_invalid_db_write_overflow_sq 00:07:40.924 Waiting for AER completion... 00:07:40.924 Failure: test_invalid_db_write_overflow_sq 00:07:40.924 00:07:40.924 Executing: test_invalid_db_write_overflow_cq 00:07:40.924 Waiting for AER completion... 
00:07:40.924 Failure: test_invalid_db_write_overflow_cq 00:07:40.924 00:07:40.924 01:20:36 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:07:40.924 01:20:36 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:07:40.924 [2024-09-28 01:20:36.837450] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63570) is not found. Dropping the request. 00:07:50.890 Executing: test_write_invalid_db 00:07:50.890 Waiting for AER completion... 00:07:50.890 Failure: test_write_invalid_db 00:07:50.890 00:07:50.890 Executing: test_invalid_db_write_overflow_sq 00:07:50.890 Waiting for AER completion... 00:07:50.890 Failure: test_invalid_db_write_overflow_sq 00:07:50.890 00:07:50.890 Executing: test_invalid_db_write_overflow_cq 00:07:50.890 Waiting for AER completion... 00:07:50.890 Failure: test_invalid_db_write_overflow_cq 00:07:50.890 00:07:50.890 00:07:50.890 real 0m40.193s 00:07:50.890 user 0m34.193s 00:07:50.890 sys 0m5.643s 00:07:50.890 01:20:46 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:50.890 01:20:46 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:07:50.890 ************************************ 00:07:50.890 END TEST nvme_doorbell_aers 00:07:50.890 ************************************ 00:07:50.890 01:20:46 nvme -- nvme/nvme.sh@97 -- # uname 00:07:50.890 01:20:46 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:07:50.890 01:20:46 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:07:50.890 01:20:46 nvme -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:07:50.890 01:20:46 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:50.890 01:20:46 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:50.890 ************************************ 00:07:50.890 START TEST nvme_multi_aen 00:07:50.890 ************************************ 00:07:50.890 01:20:46 nvme.nvme_multi_aen -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:07:51.149 [2024-09-28 01:20:46.875009] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63570) is not found. Dropping the request. 00:07:51.149 [2024-09-28 01:20:46.875065] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63570) is not found. Dropping the request. 00:07:51.149 [2024-09-28 01:20:46.875075] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63570) is not found. Dropping the request. 00:07:51.149 [2024-09-28 01:20:46.876584] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63570) is not found. Dropping the request. 00:07:51.149 [2024-09-28 01:20:46.876623] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63570) is not found. Dropping the request. 00:07:51.149 [2024-09-28 01:20:46.876632] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63570) is not found. Dropping the request. 00:07:51.149 [2024-09-28 01:20:46.877666] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63570) is not found. 
Dropping the request. 00:07:51.149 [2024-09-28 01:20:46.877693] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63570) is not found. Dropping the request. 00:07:51.149 [2024-09-28 01:20:46.877701] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63570) is not found. Dropping the request. 00:07:51.149 [2024-09-28 01:20:46.878713] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63570) is not found. Dropping the request. 00:07:51.149 [2024-09-28 01:20:46.878738] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63570) is not found. Dropping the request. 00:07:51.149 [2024-09-28 01:20:46.878745] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63570) is not found. Dropping the request. 00:07:51.149 Child process pid: 64096 00:07:51.149 [Child] Asynchronous Event Request test 00:07:51.149 [Child] Attached to 0000:00:10.0 00:07:51.149 [Child] Attached to 0000:00:11.0 00:07:51.149 [Child] Attached to 0000:00:13.0 00:07:51.149 [Child] Attached to 0000:00:12.0 00:07:51.149 [Child] Registering asynchronous event callbacks... 00:07:51.149 [Child] Getting orig temperature thresholds of all controllers 00:07:51.149 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:51.149 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:51.149 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:51.149 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:51.149 [Child] Waiting for all controllers to trigger AER and reset threshold 00:07:51.149 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:51.149 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:51.149 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:51.149 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:51.149 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:51.149 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:51.149 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:51.149 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:51.149 [Child] Cleaning up... 00:07:51.407 Asynchronous Event Request test 00:07:51.407 Attached to 0000:00:10.0 00:07:51.407 Attached to 0000:00:11.0 00:07:51.407 Attached to 0000:00:13.0 00:07:51.407 Attached to 0000:00:12.0 00:07:51.407 Reset controller to setup AER completions for this process 00:07:51.407 Registering asynchronous event callbacks... 
00:07:51.407 Getting orig temperature thresholds of all controllers 00:07:51.407 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:51.407 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:51.407 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:51.407 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:51.407 Setting all controllers temperature threshold low to trigger AER 00:07:51.407 Waiting for all controllers temperature threshold to be set lower 00:07:51.407 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:51.407 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:07:51.407 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:51.407 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:07:51.407 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:51.407 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:07:51.407 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:51.407 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:07:51.407 Waiting for all controllers to trigger AER and reset threshold 00:07:51.407 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:51.407 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:51.407 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:51.407 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:51.407 Cleaning up... 00:07:51.407 00:07:51.407 real 0m0.406s 00:07:51.407 user 0m0.125s 00:07:51.407 sys 0m0.186s 00:07:51.407 01:20:47 nvme.nvme_multi_aen -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:51.407 ************************************ 00:07:51.407 END TEST nvme_multi_aen 00:07:51.407 ************************************ 00:07:51.407 01:20:47 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:07:51.407 01:20:47 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:07:51.407 01:20:47 nvme -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:07:51.407 01:20:47 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:51.407 01:20:47 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:51.407 ************************************ 00:07:51.407 START TEST nvme_startup 00:07:51.407 ************************************ 00:07:51.407 01:20:47 nvme.nvme_startup -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:07:51.407 Initializing NVMe Controllers 00:07:51.407 Attached to 0000:00:10.0 00:07:51.407 Attached to 0000:00:11.0 00:07:51.407 Attached to 0000:00:13.0 00:07:51.407 Attached to 0000:00:12.0 00:07:51.407 Initialization complete. 00:07:51.407 Time used:135472.031 (us). 
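Every START/END banner pair in this log comes from the run_test wrapper, which names the test, runs the given command, and accounts for the real/user/sys times printed after each test. A minimal model of the pattern the banners imply; the actual helper in autotest_common.sh also validates arguments and manages xtrace state:

    # simplified model of run_test, inferred from the banners in this log
    run_test() {
        local name=$1; shift
        echo "START TEST $name"
        time "$@"                # e.g. run_test nvme_startup .../startup/startup -t 1000000
        echo "END TEST $name"
    }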
00:07:51.407 00:07:51.407 real 0m0.195s 00:07:51.407 user 0m0.071s 00:07:51.407 sys 0m0.085s 00:07:51.407 01:20:47 nvme.nvme_startup -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:51.407 01:20:47 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:07:51.407 ************************************ 00:07:51.407 END TEST nvme_startup 00:07:51.407 ************************************ 00:07:51.666 01:20:47 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:07:51.666 01:20:47 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:51.666 01:20:47 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:51.666 01:20:47 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:51.666 ************************************ 00:07:51.666 START TEST nvme_multi_secondary 00:07:51.666 ************************************ 00:07:51.666 01:20:47 nvme.nvme_multi_secondary -- common/autotest_common.sh@1125 -- # nvme_multi_secondary 00:07:51.666 01:20:47 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=64141 00:07:51.666 01:20:47 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=64142 00:07:51.666 01:20:47 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:07:51.666 01:20:47 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:07:51.666 01:20:47 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:07:54.958 Initializing NVMe Controllers 00:07:54.958 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:54.958 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:54.958 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:54.958 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:54.958 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:07:54.958 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:07:54.958 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:07:54.958 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:07:54.958 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:07:54.958 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:07:54.958 Initialization complete. Launching workers. 
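All three spdk_nvme_perf instances in this test run the same read workload and differ only in core mask and duration: -q 16 is the queue depth, -w read the access pattern, -o 4096 the I/O size in bytes, -t the run time in seconds, -c the core mask (0x1, 0x2, 0x4, so the primary and the two secondaries never share a core), and -i 0 a common shared-memory ID so every process attaches to the same controllers. A standalone invocation mirroring one of the runs above:

    # one of the perf processes, as launched by nvme.sh
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf \
        -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2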
00:07:54.958 ======================================================== 00:07:54.958 Latency(us) 00:07:54.958 Device Information : IOPS MiB/s Average min max 00:07:54.958 PCIE (0000:00:10.0) NSID 1 from core 2: 3352.01 13.09 4771.33 795.67 11859.94 00:07:54.958 PCIE (0000:00:11.0) NSID 1 from core 2: 3352.01 13.09 4773.14 821.20 12793.30 00:07:54.958 PCIE (0000:00:13.0) NSID 1 from core 2: 3352.01 13.09 4773.32 826.32 12850.88 00:07:54.958 PCIE (0000:00:12.0) NSID 1 from core 2: 3352.01 13.09 4773.28 825.56 12561.47 00:07:54.958 PCIE (0000:00:12.0) NSID 2 from core 2: 3352.01 13.09 4773.25 831.16 12520.59 00:07:54.958 PCIE (0000:00:12.0) NSID 3 from core 2: 3352.01 13.09 4773.23 830.97 12224.53 00:07:54.958 ======================================================== 00:07:54.958 Total : 20112.04 78.56 4772.93 795.67 12850.88 00:07:54.958 00:07:54.958 01:20:50 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 64141 00:07:54.958 Initializing NVMe Controllers 00:07:54.958 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:54.958 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:54.958 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:54.958 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:54.958 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:07:54.958 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:07:54.958 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:07:54.958 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:07:54.958 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:07:54.958 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:07:54.958 Initialization complete. Launching workers. 00:07:54.958 ======================================================== 00:07:54.958 Latency(us) 00:07:54.958 Device Information : IOPS MiB/s Average min max 00:07:54.958 PCIE (0000:00:10.0) NSID 1 from core 1: 8085.09 31.58 1977.65 713.06 13900.97 00:07:54.958 PCIE (0000:00:11.0) NSID 1 from core 1: 8085.09 31.58 1978.55 725.44 13591.09 00:07:54.958 PCIE (0000:00:13.0) NSID 1 from core 1: 8085.09 31.58 1978.54 721.45 13139.96 00:07:54.958 PCIE (0000:00:12.0) NSID 1 from core 1: 8085.09 31.58 1978.59 732.13 13288.54 00:07:54.958 PCIE (0000:00:12.0) NSID 2 from core 1: 8085.09 31.58 1978.73 724.81 13987.34 00:07:54.958 PCIE (0000:00:12.0) NSID 3 from core 1: 8085.09 31.58 1978.75 733.70 14214.08 00:07:54.958 ======================================================== 00:07:54.958 Total : 48510.56 189.49 1978.47 713.06 14214.08 00:07:54.958 00:07:56.857 Initializing NVMe Controllers 00:07:56.857 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:56.857 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:56.857 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:56.857 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:56.857 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:07:56.857 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:07:56.857 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:07:56.857 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:07:56.857 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:07:56.857 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:07:56.857 Initialization complete. Launching workers. 
00:07:56.857 ======================================================== 00:07:56.857 Latency(us) 00:07:56.857 Device Information : IOPS MiB/s Average min max 00:07:56.857 PCIE (0000:00:10.0) NSID 1 from core 0: 11473.63 44.82 1393.31 668.34 7104.29 00:07:56.857 PCIE (0000:00:11.0) NSID 1 from core 0: 11473.63 44.82 1394.11 684.25 6978.10 00:07:56.857 PCIE (0000:00:13.0) NSID 1 from core 0: 11473.63 44.82 1394.08 626.96 6995.80 00:07:56.857 PCIE (0000:00:12.0) NSID 1 from core 0: 11473.63 44.82 1394.06 612.73 6007.54 00:07:56.857 PCIE (0000:00:12.0) NSID 2 from core 0: 11473.63 44.82 1394.03 598.20 6359.64 00:07:56.857 PCIE (0000:00:12.0) NSID 3 from core 0: 11473.63 44.82 1394.01 574.55 7110.12 00:07:56.857 ======================================================== 00:07:56.857 Total : 68841.77 268.91 1393.94 574.55 7110.12 00:07:56.857 00:07:56.857 01:20:52 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 64142 00:07:56.857 01:20:52 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=64215 00:07:56.857 01:20:52 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:07:56.857 01:20:52 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=64216 00:07:56.857 01:20:52 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:07:56.857 01:20:52 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:08:00.135 Initializing NVMe Controllers 00:08:00.135 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:00.135 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:00.135 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:00.135 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:00.135 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:00.135 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:00.135 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:00.135 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:00.135 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:00.135 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:00.135 Initialization complete. Launching workers. 
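The shape of nvme_multi_secondary is visible in the pid assignments (64141/64142 in the first round, 64215/64216 here) and the wait calls: two secondaries go to the background, the primary runs in the foreground, then both secondaries are reaped. Reconstructed, with $perf standing for the spdk_nvme_perf path and the durations/masks varying per round:

    # primary/secondary layout reconstructed from the pid/wait lines
    "$perf" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 & pid0=$!   # secondary
    "$perf" -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 & pid1=$!   # secondary
    "$perf" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1             # primary, foreground
    wait "$pid0"
    wait "$pid1"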
00:08:00.135 ======================================================== 00:08:00.135 Latency(us) 00:08:00.135 Device Information : IOPS MiB/s Average min max 00:08:00.135 PCIE (0000:00:10.0) NSID 1 from core 0: 7850.27 30.67 2036.77 703.34 6032.18 00:08:00.135 PCIE (0000:00:11.0) NSID 1 from core 0: 7850.27 30.67 2037.81 724.15 5813.82 00:08:00.135 PCIE (0000:00:13.0) NSID 1 from core 0: 7850.27 30.67 2037.84 724.19 5370.33 00:08:00.135 PCIE (0000:00:12.0) NSID 1 from core 0: 7850.27 30.67 2037.87 727.43 5322.91 00:08:00.135 PCIE (0000:00:12.0) NSID 2 from core 0: 7850.27 30.67 2038.10 731.98 5867.71 00:08:00.135 PCIE (0000:00:12.0) NSID 3 from core 0: 7850.27 30.67 2038.18 742.49 6146.93 00:08:00.135 ======================================================== 00:08:00.135 Total : 47101.61 183.99 2037.76 703.34 6146.93 00:08:00.135 00:08:00.391 Initializing NVMe Controllers 00:08:00.391 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:00.391 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:00.391 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:00.391 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:00.391 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:08:00.391 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:08:00.391 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:08:00.391 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:08:00.391 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:08:00.391 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:08:00.391 Initialization complete. Launching workers. 00:08:00.391 ======================================================== 00:08:00.391 Latency(us) 00:08:00.391 Device Information : IOPS MiB/s Average min max 00:08:00.391 PCIE (0000:00:10.0) NSID 1 from core 1: 7869.58 30.74 2031.82 709.76 11876.02 00:08:00.391 PCIE (0000:00:11.0) NSID 1 from core 1: 7869.58 30.74 2032.77 722.62 11475.39 00:08:00.391 PCIE (0000:00:13.0) NSID 1 from core 1: 7869.58 30.74 2032.77 732.19 11152.74 00:08:00.391 PCIE (0000:00:12.0) NSID 1 from core 1: 7869.58 30.74 2032.77 732.94 10891.68 00:08:00.391 PCIE (0000:00:12.0) NSID 2 from core 1: 7869.58 30.74 2032.77 710.22 11149.09 00:08:00.391 PCIE (0000:00:12.0) NSID 3 from core 1: 7869.58 30.74 2032.77 715.53 11382.60 00:08:00.391 ======================================================== 00:08:00.391 Total : 47217.46 184.44 2032.61 709.76 11876.02 00:08:00.391 00:08:02.291 Initializing NVMe Controllers 00:08:02.291 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:02.291 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:02.291 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:02.291 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:02.291 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:08:02.291 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:08:02.291 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:08:02.291 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:08:02.291 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:08:02.291 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:08:02.291 Initialization complete. Launching workers. 
00:08:02.291 ======================================================== 00:08:02.291 Latency(us) 00:08:02.291 Device Information : IOPS MiB/s Average min max 00:08:02.291 PCIE (0000:00:10.0) NSID 1 from core 2: 4744.95 18.53 3369.83 715.92 12465.76 00:08:02.291 PCIE (0000:00:11.0) NSID 1 from core 2: 4744.95 18.53 3371.43 726.28 12472.49 00:08:02.291 PCIE (0000:00:13.0) NSID 1 from core 2: 4744.95 18.53 3371.55 726.70 12188.60 00:08:02.291 PCIE (0000:00:12.0) NSID 1 from core 2: 4744.95 18.53 3371.49 629.46 13011.19 00:08:02.291 PCIE (0000:00:12.0) NSID 2 from core 2: 4744.95 18.53 3371.27 614.55 13108.71 00:08:02.291 PCIE (0000:00:12.0) NSID 3 from core 2: 4744.95 18.53 3371.05 588.78 12789.59 00:08:02.291 ======================================================== 00:08:02.291 Total : 28469.71 111.21 3371.10 588.78 13108.71 00:08:02.291 00:08:02.291 01:20:58 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 64215 00:08:02.291 01:20:58 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 64216 00:08:02.291 00:08:02.291 real 0m10.733s 00:08:02.291 user 0m18.335s 00:08:02.291 sys 0m0.613s 00:08:02.291 01:20:58 nvme.nvme_multi_secondary -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:02.291 01:20:58 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:08:02.291 ************************************ 00:08:02.291 END TEST nvme_multi_secondary 00:08:02.291 ************************************ 00:08:02.291 01:20:58 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:08:02.291 01:20:58 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:08:02.291 01:20:58 nvme -- common/autotest_common.sh@1089 -- # [[ -e /proc/63179 ]] 00:08:02.291 01:20:58 nvme -- common/autotest_common.sh@1090 -- # kill 63179 00:08:02.291 01:20:58 nvme -- common/autotest_common.sh@1091 -- # wait 63179 00:08:02.291 [2024-09-28 01:20:58.138138] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64095) is not found. Dropping the request. 00:08:02.291 [2024-09-28 01:20:58.138863] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64095) is not found. Dropping the request. 00:08:02.291 [2024-09-28 01:20:58.138929] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64095) is not found. Dropping the request. 00:08:02.291 [2024-09-28 01:20:58.138953] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64095) is not found. Dropping the request. 00:08:02.291 [2024-09-28 01:20:58.141799] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64095) is not found. Dropping the request. 00:08:02.291 [2024-09-28 01:20:58.141854] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64095) is not found. Dropping the request. 00:08:02.291 [2024-09-28 01:20:58.141873] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64095) is not found. Dropping the request. 00:08:02.291 [2024-09-28 01:20:58.141895] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64095) is not found. Dropping the request. 00:08:02.291 [2024-09-28 01:20:58.144696] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64095) is not found. Dropping the request. 
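The repeated 'The owning process (pid 64095) is not found. Dropping the request.' errors here and just below are expected teardown noise: kill_stub stops the long-lived stub process (pid 63179) that held the controllers open across tests, and admin requests still tagged with the already-exited AER test process (pid 64095) are dropped during that shutdown. The teardown traced above reduces to:

    # kill_stub teardown, as traced
    [[ -e /proc/63179 ]]       # only act if the stub is still alive
    kill 63179
    wait 63179                 # reap it so the next test starts from a clean state
    rm -f /var/run/spdk_stub0  # remove the stub's marker file (seen just below)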
00:08:02.291 [2024-09-28 01:20:58.144748] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64095) is not found. Dropping the request. 00:08:02.291 [2024-09-28 01:20:58.144768] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64095) is not found. Dropping the request. 00:08:02.291 [2024-09-28 01:20:58.144790] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64095) is not found. Dropping the request. 00:08:02.291 [2024-09-28 01:20:58.147497] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64095) is not found. Dropping the request. 00:08:02.291 [2024-09-28 01:20:58.147559] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64095) is not found. Dropping the request. 00:08:02.291 [2024-09-28 01:20:58.147578] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64095) is not found. Dropping the request. 00:08:02.291 [2024-09-28 01:20:58.147600] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64095) is not found. Dropping the request. 00:08:02.550 01:20:58 nvme -- common/autotest_common.sh@1093 -- # rm -f /var/run/spdk_stub0 00:08:02.550 01:20:58 nvme -- common/autotest_common.sh@1097 -- # echo 2 00:08:02.551 01:20:58 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:08:02.551 01:20:58 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:02.551 01:20:58 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:02.551 01:20:58 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:02.551 ************************************ 00:08:02.551 START TEST bdev_nvme_reset_stuck_adm_cmd 00:08:02.551 ************************************ 00:08:02.551 01:20:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:08:02.551 * Looking for test storage... 
00:08:02.551 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:02.551 01:20:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:02.551 01:20:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1681 -- # lcov --version 00:08:02.551 01:20:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:02.551 01:20:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:02.551 01:20:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:02.551 01:20:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:02.551 01:20:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:02.551 01:20:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:08:02.551 01:20:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:08:02.551 01:20:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:08:02.551 01:20:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:08:02.551 01:20:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:08:02.551 01:20:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:08:02.551 01:20:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:08:02.551 01:20:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:02.551 01:20:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:08:02.551 01:20:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:08:02.551 01:20:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:02.551 01:20:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:02.551 01:20:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:08:02.810 01:20:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:08:02.810 01:20:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:02.810 01:20:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:08:02.810 01:20:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:08:02.810 01:20:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:08:02.810 01:20:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:08:02.810 01:20:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:02.810 01:20:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:08:02.810 01:20:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:08:02.810 01:20:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:02.810 01:20:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:02.810 01:20:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:08:02.810 01:20:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:02.810 01:20:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:02.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:02.811 --rc genhtml_branch_coverage=1 00:08:02.811 --rc genhtml_function_coverage=1 00:08:02.811 --rc genhtml_legend=1 00:08:02.811 --rc geninfo_all_blocks=1 00:08:02.811 --rc geninfo_unexecuted_blocks=1 00:08:02.811 00:08:02.811 ' 00:08:02.811 01:20:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:02.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:02.811 --rc genhtml_branch_coverage=1 00:08:02.811 --rc genhtml_function_coverage=1 00:08:02.811 --rc genhtml_legend=1 00:08:02.811 --rc geninfo_all_blocks=1 00:08:02.811 --rc geninfo_unexecuted_blocks=1 00:08:02.811 00:08:02.811 ' 00:08:02.811 01:20:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:02.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:02.811 --rc genhtml_branch_coverage=1 00:08:02.811 --rc genhtml_function_coverage=1 00:08:02.811 --rc genhtml_legend=1 00:08:02.811 --rc geninfo_all_blocks=1 00:08:02.811 --rc geninfo_unexecuted_blocks=1 00:08:02.811 00:08:02.811 ' 00:08:02.811 01:20:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:02.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:02.811 --rc genhtml_branch_coverage=1 00:08:02.811 --rc genhtml_function_coverage=1 00:08:02.811 --rc genhtml_legend=1 00:08:02.811 --rc geninfo_all_blocks=1 00:08:02.811 --rc geninfo_unexecuted_blocks=1 00:08:02.811 00:08:02.811 ' 00:08:02.811 01:20:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:08:02.811 01:20:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:08:02.811 01:20:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:08:02.811 
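The variables set here define the stuck-admin-command scenario: on controller nvme0, admin opcode 10 (0x0A, Get Features) will be held for up to err_injection_timeout (15,000,000 us) and then completed with status sct=0 / sc=1 (generic command status, Invalid Opcode), and the controller reset issued while that command is pending must finish within test_timeout (5 s). The injection is armed further down via this RPC, shown verbatim later in the trace:

    # one-shot admin-command error injection used by this test
    scripts/rpc.py bdev_nvme_add_error_injection -n nvme0 \
        --cmd-type admin --opc 10 --timeout-in-us 15000000 \
        --err-count 1 --sct 0 --sc 1 --do_not_submit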
01:20:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:08:02.811 01:20:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:08:02.811 01:20:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:08:02.811 01:20:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1507 -- # bdfs=() 00:08:02.811 01:20:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1507 -- # local bdfs 00:08:02.811 01:20:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:08:02.811 01:20:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:08:02.811 01:20:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1496 -- # bdfs=() 00:08:02.811 01:20:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1496 -- # local bdfs 00:08:02.811 01:20:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:02.811 01:20:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:02.811 01:20:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:08:02.811 01:20:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:08:02.811 01:20:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:02.811 01:20:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # echo 0000:00:10.0 00:08:02.811 01:20:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:08:02.811 01:20:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:08:02.811 01:20:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=64379 00:08:02.811 01:20:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:02.811 01:20:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:08:02.811 01:20:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 64379 00:08:02.811 01:20:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@831 -- # '[' -z 64379 ']' 00:08:02.811 01:20:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:02.811 01:20:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:02.811 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:02.811 01:20:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
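Before the injection can be configured, the script launches spdk_tgt with -m 0xF (cores 0-3) and blocks in waitforlisten until the target's RPC socket answers on /var/tmp/spdk.sock. A minimal sketch of that wait, assuming the default socket path; the real helper also checks that the PID is still alive and gives up after a retry limit:

    # simplified waitforlisten: poll the RPC socket until the target responds
    while ! scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        sleep 0.1
    done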
00:08:02.811 01:20:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:02.811 01:20:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:02.811 [2024-09-28 01:20:58.632830] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:08:02.811 [2024-09-28 01:20:58.632952] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64379 ] 00:08:03.069 [2024-09-28 01:20:58.793637] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:03.069 [2024-09-28 01:20:58.981812] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:03.069 [2024-09-28 01:20:58.982180] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:08:03.069 [2024-09-28 01:20:58.982499] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:08:03.069 [2024-09-28 01:20:58.982591] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.634 01:20:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:03.634 01:20:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # return 0 00:08:03.634 01:20:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:08:03.634 01:20:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.634 01:20:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:03.891 nvme0n1 00:08:03.891 01:20:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.891 01:20:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:08:03.891 01:20:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_pblWD.txt 00:08:03.892 01:20:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:08:03.892 01:20:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.892 01:20:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:03.892 true 00:08:03.892 01:20:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.892 01:20:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:08:03.892 01:20:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1727486459 00:08:03.892 01:20:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=64402 00:08:03.892 01:20:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:03.892 01:20:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:08:03.892 01:20:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c 
CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:08:05.846 01:21:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:08:05.846 01:21:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.846 01:21:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:05.846 [2024-09-28 01:21:01.657333] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:08:05.846 [2024-09-28 01:21:01.657883] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:08:05.846 [2024-09-28 01:21:01.657930] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:08:05.846 [2024-09-28 01:21:01.657945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:05.846 [2024-09-28 01:21:01.659632] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:08:05.846 01:21:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.846 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 64402 00:08:05.846 01:21:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 64402 00:08:05.846 01:21:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 64402 00:08:05.846 01:21:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:08:05.846 01:21:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:08:05.846 01:21:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:08:05.846 01:21:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.846 01:21:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:05.846 01:21:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.846 01:21:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:08:05.846 01:21:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_pblWD.txt 00:08:05.846 01:21:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:08:05.846 01:21:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:08:05.846 01:21:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:08:05.846 01:21:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:08:05.846 01:21:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:08:05.846 01:21:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:08:05.846 01:21:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:08:05.846 01:21:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:08:05.846 01:21:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:08:05.846 01:21:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:08:05.846 01:21:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:08:05.846 01:21:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:08:05.846 01:21:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:08:05.846 01:21:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:08:05.846 01:21:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:08:05.846 01:21:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:08:05.846 01:21:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:08:05.846 01:21:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:08:05.846 01:21:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:08:05.846 01:21:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_pblWD.txt 00:08:05.846 01:21:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 64379 00:08:05.846 01:21:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@950 -- # '[' -z 64379 ']' 00:08:05.846 01:21:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # kill -0 64379 00:08:05.846 01:21:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@955 -- # uname 00:08:05.846 01:21:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:05.846 01:21:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64379 00:08:05.846 killing process with pid 64379 00:08:05.846 01:21:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:05.846 01:21:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:05.846 01:21:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64379' 00:08:05.846 01:21:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@969 -- # kill 64379 00:08:05.846 01:21:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@974 -- # wait 64379 00:08:07.221 01:21:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:08:07.221 01:21:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:08:07.221 00:08:07.221 real 0m4.717s 00:08:07.221 user 0m16.265s 00:08:07.221 sys 0m0.508s 00:08:07.221 01:21:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1126 -- # xtrace_disable 
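The status decode traced above is the heart of this test: bdev_nvme_send_cmd writes the raw 16-byte completion to the temp file as base64, and base64_decode_bits slices the NVMe status fields out of it (bit 0 of the status word is the phase tag, bits 1-8 the status code, bits 9-11 the status code type). A condensed sketch of the helper as it appears in the trace follows; the step that folds the decoded bytes into a single status word is not visible in the log, so the OR-join below is an assumption:

base64_decode_bits() {
    # $1 = base64-encoded completion blob, $2 = bit offset, $3 = mask
    local bin_array status
    bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"'))
    # AAAAAAAAAAAAAAAAAAACAA== decodes to 16 bytes, all zero except one 0x02
    status=$(( $(IFS='|'; printf '%s' "${bin_array[*]}") ))  # assumed: OR the bytes together -> 2
    printf '0x%x' $(( (status >> $2) & $3 ))
}
base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255   # -> 0x1, the injected status code (SC)
base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3     # -> 0x0, generic status code type (SCT)

Both values match what bdev_nvme_add_error_injection planted earlier (--sct 0 --sc 1), which is exactly what the script asserts (sh@75) before the test can pass.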
00:08:07.221 ************************************ 00:08:07.221 END TEST bdev_nvme_reset_stuck_adm_cmd 00:08:07.221 ************************************ 00:08:07.221 01:21:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:07.221 01:21:03 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:08:07.221 01:21:03 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:08:07.221 01:21:03 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:07.221 01:21:03 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:07.221 01:21:03 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:07.221 ************************************ 00:08:07.221 START TEST nvme_fio 00:08:07.221 ************************************ 00:08:07.221 01:21:03 nvme.nvme_fio -- common/autotest_common.sh@1125 -- # nvme_fio_test 00:08:07.221 01:21:03 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:08:07.221 01:21:03 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:08:07.221 01:21:03 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:08:07.221 01:21:03 nvme.nvme_fio -- common/autotest_common.sh@1496 -- # bdfs=() 00:08:07.221 01:21:03 nvme.nvme_fio -- common/autotest_common.sh@1496 -- # local bdfs 00:08:07.221 01:21:03 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:07.221 01:21:03 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:07.221 01:21:03 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:08:07.481 01:21:03 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:08:07.481 01:21:03 nvme.nvme_fio -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:07.481 01:21:03 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:08:07.481 01:21:03 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:08:07.481 01:21:03 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:07.481 01:21:03 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:07.481 01:21:03 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:07.481 01:21:03 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:07.481 01:21:03 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:07.740 01:21:03 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:07.740 01:21:03 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:07.740 01:21:03 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:07.740 01:21:03 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:08:07.740 01:21:03 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:07.740 01:21:03 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:08:07.740 01:21:03 nvme.nvme_fio -- 
common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:07.740 01:21:03 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:08:07.740 01:21:03 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:08:07.740 01:21:03 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:08:07.740 01:21:03 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:08:07.740 01:21:03 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:08:07.740 01:21:03 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:07.740 01:21:03 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:07.740 01:21:03 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:07.740 01:21:03 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:08:07.740 01:21:03 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:07.740 01:21:03 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:07.999 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:07.999 fio-3.35 00:08:07.999 Starting 1 thread 00:08:14.567 00:08:14.567 test: (groupid=0, jobs=1): err= 0: pid=64537: Sat Sep 28 01:21:09 2024 00:08:14.567 read: IOPS=22.7k, BW=88.5MiB/s (92.8MB/s)(177MiB/2001msec) 00:08:14.567 slat (nsec): min=3312, max=77206, avg=5056.42, stdev=2417.91 00:08:14.567 clat (usec): min=239, max=8272, avg=2820.14, stdev=962.08 00:08:14.567 lat (usec): min=243, max=8287, avg=2825.19, stdev=963.35 00:08:14.567 clat percentiles (usec): 00:08:14.567 | 1.00th=[ 1614], 5.00th=[ 2089], 10.00th=[ 2245], 20.00th=[ 2311], 00:08:14.567 | 30.00th=[ 2376], 40.00th=[ 2409], 50.00th=[ 2474], 60.00th=[ 2540], 00:08:14.567 | 70.00th=[ 2704], 80.00th=[ 2966], 90.00th=[ 4293], 95.00th=[ 5276], 00:08:14.567 | 99.00th=[ 6325], 99.50th=[ 6652], 99.90th=[ 7373], 99.95th=[ 7701], 00:08:14.567 | 99.99th=[ 8029] 00:08:14.567 bw ( KiB/s): min=81480, max=96152, per=96.56%, avg=87514.67, stdev=7674.46, samples=3 00:08:14.567 iops : min=20370, max=24038, avg=21878.67, stdev=1918.61, samples=3 00:08:14.567 write: IOPS=22.5k, BW=88.0MiB/s (92.3MB/s)(176MiB/2001msec); 0 zone resets 00:08:14.567 slat (nsec): min=3440, max=85493, avg=5216.77, stdev=2357.72 00:08:14.567 clat (usec): min=217, max=8129, avg=2822.69, stdev=957.60 00:08:14.567 lat (usec): min=221, max=8143, avg=2827.90, stdev=958.80 00:08:14.567 clat percentiles (usec): 00:08:14.567 | 1.00th=[ 1614], 5.00th=[ 2114], 10.00th=[ 2245], 20.00th=[ 2311], 00:08:14.567 | 30.00th=[ 2376], 40.00th=[ 2409], 50.00th=[ 2474], 60.00th=[ 2540], 00:08:14.567 | 70.00th=[ 2704], 80.00th=[ 2966], 90.00th=[ 4293], 95.00th=[ 5276], 00:08:14.567 | 99.00th=[ 6259], 99.50th=[ 6652], 99.90th=[ 7439], 99.95th=[ 7635], 00:08:14.567 | 99.99th=[ 7898] 00:08:14.567 bw ( KiB/s): min=82872, max=95856, per=97.28%, avg=87653.33, stdev=7136.19, samples=3 00:08:14.567 iops : min=20718, max=23964, avg=21913.33, stdev=1784.05, samples=3 00:08:14.567 lat (usec) : 250=0.01%, 500=0.01%, 750=0.02%, 1000=0.04% 00:08:14.567 lat (msec) : 2=3.37%, 4=85.07%, 10=11.48% 00:08:14.567 cpu : usr=99.10%, sys=0.05%, ctx=6, majf=0, 
minf=608 00:08:14.567 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:08:14.567 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:14.567 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:14.567 issued rwts: total=45337,45076,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:14.567 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:14.568 00:08:14.568 Run status group 0 (all jobs): 00:08:14.568 READ: bw=88.5MiB/s (92.8MB/s), 88.5MiB/s-88.5MiB/s (92.8MB/s-92.8MB/s), io=177MiB (186MB), run=2001-2001msec 00:08:14.568 WRITE: bw=88.0MiB/s (92.3MB/s), 88.0MiB/s-88.0MiB/s (92.3MB/s-92.3MB/s), io=176MiB (185MB), run=2001-2001msec 00:08:14.568 ----------------------------------------------------- 00:08:14.568 Suppressions used: 00:08:14.568 count bytes template 00:08:14.568 1 32 /usr/src/fio/parse.c 00:08:14.568 1 8 libtcmalloc_minimal.so 00:08:14.568 ----------------------------------------------------- 00:08:14.568 00:08:14.568 01:21:09 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:08:14.568 01:21:09 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:14.568 01:21:09 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:14.568 01:21:09 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:14.568 01:21:09 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:14.568 01:21:09 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:14.568 01:21:10 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:14.568 01:21:10 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:08:14.568 01:21:10 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:08:14.568 01:21:10 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:08:14.568 01:21:10 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:14.568 01:21:10 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:08:14.568 01:21:10 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:14.568 01:21:10 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:08:14.568 01:21:10 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:08:14.568 01:21:10 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:08:14.568 01:21:10 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:14.568 01:21:10 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:08:14.568 01:21:10 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:08:14.568 01:21:10 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:14.568 01:21:10 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:14.568 01:21:10 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:08:14.568 01:21:10 nvme.nvme_fio -- 
common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:14.568 01:21:10 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:08:14.568 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:14.568 fio-3.35 00:08:14.568 Starting 1 thread 00:08:19.852 00:08:19.852 test: (groupid=0, jobs=1): err= 0: pid=64593: Sat Sep 28 01:21:15 2024 00:08:19.852 read: IOPS=17.5k, BW=68.3MiB/s (71.6MB/s)(137MiB/2001msec) 00:08:19.852 slat (nsec): min=3349, max=79723, avg=5632.27, stdev=3017.34 00:08:19.852 clat (usec): min=342, max=9403, avg=3645.07, stdev=1350.40 00:08:19.852 lat (usec): min=347, max=9410, avg=3650.70, stdev=1351.59 00:08:19.852 clat percentiles (usec): 00:08:19.852 | 1.00th=[ 1991], 5.00th=[ 2245], 10.00th=[ 2376], 20.00th=[ 2507], 00:08:19.852 | 30.00th=[ 2671], 40.00th=[ 2868], 50.00th=[ 3097], 60.00th=[ 3523], 00:08:19.852 | 70.00th=[ 4293], 80.00th=[ 4948], 90.00th=[ 5669], 95.00th=[ 6259], 00:08:19.852 | 99.00th=[ 7373], 99.50th=[ 7898], 99.90th=[ 8717], 99.95th=[ 9110], 00:08:19.852 | 99.99th=[ 9372] 00:08:19.852 bw ( KiB/s): min=67368, max=72904, per=100.00%, avg=70333.33, stdev=2789.02, samples=3 00:08:19.852 iops : min=16842, max=18226, avg=17583.33, stdev=697.26, samples=3 00:08:19.852 write: IOPS=17.5k, BW=68.4MiB/s (71.7MB/s)(137MiB/2001msec); 0 zone resets 00:08:19.852 slat (nsec): min=3536, max=50089, avg=5779.22, stdev=2941.47 00:08:19.852 clat (usec): min=352, max=9360, avg=3643.96, stdev=1345.20 00:08:19.852 lat (usec): min=357, max=9365, avg=3649.74, stdev=1346.36 00:08:19.852 clat percentiles (usec): 00:08:19.852 | 1.00th=[ 2008], 5.00th=[ 2245], 10.00th=[ 2376], 20.00th=[ 2540], 00:08:19.852 | 30.00th=[ 2671], 40.00th=[ 2868], 50.00th=[ 3097], 60.00th=[ 3523], 00:08:19.852 | 70.00th=[ 4293], 80.00th=[ 4948], 90.00th=[ 5669], 95.00th=[ 6259], 00:08:19.852 | 99.00th=[ 7373], 99.50th=[ 7832], 99.90th=[ 8455], 99.95th=[ 8717], 00:08:19.852 | 99.99th=[ 9241] 00:08:19.852 bw ( KiB/s): min=67312, max=72896, per=100.00%, avg=70317.33, stdev=2816.34, samples=3 00:08:19.852 iops : min=16828, max=18224, avg=17579.33, stdev=704.09, samples=3 00:08:19.852 lat (usec) : 500=0.02%, 750=0.01%, 1000=0.01% 00:08:19.852 lat (msec) : 2=0.95%, 4=65.50%, 10=33.50% 00:08:19.852 cpu : usr=98.80%, sys=0.15%, ctx=3, majf=0, minf=607 00:08:19.852 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:08:19.852 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:19.852 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:19.852 issued rwts: total=34998,35027,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:19.852 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:19.852 00:08:19.852 Run status group 0 (all jobs): 00:08:19.852 READ: bw=68.3MiB/s (71.6MB/s), 68.3MiB/s-68.3MiB/s (71.6MB/s-71.6MB/s), io=137MiB (143MB), run=2001-2001msec 00:08:19.852 WRITE: bw=68.4MiB/s (71.7MB/s), 68.4MiB/s-68.4MiB/s (71.7MB/s-71.7MB/s), io=137MiB (143MB), run=2001-2001msec 00:08:19.852 ----------------------------------------------------- 00:08:19.852 Suppressions used: 00:08:19.852 count bytes template 00:08:19.852 1 32 /usr/src/fio/parse.c 00:08:19.852 1 8 libtcmalloc_minimal.so 00:08:19.852 ----------------------------------------------------- 00:08:19.852 00:08:19.852 01:21:15 
nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:08:19.852 01:21:15 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:19.852 01:21:15 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:19.852 01:21:15 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:08:19.852 01:21:15 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:19.852 01:21:15 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:08:19.852 01:21:15 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:19.852 01:21:15 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:08:19.852 01:21:15 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:08:19.852 01:21:15 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:08:19.852 01:21:15 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:19.852 01:21:15 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:08:19.852 01:21:15 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:19.852 01:21:15 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:08:19.852 01:21:15 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:08:19.852 01:21:15 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:08:19.852 01:21:15 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:08:19.852 01:21:15 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:19.852 01:21:15 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:08:20.117 01:21:15 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:20.117 01:21:15 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:20.117 01:21:15 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:08:20.117 01:21:15 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:20.117 01:21:15 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:08:20.117 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:20.117 fio-3.35 00:08:20.117 Starting 1 thread 00:08:26.703 00:08:26.703 test: (groupid=0, jobs=1): err= 0: pid=64654: Sat Sep 28 01:21:21 2024 00:08:26.703 read: IOPS=17.9k, BW=70.0MiB/s (73.4MB/s)(140MiB/2001msec) 00:08:26.704 slat (usec): min=3, max=1915, avg= 5.73, stdev=10.57 00:08:26.704 clat (usec): min=351, max=11814, avg=3537.73, stdev=1331.64 00:08:26.704 lat (usec): min=355, max=11860, avg=3543.46, stdev=1333.03 00:08:26.704 clat percentiles (usec): 00:08:26.704 | 1.00th=[ 2057], 5.00th=[ 2245], 10.00th=[ 2343], 20.00th=[ 2507], 00:08:26.704 | 30.00th=[ 2638], 40.00th=[ 2769], 
50.00th=[ 2966], 60.00th=[ 3294], 00:08:26.704 | 70.00th=[ 3916], 80.00th=[ 4752], 90.00th=[ 5538], 95.00th=[ 6259], 00:08:26.704 | 99.00th=[ 7439], 99.50th=[ 7963], 99.90th=[ 9503], 99.95th=[10159], 00:08:26.704 | 99.99th=[11731] 00:08:26.704 bw ( KiB/s): min=65376, max=72784, per=98.04%, avg=70312.00, stdev=4274.70, samples=3 00:08:26.704 iops : min=16344, max=18196, avg=17578.00, stdev=1068.68, samples=3 00:08:26.704 write: IOPS=17.9k, BW=70.1MiB/s (73.5MB/s)(140MiB/2001msec); 0 zone resets 00:08:26.704 slat (nsec): min=3449, max=98250, avg=5864.11, stdev=3228.13 00:08:26.704 clat (usec): min=359, max=11757, avg=3573.29, stdev=1343.94 00:08:26.704 lat (usec): min=363, max=11767, avg=3579.15, stdev=1345.32 00:08:26.704 clat percentiles (usec): 00:08:26.704 | 1.00th=[ 2057], 5.00th=[ 2278], 10.00th=[ 2376], 20.00th=[ 2540], 00:08:26.704 | 30.00th=[ 2671], 40.00th=[ 2802], 50.00th=[ 2999], 60.00th=[ 3359], 00:08:26.704 | 70.00th=[ 4015], 80.00th=[ 4817], 90.00th=[ 5604], 95.00th=[ 6325], 00:08:26.704 | 99.00th=[ 7504], 99.50th=[ 7963], 99.90th=[ 9503], 99.95th=[10421], 00:08:26.704 | 99.99th=[10945] 00:08:26.704 bw ( KiB/s): min=65832, max=72784, per=97.96%, avg=70301.33, stdev=3878.49, samples=3 00:08:26.704 iops : min=16458, max=18196, avg=17575.33, stdev=969.62, samples=3 00:08:26.704 lat (usec) : 500=0.03%, 750=0.02%, 1000=0.02% 00:08:26.704 lat (msec) : 2=0.62%, 4=69.63%, 10=29.64%, 20=0.06% 00:08:26.704 cpu : usr=98.55%, sys=0.25%, ctx=17, majf=0, minf=607 00:08:26.704 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:08:26.704 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:26.704 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:26.704 issued rwts: total=35877,35900,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:26.704 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:26.704 00:08:26.704 Run status group 0 (all jobs): 00:08:26.704 READ: bw=70.0MiB/s (73.4MB/s), 70.0MiB/s-70.0MiB/s (73.4MB/s-73.4MB/s), io=140MiB (147MB), run=2001-2001msec 00:08:26.704 WRITE: bw=70.1MiB/s (73.5MB/s), 70.1MiB/s-70.1MiB/s (73.5MB/s-73.5MB/s), io=140MiB (147MB), run=2001-2001msec 00:08:26.704 ----------------------------------------------------- 00:08:26.704 Suppressions used: 00:08:26.704 count bytes template 00:08:26.704 1 32 /usr/src/fio/parse.c 00:08:26.704 1 8 libtcmalloc_minimal.so 00:08:26.704 ----------------------------------------------------- 00:08:26.704 00:08:26.704 01:21:21 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:08:26.704 01:21:21 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:26.704 01:21:21 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:26.704 01:21:21 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:08:26.704 01:21:21 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:26.704 01:21:21 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:08:26.704 01:21:22 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:26.704 01:21:22 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:08:26.704 01:21:22 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 
/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:08:26.704 01:21:22 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:08:26.704 01:21:22 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:26.704 01:21:22 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:08:26.704 01:21:22 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:26.704 01:21:22 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:08:26.704 01:21:22 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:08:26.704 01:21:22 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:08:26.704 01:21:22 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:08:26.704 01:21:22 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:26.704 01:21:22 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:08:26.704 01:21:22 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:26.704 01:21:22 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:26.704 01:21:22 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:08:26.704 01:21:22 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:26.704 01:21:22 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:08:26.704 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:26.704 fio-3.35 00:08:26.704 Starting 1 thread 00:08:36.668 00:08:36.668 test: (groupid=0, jobs=1): err= 0: pid=64715: Sat Sep 28 01:21:31 2024 00:08:36.668 read: IOPS=24.5k, BW=95.6MiB/s (100MB/s)(191MiB/2001msec) 00:08:36.668 slat (nsec): min=3382, max=71520, avg=4947.13, stdev=2093.78 00:08:36.668 clat (usec): min=232, max=9174, avg=2614.68, stdev=737.65 00:08:36.668 lat (usec): min=236, max=9205, avg=2619.63, stdev=738.97 00:08:36.668 clat percentiles (usec): 00:08:36.668 | 1.00th=[ 1483], 5.00th=[ 2089], 10.00th=[ 2278], 20.00th=[ 2343], 00:08:36.668 | 30.00th=[ 2376], 40.00th=[ 2409], 50.00th=[ 2442], 60.00th=[ 2442], 00:08:36.668 | 70.00th=[ 2474], 80.00th=[ 2540], 90.00th=[ 3064], 95.00th=[ 4490], 00:08:36.668 | 99.00th=[ 5735], 99.50th=[ 6259], 99.90th=[ 6718], 99.95th=[ 6849], 00:08:36.668 | 99.99th=[ 8979] 00:08:36.668 bw ( KiB/s): min=95032, max=98880, per=99.72%, avg=97576.00, stdev=2203.40, samples=3 00:08:36.668 iops : min=23760, max=24720, avg=24394.67, stdev=549.70, samples=3 00:08:36.668 write: IOPS=24.3k, BW=94.9MiB/s (99.6MB/s)(190MiB/2001msec); 0 zone resets 00:08:36.668 slat (nsec): min=3482, max=56508, avg=5182.34, stdev=2001.32 00:08:36.668 clat (usec): min=216, max=9088, avg=2616.15, stdev=731.04 00:08:36.668 lat (usec): min=221, max=9096, avg=2621.34, stdev=732.27 00:08:36.668 clat percentiles (usec): 00:08:36.668 | 1.00th=[ 1516], 5.00th=[ 2114], 10.00th=[ 2311], 20.00th=[ 2343], 00:08:36.668 | 30.00th=[ 2376], 40.00th=[ 2409], 50.00th=[ 2442], 60.00th=[ 2474], 00:08:36.668 | 70.00th=[ 2474], 80.00th=[ 2540], 90.00th=[ 3064], 95.00th=[ 4424], 00:08:36.668 
| 99.00th=[ 5735], 99.50th=[ 6325], 99.90th=[ 6652], 99.95th=[ 7046], 00:08:36.668 | 99.99th=[ 8717] 00:08:36.668 bw ( KiB/s): min=95016, max=99648, per=100.00%, avg=97568.00, stdev=2351.80, samples=3 00:08:36.668 iops : min=23754, max=24912, avg=24392.00, stdev=587.95, samples=3 00:08:36.668 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.06% 00:08:36.668 lat (msec) : 2=3.46%, 4=90.02%, 10=6.43% 00:08:36.668 cpu : usr=99.30%, sys=0.05%, ctx=6, majf=0, minf=606 00:08:36.668 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:08:36.668 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:36.668 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:36.668 issued rwts: total=48949,48633,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:36.668 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:36.668 00:08:36.668 Run status group 0 (all jobs): 00:08:36.668 READ: bw=95.6MiB/s (100MB/s), 95.6MiB/s-95.6MiB/s (100MB/s-100MB/s), io=191MiB (200MB), run=2001-2001msec 00:08:36.668 WRITE: bw=94.9MiB/s (99.6MB/s), 94.9MiB/s-94.9MiB/s (99.6MB/s-99.6MB/s), io=190MiB (199MB), run=2001-2001msec 00:08:36.668 ----------------------------------------------------- 00:08:36.668 Suppressions used: 00:08:36.668 count bytes template 00:08:36.668 1 32 /usr/src/fio/parse.c 00:08:36.668 1 8 libtcmalloc_minimal.so 00:08:36.668 ----------------------------------------------------- 00:08:36.668 00:08:36.668 01:21:32 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:08:36.668 01:21:32 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:08:36.668 00:08:36.668 real 0m28.909s 00:08:36.668 user 0m18.893s 00:08:36.668 sys 0m16.953s 00:08:36.668 01:21:32 nvme.nvme_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:36.668 01:21:32 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:08:36.668 ************************************ 00:08:36.668 END TEST nvme_fio 00:08:36.668 ************************************ 00:08:36.668 ************************************ 00:08:36.668 END TEST nvme 00:08:36.668 ************************************ 00:08:36.668 00:08:36.668 real 1m37.812s 00:08:36.668 user 3m38.241s 00:08:36.668 sys 0m27.246s 00:08:36.668 01:21:32 nvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:36.668 01:21:32 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:36.668 01:21:32 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:08:36.668 01:21:32 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:08:36.668 01:21:32 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:36.668 01:21:32 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:36.668 01:21:32 -- common/autotest_common.sh@10 -- # set +x 00:08:36.668 ************************************ 00:08:36.668 START TEST nvme_scc 00:08:36.668 ************************************ 00:08:36.668 01:21:32 nvme_scc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:08:36.668 * Looking for test storage... 
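All four fio passes above follow one pattern: the loop runs ldd against the fio plugin to find the ASan runtime, preloads it ahead of the SPDK ioengine, and hands fio a PCIe address as the filename. Collapsed into a single command (paths exactly as traced; the traddr uses dots rather than colons because fio treats ':' in filenames as a separator):

LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' \
    /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
        '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096

--bs=4096 is picked per controller: each identify dump is first grepped for 'Extended Data LBA', and since none of these QEMU namespaces matched, the plain 4KiB block size applies.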
00:08:36.668 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:36.668 01:21:32 nvme_scc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:36.668 01:21:32 nvme_scc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:36.668 01:21:32 nvme_scc -- common/autotest_common.sh@1681 -- # lcov --version 00:08:36.668 01:21:32 nvme_scc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:36.668 01:21:32 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:36.668 01:21:32 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:36.668 01:21:32 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:36.668 01:21:32 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:08:36.668 01:21:32 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:08:36.668 01:21:32 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:08:36.669 01:21:32 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:08:36.669 01:21:32 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:08:36.669 01:21:32 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:08:36.669 01:21:32 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:08:36.669 01:21:32 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:36.669 01:21:32 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:08:36.669 01:21:32 nvme_scc -- scripts/common.sh@345 -- # : 1 00:08:36.669 01:21:32 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:36.669 01:21:32 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:36.669 01:21:32 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:08:36.669 01:21:32 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:08:36.669 01:21:32 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:36.669 01:21:32 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:08:36.669 01:21:32 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:08:36.669 01:21:32 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:08:36.669 01:21:32 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:08:36.669 01:21:32 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:36.669 01:21:32 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:08:36.669 01:21:32 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:08:36.669 01:21:32 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:36.669 01:21:32 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:36.669 01:21:32 nvme_scc -- scripts/common.sh@368 -- # return 0 00:08:36.669 01:21:32 nvme_scc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:36.669 01:21:32 nvme_scc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:36.669 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:36.669 --rc genhtml_branch_coverage=1 00:08:36.669 --rc genhtml_function_coverage=1 00:08:36.669 --rc genhtml_legend=1 00:08:36.669 --rc geninfo_all_blocks=1 00:08:36.669 --rc geninfo_unexecuted_blocks=1 00:08:36.669 00:08:36.669 ' 00:08:36.669 01:21:32 nvme_scc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:36.669 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:36.669 --rc genhtml_branch_coverage=1 00:08:36.669 --rc genhtml_function_coverage=1 00:08:36.669 --rc genhtml_legend=1 00:08:36.669 --rc geninfo_all_blocks=1 00:08:36.669 --rc geninfo_unexecuted_blocks=1 00:08:36.669 00:08:36.669 ' 00:08:36.669 01:21:32 nvme_scc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
00:08:36.669 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:36.669 --rc genhtml_branch_coverage=1 00:08:36.669 --rc genhtml_function_coverage=1 00:08:36.669 --rc genhtml_legend=1 00:08:36.669 --rc geninfo_all_blocks=1 00:08:36.669 --rc geninfo_unexecuted_blocks=1 00:08:36.669 00:08:36.669 ' 00:08:36.669 01:21:32 nvme_scc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:36.669 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:36.669 --rc genhtml_branch_coverage=1 00:08:36.669 --rc genhtml_function_coverage=1 00:08:36.669 --rc genhtml_legend=1 00:08:36.669 --rc geninfo_all_blocks=1 00:08:36.669 --rc geninfo_unexecuted_blocks=1 00:08:36.669 00:08:36.669 ' 00:08:36.669 01:21:32 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:08:36.669 01:21:32 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:08:36.669 01:21:32 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:08:36.669 01:21:32 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:08:36.669 01:21:32 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:36.669 01:21:32 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:08:36.669 01:21:32 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:36.669 01:21:32 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:36.669 01:21:32 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:36.669 01:21:32 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.669 01:21:32 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.669 01:21:32 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.669 01:21:32 nvme_scc -- paths/export.sh@5 -- # export PATH 00:08:36.669 01:21:32 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
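The lcov probe above ('lt 1.15 2') runs the generic component-wise comparator from scripts/common.sh: both version strings are split on '.', '-' and ':' and compared field by field until one side wins. A simplified sketch of the traced logic (the real helper also routes non-numeric fields through decimal(), omitted here):

cmp_versions() {
    local -a ver1 ver2
    local op=$2 v a b
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$3"
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        a=${ver1[v]:-0} b=${ver2[v]:-0}
        (( a > b )) && { [[ $op == *'>'* ]]; return; }
        (( a < b )) && { [[ $op == *'<'* ]]; return; }
    done
    [[ $op == *'='* ]]  # every field equal: only <=, >=, == succeed
}
lt() { cmp_versions "$1" '<' "$2"; }
lt 1.15 2   # true: 1 < 2 in the first field

Here the installed lcov reports 1.15, so the comparison succeeds and the pre-2.0 '--rc lcov_branch_coverage=1' spellings echoed above are the ones selected.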
00:08:36.669 01:21:32 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:08:36.669 01:21:32 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:08:36.669 01:21:32 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:08:36.669 01:21:32 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:08:36.669 01:21:32 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:08:36.669 01:21:32 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:08:36.669 01:21:32 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:08:36.669 01:21:32 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:08:36.669 01:21:32 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:08:36.669 01:21:32 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:36.669 01:21:32 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:08:36.669 01:21:32 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:08:36.669 01:21:32 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:08:36.669 01:21:32 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:36.669 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:36.927 Waiting for block devices as requested 00:08:36.927 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:08:36.927 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:08:36.927 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:08:37.188 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:08:42.486 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:08:42.486 01:21:37 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:08:42.486 01:21:37 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:08:42.486 01:21:37 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:08:42.486 01:21:37 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:08:42.486 01:21:37 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:08:42.486 01:21:37 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:08:42.486 01:21:37 nvme_scc -- scripts/common.sh@18 -- # local i 00:08:42.486 01:21:37 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:08:42.486 01:21:37 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:08:42.486 01:21:37 nvme_scc -- scripts/common.sh@27 -- # return 0 00:08:42.486 01:21:37 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:08:42.486 01:21:37 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:08:42.486 01:21:37 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:08:42.486 01:21:37 nvme_scc -- nvme/functions.sh@18 -- # shift 00:08:42.486 01:21:37 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:08:42.486 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.486 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.486 01:21:37 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:08:42.486 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:42.486 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.486 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.486 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:08:42.486 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:08:42.486 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
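The long id-ctrl dump that starts here is nvme_get populating a global associative array: every 'field : value' line printed by nvme-cli becomes a key in nvme0, so later stages can look up ${nvme0[oncs]} or ${nvme0[mdts]} by name instead of re-running identify. A minimal sketch of the traced read loop (the shift/eval plumbing in nvme/functions.sh is longer; the whitespace trimming below is an assumption):

nvme_get() {
    local ref=$1 reg val
    shift
    local -gA "$ref=()"
    while IFS=: read -r reg val; do
        # keep only "field : value" lines, trimming padding around the key
        [[ -n $val ]] && eval "${ref}[${reg//[[:space:]]/}]=\"${val# }\""
    done < <(/usr/local/src/nvme-cli/nvme "$@")
}
nvme_get nvme0 id-ctrl /dev/nvme0
echo "${nvme0[vid]}"   # 0x1b36, matching the first assignment in the trace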
00:08:42.486 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.486 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.486 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:08:42.486 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:08:42.486 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:08:42.486 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.486 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.486 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:08:42.486 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:08:42.486 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:08:42.486 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.486 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.486 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:08:42.486 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:08:42.486 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:08:42.486 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.486 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.487 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:08:42.487 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:08:42.487 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:08:42.487 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.487 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.487 01:21:37 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:08:42.487 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:08:42.487 01:21:37 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:08:42.487 01:21:37 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.487 01:21:38 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
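The decoded registers feed directly into later sizing decisions. For example, the mdts=7 captured above is a power-of-two multiplier on the controller's minimum memory page size, so assuming the usual 4KiB minimum page (CAP.MPSMIN is not shown in this slice of the log), this QEMU controller caps a single transfer at 512KiB:

echo $(( (1 << nvme0[mdts]) * 4096 ))   # 2^7 * 4096 = 524288 bytes = 512KiB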
00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:08:42.487 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:08:42.488 01:21:38 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.488 01:21:38 nvme_scc -- 
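Note on the trace pattern above: functions.sh@21-23 is one tight loop. IFS=: plus "read -r reg val" splits each line of nvme id-ctrl output at the first colon, and the eval stores the value under that register name in a global associative array (the local -gA 'nvme0=()' step earlier in the log). A minimal standalone sketch of that loop, not the verbatim SPDK helper:

  # Sketch: mirror the traced nvme_get loop. "nvme_read nvme0 id-ctrl /dev/nvme0"
  # leaves fields in the global associative array nvme0, e.g. nvme0[tnvmcap]=0.
  # The log invokes the pinned /usr/local/src/nvme-cli/nvme; PATH's nvme is assumed here.
  nvme_read() {
    local ref=$1 subcmd=$2 dev=$3 reg val
    local -gA "$ref=()"                # same global-array declaration the trace shows
    while IFS=: read -r reg val; do
      reg=${reg//[[:space:]]/}         # "tnvmcap   " -> "tnvmcap"
      val=${val# }                     # drop the pad space after the colon
      [[ -n $reg ]] && eval "${ref}[\$reg]=\"\$val\""
    done < <(nvme "$subcmd" "$dev")
  }

Because read assigns the remainder of the line to the last variable, values that themselves contain colons (the subnqn and power-state strings further down) survive intact.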
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:42.488 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.489 01:21:38 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:08:42.489 01:21:38 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@18 -- # shift 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@21 -- 
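The functions.sh@53-57 lines just traced switch from controller to namespace scope: a bash nameref (local -n _ctrl_ns=nvme0_ns) aliases the controller's namespace map, a sysfs glob walks the nvme0n* children, and each hit is fed back through the same reader with id-ns. Roughly, reusing the nvme_read sketch above:

  # Sketch: enumerate a controller's namespaces the way the trace does.
  # usage: list_namespaces /sys/class/nvme/nvme0
  list_namespaces() {
    local ctrl=$1 ns
    for ns in "$ctrl/${ctrl##*/}n"*; do       # e.g. .../nvme0/nvme0n1
      [[ -e $ns ]] || continue                # glob may match nothing
      nvme_read "${ns##*/}" id-ns "/dev/${ns##*/}"
    done
  }

The id-ns fields just captured (nsze/ncap/nuse all 0x140000, i.e. 1,310,720 LBAs) land in nvme0n1[...] exactly as the controller fields did.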
# read -r reg val 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.489 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme0n1[dlfeat]="1"' 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.490 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:08:42.491 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:08:42.491 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:08:42.491 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.491 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.491 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:08:42.491 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:08:42.491 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:08:42.491 01:21:38 
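A worked decode of the LBA-format fields being captured here: lbads is log2 of the data block size, and the low nibble of flbas (0x4 above) selects the active format, matching the "(in use)" tag that shows up on lbaf4 just below. Under those readings:

  # Sketch: recover the active block size from the parsed fields.
  idx=$(( ${nvme0n1[flbas]} & 0xf ))                              # 0x4 -> format 4
  lbads=$(sed -n 's/.*lbads:\([0-9]*\).*/\1/p' <<< "${nvme0n1[lbaf$idx]}")
  echo "active block size: $(( 1 << lbads )) bytes"               # lbads:12 -> 4096

So this QEMU namespace is formatted with 4096-byte data blocks and no metadata (ms:0).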
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.491 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.491 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:08:42.491 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:08:42.491 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:08:42.491 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.491 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.491 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:08:42.491 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:08:42.491 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:08:42.491 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.491 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.491 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:08:42.491 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:08:42.491 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:08:42.491 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.491 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.491 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:08:42.491 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:08:42.491 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:08:42.491 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.491 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.491 01:21:38 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:08:42.491 01:21:38 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:08:42.491 01:21:38 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:08:42.491 01:21:38 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:08:42.491 01:21:38 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:08:42.491 01:21:38 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:08:42.491 01:21:38 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:08:42.491 01:21:38 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:08:42.491 01:21:38 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:08:42.491 01:21:38 nvme_scc -- scripts/common.sh@18 -- # local i 00:08:42.491 01:21:38 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:08:42.491 01:21:38 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:08:42.491 01:21:38 nvme_scc -- scripts/common.sh@27 -- # return 0 00:08:42.491 01:21:38 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:08:42.491 01:21:38 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:08:42.491 01:21:38 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:08:42.491 01:21:38 nvme_scc -- nvme/functions.sh@18 -- # shift 00:08:42.491 01:21:38 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:08:42.491 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.491 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.491 01:21:38 nvme_scc -- 
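With nvme0 fully read, functions.sh@58-63 files it away (ctrls[nvme0], nvmes[nvme0]=nvme0_ns, bdfs[nvme0]=0000:00:11.0, ordered_ctrls[0]=nvme0) and the outer loop moves on to nvme1 at 0000:00:10.0. The scripts/common.sh@18-27 lines are a gate: the PCI address is tested against block/allow lists before the controller is touched, and both lists are plainly empty here, hence the bare-left-side "[[ =~ 0000:00:10.0 ]]" and "[[ -z '' ]]". A sketch of such a gate; the names PCI_BLOCKED and PCI_ALLOWED are an assumption, not read off this log:

  # Sketch, assuming space-separated PCI_BLOCKED/PCI_ALLOWED lists; the trace
  # above only exercises the empty-list fast path (return 0).
  pci_can_use() {
    local pci=$1 addr
    for addr in $PCI_BLOCKED; do
      [[ $addr == "$pci" ]] && return 1       # explicitly blocked
    done
    [[ -z $PCI_ALLOWED ]] && return 0         # no allowlist: everything passes
    for addr in $PCI_ALLOWED; do
      [[ $addr == "$pci" ]] && return 0
    done
    return 1                                  # allowlist set, address not on it
  }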
nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:08:42.491 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:42.491 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.491 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.491 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:08:42.491 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:08:42.491 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:08:42.491 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.491 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.491 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:08:42.491 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:08:42.491 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:08:42.491 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.491 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.491 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:08:42.491 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:08:42.491 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:08:42.491 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.491 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.491 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:08:42.491 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:08:42.491 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:08:42.491 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.491 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.491 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:08:42.491 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:08:42.491 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:08:42.491 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.491 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.491 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:08:42.491 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:08:42.491 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:08:42.491 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.491 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.491 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:08:42.491 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:08:42.491 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:08:42.491 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.491 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.491 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.491 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:08:42.491 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:08:42.491 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.491 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.491 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:42.491 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:08:42.491 01:21:38 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[mdts]=7 00:08:42.491 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.491 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.491 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.491 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:08:42.491 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:08:42.491 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.491 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.491 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:08:42.491 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:08:42.491 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:08:42.491 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.491 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.491 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.491 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:08:42.491 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:08:42.491 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.491 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.491 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.491 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:08:42.491 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:08:42.491 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.491 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.491 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:08:42.491 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:08:42.491 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:08:42.491 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.491 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.491 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:08:42.491 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:08:42.491 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:08:42.491 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.491 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.491 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:08:42.492 
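Two of the nvme1 values just captured decode compactly: ver=0x10400 packs major/minor/tertiary version bytes, and mdts=7 caps a single transfer at 2^7 minimum-sized pages. The 4 KiB CAP.MPSMIN below is an assumption; this log does not show it:

  # Sketch: decode ver and mdts from the fields above.
  ver=$(( 0x10400 ))
  printf 'NVMe %d.%d.%d\n' $(( ver >> 16 )) $(( (ver >> 8) & 0xff )) $(( ver & 0xff ))   # NVMe 1.4.0
  echo "max transfer: $(( (1 << 7) * 4096 / 1024 )) KiB"    # mdts=7 with 4 KiB pages -> 512 KiB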
01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 
00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:08:42.492 01:21:38 nvme_scc -- 
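The thermal thresholds just stored are in kelvins (that is how the spec defines WCTEMP/CCTEMP), so the QEMU defaults work out to:

  echo "warning threshold:  $(( ${nvme1[wctemp]} - 273 )) C"   # 343 K -> 70 C
  echo "critical threshold: $(( ${nvme1[cctemp]} - 273 )) C"   # 373 K -> 100 C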
nvme/functions.sh@21 -- # IFS=: 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.492 01:21:38 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.492 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:08:42.493 01:21:38 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@22 -- 
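The sqes=0x66 and cqes=0x44 captured just above pack the maximum (bits 7:4) and required (bits 3:0) queue-entry sizes as log2 values, so this controller supports only the standard entry sizes:

  sqes=$(( 0x66 )); cqes=$(( 0x44 ))
  echo "SQ entry: $(( 1 << (sqes & 0xf) )) bytes"   # 2^6 = 64
  echo "CQ entry: $(( 1 << (cqes & 0xf) )) bytes"   # 2^4 = 16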
# [[ -n 0 ]] 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:08:42.493 01:21:38 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:08:42.493 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.494 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.494 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.494 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:08:42.494 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:08:42.494 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.494 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.494 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.494 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:08:42.494 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:08:42.494 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.494 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.494 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.494 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:08:42.494 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:08:42.494 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.494 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.494 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.494 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:08:42.494 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:08:42.494 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.494 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.494 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.494 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:08:42.494 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:08:42.494 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.494 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:08:42.494 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.494 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:08:42.494 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:08:42.494 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.494 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.494 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:08:42.494 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:08:42.494 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:08:42.494 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.494 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.494 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:08:42.494 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:08:42.494 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:08:42.494 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.494 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.494 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:08:42.494 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:08:42.494 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:08:42.494 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.494 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.494 01:21:38 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:08:42.494 01:21:38 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:08:42.494 01:21:38 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:08:42.494 01:21:38 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:08:42.494 01:21:38 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:08:42.494 01:21:38 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:08:42.494 01:21:38 nvme_scc -- nvme/functions.sh@18 -- # shift 00:08:42.494 01:21:38 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:08:42.494 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.494 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.494 01:21:38 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:08:42.494 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:42.494 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.494 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.494 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:08:42.494 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:08:42.494 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:08:42.494 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.494 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.494 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:08:42.494 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:08:42.494 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # 
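The block above is the tail of one nvme_get call: functions.sh@16 runs /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 and the loop at @21-@23 splits each "reg : val" output line on the colon, then stores every non-empty value into a global associative array named after the controller. A minimal sketch of that pattern, reconstructed from the trace (the real functions.sh may trim and quote slightly differently):

#!/usr/bin/env bash
# Sketch of the nvme_get pattern traced above: parse "field : value"
# lines from nvme-cli into a global associative array named by $1.
nvme_get() {
	local ref=$1 reg val
	shift
	local -gA "$ref=()"                       # e.g. declares global assoc array nvme1
	while IFS=: read -r reg val; do
		reg=${reg//[[:space:]]/}              # "ps 0 " -> "ps0", as seen in the trace
		val=${val#"${val%%[![:space:]]*}"}    # strip leading whitespace from the value
		[[ -n $reg && -n $val ]] && eval "${ref}[\$reg]=\$val"
	done < <("$@")                            # "$@" is the nvme-cli invocation
}

# Usage mirroring the trace (requires nvme-cli and the device):
#   nvme_get nvme1 /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1
#   echo "${nvme1[sqes]}"   # -> 0x66 for the QEMU controller traced here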
00:08:42.494 01:21:38 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns
00:08:42.494 01:21:38 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:08:42.494 01:21:38 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]]
00:08:42.494 01:21:38 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1
00:08:42.494 01:21:38 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1
00:08:42.494 01:21:38 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1
00:08:42.494 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a
00:08:42.494 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a
00:08:42.494 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a
00:08:42.494 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14
00:08:42.494 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7
00:08:42.494 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7
00:08:42.494 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3
00:08:42.494 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f
00:08:42.494 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0
00:08:42.494 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0
00:08:42.494 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0
00:08:42.494 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0
00:08:42.494 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1
00:08:42.494 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0
00:08:42.494 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0
00:08:42.494 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0
00:08:42.494 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0
00:08:42.494 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0
00:08:42.495 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0
00:08:42.495 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0
00:08:42.495 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0
00:08:42.495 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0
00:08:42.495 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0
00:08:42.495 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0
00:08:42.495 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0
00:08:42.495 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0
00:08:42.495 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128
00:08:42.495 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128
00:08:42.495 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127
00:08:42.495 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0
00:08:42.495 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0
00:08:42.495 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0
00:08:42.495 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0
00:08:42.495 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0
00:08:42.495 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000
00:08:42.495 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000
00:08:42.495 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 '
00:08:42.495 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 '
00:08:42.495 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 '
00:08:42.495 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:08:42.495 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 '
00:08:42.495 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:08:42.495 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:08:42.495 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)'
00:08:42.495 01:21:38 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1
00:08:42.495 01:21:38 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1
00:08:42.495 01:21:38 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns
00:08:42.495 01:21:38 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0
00:08:42.495 01:21:38 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1
00:08:42.495 01:21:38 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:08:42.495 01:21:38 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]]
00:08:42.495 01:21:38 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0
00:08:42.495 01:21:38 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0
00:08:42.495 01:21:38 nvme_scc -- scripts/common.sh@18 -- # local i
00:08:42.495 01:21:38 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]]
00:08:42.495 01:21:38 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]]
00:08:42.495 01:21:38 nvme_scc -- scripts/common.sh@27 -- # return 0
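For nvme1n1 above, flbas=0x7 selects lbaf7 ('ms:64 lbads:12 rp:0 (in use)'): 2^12 = 4096-byte data blocks plus 64 bytes of metadata each, and nsze=0x17a17a (1,548,666) blocks. A hypothetical decode of those array entries, not part of functions.sh itself (FLBAS bits 3:0 are the format index; extended-metadata bits are ignored here):

# Decode the in-use LBA format from the nvme1n1 array filled in above.
lbaf_index=$(( ${nvme1n1[flbas]} & 0xf ))        # 0x7 -> 7
lbaf=${nvme1n1[lbaf${lbaf_index}]}               # 'ms:64 lbads:12 rp:0 (in use)'
lbads=$(grep -o 'lbads:[0-9]*' <<< "$lbaf" | cut -d: -f2)
block_size=$(( 1 << lbads ))                     # 2^12 = 4096 bytes
blocks=$(( ${nvme1n1[nsze]} ))                   # 0x17a17a = 1548666
echo "$(( blocks * block_size )) bytes of data"  # ~6.3 GB, + 64 B metadata/block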
00:08:42.495 01:21:38 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2
00:08:42.495 01:21:38 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2
00:08:42.496 01:21:38 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2
00:08:42.496 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36
00:08:42.496 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4
00:08:42.496 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 '
00:08:42.496 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl '
00:08:42.496 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 '
00:08:42.496 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6
00:08:42.496 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400
00:08:42.496 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0
00:08:42.496 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7
00:08:42.496 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0
00:08:42.496 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400
00:08:42.496 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0
00:08:42.496 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0
00:08:42.496 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100
00:08:42.496 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000
00:08:42.496 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0
00:08:42.496 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1
00:08:42.496 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000
00:08:42.496 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0
00:08:42.496 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0
00:08:42.496 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0
00:08:42.496 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0
00:08:42.496 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0
00:08:42.496 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0
00:08:42.496 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a
00:08:42.496 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3
00:08:42.496 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3
00:08:42.497 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3
00:08:42.497 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7
00:08:42.497 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0
00:08:42.497 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0
00:08:42.497 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0
00:08:42.497 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[apsta]=0
00:08:42.497 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343
00:08:42.497 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373
00:08:42.497 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:08:42.497 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:08:42.497 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.497 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.497 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.497 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:08:42.497 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:08:42.497 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.497 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.497 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.497 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:08:42.497 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:08:42.497 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.497 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.497 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.497 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:08:42.497 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:08:42.497 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.497 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.497 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.497 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:08:42.497 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:08:42.497 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.497 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.497 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.497 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:08:42.497 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:08:42.497 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.497 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.497 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.497 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:08:42.497 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:08:42.497 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.497 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.497 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.497 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:08:42.497 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:08:42.497 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.497 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.497 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.497 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:08:42.497 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:08:42.497 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.497 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.497 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.497 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:08:42.497 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:08:42.497 01:21:38 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:08:42.497 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.497 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.497 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:08:42.497 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:08:42.497 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.497 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.497 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.497 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:08:42.497 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:08:42.497 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.497 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.497 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.497 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:08:42.497 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:08:42.497 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.497 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.497 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.497 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:08:42.497 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:08:42.497 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.497 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.497 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.497 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:08:42.497 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:08:42.497 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.497 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.497 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.497 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:08:42.497 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:08:42.497 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.497 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.497 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.497 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:08:42.497 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:08:42.497 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.497 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.497 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.497 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:08:42.497 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:08:42.497 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.497 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.497 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.497 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:08:42.497 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:08:42.497 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.497 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.497 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:08:42.497 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:08:42.497 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:08:42.497 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.497 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.497 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.497 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:08:42.497 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:08:42.497 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.497 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.497 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.497 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:08:42.497 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:08:42.497 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.497 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.497 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.497 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:08:42.497 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:08:42.497 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.497 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.497 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.497 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:08:42.497 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:08:42.498 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.498 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.498 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.498 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:08:42.498 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:08:42.498 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.498 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.498 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:08:42.498 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:08:42.498 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:08:42.498 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.498 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.498 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:08:42.498 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:08:42.498 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:08:42.498 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.498 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.498 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.498 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:08:42.498 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:08:42.498 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.498 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.498 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:08:42.498 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:08:42.498 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 
00:08:42.498 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d
00:08:42.498 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0
00:08:42.498 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0
00:08:42.498 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7
00:08:42.498 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0
00:08:42.498 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0
00:08:42.498 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0
00:08:42.498 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0
00:08:42.498 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0
00:08:42.498 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3
00:08:42.498 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1
00:08:42.498 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0
00:08:42.498 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0
00:08:42.498 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0
00:08:42.498 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342
00:08:42.498 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0
00:08:42.498 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0
00:08:42.498 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0
00:08:42.498 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0
00:08:42.498 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0
00:08:42.498 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0
00:08:42.498 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
00:08:42.498 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-'
00:08:42.498 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=-
00:08:42.498 01:21:38 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns
00:08:42.498 01:21:38 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:08:42.498 01:21:38 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]]
00:08:42.498 01:21:38 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1
00:08:42.498 01:21:38 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1
00:08:42.499 01:21:38 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1
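What this trace shows: nvme_get in nvme/functions.sh runs nvme-cli (id-ctrl or id-ns) against a device, reads the output line by line as "register : value" pairs, and mirrors each pair into a global bash associative array named after the device (nvme2, nvme2n1, ...). A minimal sketch of the same technique, assuming nvme-cli's default human-readable output of one "reg : value" pair per line as the trace shows; parse_id is a hypothetical stand-in for the real helper:

    # Sketch: capture `nvme id-ctrl` output into a global associative array.
    # parse_id is illustrative, not the real nvme_get.
    parse_id() {
        local dev=$1 ref=$2 reg val
        declare -gA "$ref=()"              # e.g. nvme2=()
        while IFS=: read -r reg val; do
            reg=${reg%%[[:space:]]*}       # strip padding after the name
            val=${val# }                   # drop the space following ':'
            [[ -n $val ]] && eval "$ref[\$reg]=\$val"
        done < <(/usr/local/src/nvme-cli/nvme id-ctrl "$dev")
    }
    parse_id /dev/nvme2 nvme2 && echo "${nvme2[oncs]}"   # -> 0x15d

Splitting on the first ':' only is what lets values that themselves contain colons, such as subnqn=nqn.2019-08.org.qemu:12342, come through intact.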
00:08:42.499 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000
00:08:42.499 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000
00:08:42.499 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000
00:08:42.499 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14
00:08:42.499 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7
00:08:42.499 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4
00:08:42.499 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3
00:08:42.499 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f
00:08:42.499 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0
00:08:42.499 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0
00:08:42.499 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0
00:08:42.499 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0
00:08:42.499 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1
00:08:42.499 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0
00:08:42.499 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0
00:08:42.499 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0
00:08:42.499 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0
00:08:42.499 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0
00:08:42.499 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0
00:08:42.499 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0
00:08:42.499 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0
00:08:42.499 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0
00:08:42.499 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0
00:08:42.499 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0
00:08:42.499 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0
00:08:42.499 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0
00:08:42.499 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128
00:08:42.499 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128
00:08:42.499 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127
00:08:42.499 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0
00:08:42.500 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0
00:08:42.500 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0
00:08:42.500 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0
00:08:42.500 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0
00:08:42.500 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000
00:08:42.500 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000
00:08:42.500 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 '
00:08:42.500 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 '
00:08:42.500 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 '
00:08:42.500 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:08:42.500 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:08:42.500 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:08:42.500 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:08:42.500 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 '
00:08:42.500 01:21:38 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1
00:08:42.500 01:21:38 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:08:42.500 01:21:38 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]]
00:08:42.500 01:21:38 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2
00:08:42.500 01:21:38 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2
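The geometry just captured for nvme2n1 pins down its usable capacity: flbas=0x4 selects LBA format 4, whose descriptor reads "ms:0 lbads:12", i.e. 2^12 = 4096-byte logical blocks with no metadata, and nsze=0x100000 logical blocks. Checking that arithmetic (values taken from the trace above; the snippet itself is only illustrative):

    # Capacity of nvme2n1 from the id-ns fields above.
    nsze=$((0x100000))                     # 1048576 logical blocks
    lbads=12                               # lbaf4 "lbads:12"
    bytes=$(( nsze * (1 << lbads) ))       # 1048576 * 4096
    echo "$bytes bytes = $(( bytes >> 30 )) GiB"   # 4294967296 bytes = 4 GiB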
00:08:42.500 01:21:38 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2
00:08:42.500 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000
00:08:42.500 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000
00:08:42.500 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000
00:08:42.500 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14
00:08:42.500 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7
00:08:42.500 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4
00:08:42.500 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3
00:08:42.500 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f
00:08:42.500 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0
00:08:42.500 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0
00:08:42.501 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0
00:08:42.501 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0
00:08:42.501 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1
00:08:42.501 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0
00:08:42.501 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0
00:08:42.501 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0
00:08:42.501 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0
00:08:42.501 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0
00:08:42.501 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0
00:08:42.501 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0
00:08:42.501 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0
00:08:42.501 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0
00:08:42.501 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0
00:08:42.501 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0
00:08:42.501 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0
00:08:42.501 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0
00:08:42.501 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128
00:08:42.501 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128
00:08:42.501 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127
00:08:42.501 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0
00:08:42.501 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0
00:08:42.501 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0
00:08:42.501 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0
00:08:42.501 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0
00:08:42.501 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000
00:08:42.501 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000
00:08:42.501 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 '
00:08:42.501 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 '
00:08:42.501 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 '
00:08:42.501 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 '
00:08:42.501 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:08:42.502 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 '
00:08:42.502 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 '
00:08:42.502 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 '
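Each id-ns dump ends with a bookkeeping line of the form _ctrl_ns[${ns##*n}]=<ns_dev> (see the next trace block). The ${ns##*n} expansion strips the longest prefix matching *n from the sysfs path, leaving only the namespace number, so the indexed array maps namespace index to the name of that namespace's register array. Illustrated on a path from this run:

    ns=/sys/class/nvme/nvme2/nvme2n2
    echo "${ns##*n}"                # -> 2 (longest "*n" prefix removed)
    # hence _ctrl_ns[2]=nvme2n2: namespace 2 -> array nvme2n2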
00:08:42.502 01:21:38 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2
00:08:42.502 01:21:38 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:08:42.502 01:21:38 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]]
00:08:42.502 01:21:38 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3
00:08:42.502 01:21:38 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3
00:08:42.502 01:21:38 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3
00:08:42.502 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000
00:08:42.502 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000
00:08:42.502 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000
00:08:42.502 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14
00:08:42.502 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7
00:08:42.502 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4
00:08:42.502 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3
00:08:42.502 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f
00:08:42.502 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0
00:08:42.502 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0
00:08:42.502 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0
00:08:42.502 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0
00:08:42.502 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1
00:08:42.502 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0
00:08:42.502 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0
00:08:42.502 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0
00:08:42.502 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0
00:08:42.502 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0
00:08:42.502 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0
00:08:42.502 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0
00:08:42.502 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0
00:08:42.502 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0
00:08:42.502 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0
00:08:42.502 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0
00:08:42.502 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0
00:08:42.502 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0
00:08:42.503 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128
00:08:42.503 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128
00:08:42.503 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127
00:08:42.503 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0
00:08:42.503 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0
00:08:42.503 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0
00:08:42.503 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0
00:08:42.503 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0
00:08:42.503 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000
00:08:42.503 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000
00:08:42.503 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 '
00:08:42.503 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 '
00:08:42.503 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 '
00:08:42.503 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 '
00:08:42.503 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:08:42.503 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 '
00:08:42.503 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 '
00:08:42.503 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 '
00:08:42.503 01:21:38 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3
00:08:42.503 01:21:38 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2
00:08:42.503 01:21:38 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns
00:08:42.503 01:21:38 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0
00:08:42.503 01:21:38 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2
00:08:42.503 01:21:38 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:08:42.503 01:21:38 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]]
00:08:42.503 01:21:38 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0
00:08:42.503 01:21:38 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0
00:08:42.503 01:21:38 nvme_scc -- scripts/common.sh@18 -- # local i
00:08:42.503 01:21:38 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]]
00:08:42.503 01:21:38 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]]
00:08:42.503 01:21:38 nvme_scc -- scripts/common.sh@27 -- # return 0
00:08:42.503 01:21:38 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3
00:08:42.503 01:21:38 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3
00:08:42.503 01:21:38 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3
00:08:42.503 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36
00:08:42.503 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4
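With its last namespace recorded, nvme2 is filed under four parallel maps: ctrls (device name), nvmes (name of its namespace array), bdfs (PCI address 0000:00:12.0) and ordered_ctrls (position by controller index). The scan then moves to nvme3 at 0000:00:13.0, where pci_can_use decides whether the BDF may be touched; the [[ =~ 0000:00:13.0 ]] test with an empty left-hand side followed by [[ -z '' ]] shows that neither an allow-list nor a block-list is set in this run, so the check returns 0. A rough sketch of such a filter, using SPDK-style PCI_ALLOWED/PCI_BLOCKED names but with semantics that should be treated as approximate, not as the exact scripts/common.sh logic:

    # Approximate allow/block-list filter in the spirit of pci_can_use.
    # Both lists are empty in this run, so every BDF is accepted.
    pci_can_use() {
        local bdf=$1
        [[ $PCI_ALLOWED =~ $bdf ]] && return 0   # explicitly allowed
        [[ -z $PCI_ALLOWED ]] || return 1        # allow-list set, bdf not on it
        [[ $PCI_BLOCKED =~ $bdf ]] && return 1   # explicitly blocked
        return 0
    }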
nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:08:42.503 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:08:42.503 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.503 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.503 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:08:42.503 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:08:42.503 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:08:42.503 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.503 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.503 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:08:42.503 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:08:42.503 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:08:42.503 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.503 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.503 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:08:42.503 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:08:42.503 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:08:42.503 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.503 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.503 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:08:42.503 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 
00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
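The wall of `IFS=:` / `read -r reg val` / `eval` records above and below is SPDK's nvme/functions.sh capturing `nvme id-ctrl` output field by field into a per-controller bash associative array (`nvme3` at this point in the log). Stripped of the xtrace noise, the pattern is a plain read loop; the following is a minimal sketch under the assumption of nvme-cli's `field : value` output format, not the literal functions.sh code:

```bash
#!/usr/bin/env bash
# Sketch of the nvme_get pattern traced here: split each "reg : val"
# line of `nvme id-ctrl` on the colon and store it in an associative
# array keyed by register name.
declare -A ctrl

while IFS=: read -r reg val; do
    [[ -n $val ]] || continue    # same guard as the [[ -n ... ]] records
    reg=${reg//[[:space:]]/}     # drop the padding around the field name
    ctrl[$reg]=${val# }          # keep the value, minus one leading space
done < <(nvme id-ctrl /dev/nvme3)

echo "vid=${ctrl[vid]} sn=${ctrl[sn]} oncs=${ctrl[oncs]}"
```

The single-quoted `eval 'nvme3[vid]="0x1b36"'` form in the real trace exists because the array name itself is computed per controller, and values with spaces (`mn`, the `lbaf*` descriptors) survive because they are re-quoted inside the eval. An `lbaf` entry such as `ms:0 lbads:12 rp:0 (in use)` decodes to 2^12 = 4096-byte data blocks with no separate metadata, the same format the simple-copy test later reports as `Namespace Block Size:4096`.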
00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme3[npss]="0"' 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.504 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.505 
01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme3[hmminds]="0"' 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:08:42.505 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.506 01:21:38 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
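Of everything captured for nvme3, the harness only really consumes one value later: `oncs=0x15d`, the Optional NVM Command Support word recorded a few records back. Bit 8 of ONCS advertises the Copy (simple copy) command, and the `ctrl_has_scc` walk further down keeps exactly the controllers where that bit is set. The check is a one-line arithmetic test; `oncs` below is seeded with the value from this trace:

```bash
oncs=0x15d                     # 0b1_0101_1101 -> bit 8 (0x100) is set
if (( oncs & 1 << 8 )); then   # the ctrl_has_scc test seen in the trace
    echo "controller supports the NVMe Copy command"
fi
```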
00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:08:42.506 01:21:38 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:08:42.506 01:21:38 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:08:42.506 
01:21:38 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:08:42.506 01:21:38 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:08:42.507 01:21:38 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:08:42.507 01:21:38 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:08:42.507 01:21:38 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:08:42.507 01:21:38 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:08:42.507 01:21:38 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:08:42.507 01:21:38 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:08:42.507 01:21:38 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 00:08:42.507 01:21:38 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:08:42.507 01:21:38 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:08:42.507 01:21:38 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:08:42.507 01:21:38 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:08:42.507 01:21:38 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:08:42.507 01:21:38 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:08:42.507 01:21:38 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:08:42.507 01:21:38 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3 00:08:42.507 01:21:38 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:08:42.507 01:21:38 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2 00:08:42.507 01:21:38 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs 00:08:42.507 01:21:38 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2 00:08:42.507 01:21:38 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2 00:08:42.507 01:21:38 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs 00:08:42.507 01:21:38 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:08:42.507 01:21:38 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:08:42.507 01:21:38 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:08:42.507 01:21:38 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:08:42.507 01:21:38 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:08:42.507 01:21:38 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:08:42.507 01:21:38 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:08:42.507 01:21:38 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2 00:08:42.507 01:21:38 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 )) 00:08:42.507 01:21:38 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1 00:08:42.507 01:21:38 nvme_scc -- nvme/functions.sh@209 -- # return 0 00:08:42.507 01:21:38 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:08:42.507 01:21:38 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:08:42.507 01:21:38 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:42.824 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:43.390 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:08:43.390 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:08:43.390 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:08:43.390 0000:00:12.0 (1b36 
0010): nvme -> uio_pci_generic
00:08:43.390 01:21:39 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:08:43.390 01:21:39 nvme_scc -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:08:43.390 01:21:39 nvme_scc -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:43.390 01:21:39 nvme_scc -- common/autotest_common.sh@10 -- # set +x
00:08:43.390 ************************************
00:08:43.390 START TEST nvme_simple_copy
00:08:43.390 ************************************
00:08:43.391 01:21:39 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:08:43.648 Initializing NVMe Controllers
00:08:43.648 Attaching to 0000:00:10.0
00:08:43.648 Controller supports SCC. Attached to 0000:00:10.0
00:08:43.648 Namespace ID: 1 size: 6GB
00:08:43.648 Initialization complete.
00:08:43.648
00:08:43.648 Controller QEMU NVMe Ctrl (12340 )
00:08:43.648 Controller PCI vendor:6966 PCI subsystem vendor:6900
00:08:43.648 Namespace Block Size:4096
00:08:43.648 Writing LBAs 0 to 63 with Random Data
00:08:43.648 Copied LBAs from 0 - 63 to the Destination LBA 256
00:08:43.648 LBAs matching Written Data: 64
00:08:43.648
00:08:43.648 real 0m0.251s
00:08:43.648 user 0m0.089s
00:08:43.648 sys 0m0.062s
00:08:43.649 01:21:39 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:43.649 01:21:39 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x
00:08:43.649 ************************************
00:08:43.649 END TEST nvme_simple_copy
00:08:43.649 ************************************
00:08:43.649
00:08:43.649 real 0m7.441s
00:08:43.649 user 0m0.970s
00:08:43.649 sys 0m1.372s
00:08:43.649 01:21:39 nvme_scc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:43.649 01:21:39 nvme_scc -- common/autotest_common.sh@10 -- # set +x
00:08:43.649 ************************************
00:08:43.649 END TEST nvme_scc
00:08:43.649 ************************************
00:08:43.649 01:21:39 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]]
00:08:43.649 01:21:39 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]]
00:08:43.649 01:21:39 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]]
00:08:43.649 01:21:39 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]]
00:08:43.649 01:21:39 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh
00:08:43.649 01:21:39 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:08:43.649 01:21:39 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:43.649 01:21:39 -- common/autotest_common.sh@10 -- # set +x
00:08:43.907 ************************************
00:08:43.907 START TEST nvme_fdp
00:08:43.907 ************************************
00:08:43.907 01:21:39 nvme_fdp -- common/autotest_common.sh@1125 -- # test/nvme/nvme_fdp.sh
00:08:43.907 * Looking for test storage...
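With nvme1 selected, nvme_simple_copy above is a complete round trip through the Simple Copy command: write LBAs 0-63 with random data, issue one copy with destination LBA 256, read back and count matches. `LBAs matching Written Data: 64` is the pass condition, and the `real`/`user`/`sys` triplets are just bash `time` output from the run_test wrapper. The test drives the controller through SPDK's userspace PCIe driver, but recent nvme-cli releases expose the same command against a kernel-owned namespace; the invocation below is an assumed equivalent (flag names taken from current nvme-cli documentation, so verify with `nvme copy --help` on your build; note `--blocks` takes a zero-based count, and the device path is hypothetical):

```bash
# Copy LBAs 0..63 (NLB 63 => 64 blocks) to destination LBA 256 on a
# kernel-visible namespace -- what simple_copy did here via SPDK instead.
nvme copy /dev/nvme0n1 --slbs=0 --blocks=63 --sdlba=256
```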
00:08:43.907 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:43.907 01:21:39 nvme_fdp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:43.907 01:21:39 nvme_fdp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:43.907 01:21:39 nvme_fdp -- common/autotest_common.sh@1681 -- # lcov --version 00:08:43.907 01:21:39 nvme_fdp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:43.907 01:21:39 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:43.907 01:21:39 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:43.907 01:21:39 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:43.907 01:21:39 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-: 00:08:43.907 01:21:39 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1 00:08:43.907 01:21:39 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-: 00:08:43.907 01:21:39 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2 00:08:43.907 01:21:39 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<' 00:08:43.907 01:21:39 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2 00:08:43.907 01:21:39 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1 00:08:43.907 01:21:39 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:43.907 01:21:39 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in 00:08:43.907 01:21:39 nvme_fdp -- scripts/common.sh@345 -- # : 1 00:08:43.907 01:21:39 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:43.907 01:21:39 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:43.907 01:21:39 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:08:43.907 01:21:39 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:08:43.907 01:21:39 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:43.907 01:21:39 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:08:43.907 01:21:39 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:08:43.907 01:21:39 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:08:43.907 01:21:39 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:08:43.907 01:21:39 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:43.907 01:21:39 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:08:43.907 01:21:39 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:08:43.907 01:21:39 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:43.907 01:21:39 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:43.907 01:21:39 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:08:43.907 01:21:39 nvme_fdp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:43.907 01:21:39 nvme_fdp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:43.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:43.907 --rc genhtml_branch_coverage=1 00:08:43.907 --rc genhtml_function_coverage=1 00:08:43.907 --rc genhtml_legend=1 00:08:43.907 --rc geninfo_all_blocks=1 00:08:43.907 --rc geninfo_unexecuted_blocks=1 00:08:43.907 00:08:43.907 ' 00:08:43.907 01:21:39 nvme_fdp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:43.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:43.907 --rc genhtml_branch_coverage=1 00:08:43.907 --rc genhtml_function_coverage=1 00:08:43.907 --rc genhtml_legend=1 00:08:43.907 --rc geninfo_all_blocks=1 00:08:43.907 --rc geninfo_unexecuted_blocks=1 00:08:43.907 00:08:43.907 ' 00:08:43.907 01:21:39 nvme_fdp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
00:08:43.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:43.907 --rc genhtml_branch_coverage=1 00:08:43.907 --rc genhtml_function_coverage=1 00:08:43.907 --rc genhtml_legend=1 00:08:43.907 --rc geninfo_all_blocks=1 00:08:43.907 --rc geninfo_unexecuted_blocks=1 00:08:43.907 00:08:43.907 ' 00:08:43.907 01:21:39 nvme_fdp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:43.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:43.907 --rc genhtml_branch_coverage=1 00:08:43.907 --rc genhtml_function_coverage=1 00:08:43.907 --rc genhtml_legend=1 00:08:43.907 --rc geninfo_all_blocks=1 00:08:43.907 --rc geninfo_unexecuted_blocks=1 00:08:43.907 00:08:43.907 ' 00:08:43.907 01:21:39 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:08:43.907 01:21:39 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:08:43.907 01:21:39 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:08:43.907 01:21:39 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:08:43.907 01:21:39 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:43.907 01:21:39 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:08:43.907 01:21:39 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:43.907 01:21:39 nvme_fdp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:43.907 01:21:39 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:43.907 01:21:39 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.907 01:21:39 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.907 01:21:39 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.907 01:21:39 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:08:43.907 01:21:39 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
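The nvme_fdp preamble above does two bits of environment setup: the `lt 1.15 2` / `cmp_versions 1.15 '<' 2` trace decides how to drive lcov by splitting both version strings on `IFS=.-:` and comparing them field by field, and paths/export.sh prepends the Go/protoc/golangci toolchains to PATH. The version comparison condenses to a short loop; this sketch covers only the strictly-less-than case exercised in the trace (the real scripts/common.sh also validates that each field is decimal and handles the other operators):

```bash
#!/usr/bin/env bash
# Compare dotted version strings the way scripts/common.sh's
# cmp_versions does: split on . - : and compare field by field.
version_lt() {
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1   # equal is not "less than"
}

version_lt 1.15 2 && echo "1.15 < 2"   # matches the trace's return 0
```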
00:08:43.908 01:21:39 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:08:43.908 01:21:39 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:08:43.908 01:21:39 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:08:43.908 01:21:39 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:08:43.908 01:21:39 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:08:43.908 01:21:39 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:08:43.908 01:21:39 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:08:43.908 01:21:39 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:08:43.908 01:21:39 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:08:43.908 01:21:39 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:43.908 01:21:39 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:44.166 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:44.425 Waiting for block devices as requested 00:08:44.425 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:08:44.425 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:08:44.425 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:08:44.683 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:08:50.004 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:08:50.004 01:21:45 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:08:50.004 01:21:45 nvme_fdp -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:08:50.004 01:21:45 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:08:50.004 01:21:45 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:08:50.004 01:21:45 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:08:50.004 01:21:45 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:08:50.004 01:21:45 nvme_fdp -- scripts/common.sh@18 -- # local i 00:08:50.004 01:21:45 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:08:50.004 01:21:45 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:08:50.004 01:21:45 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:08:50.004 01:21:45 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:08:50.004 01:21:45 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:08:50.004 01:21:45 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:08:50.004 01:21:45 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:08:50.004 01:21:45 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:08:50.004 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.004 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.004 01:21:45 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:08:50.004 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:50.004 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.004 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.004 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:08:50.004 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:08:50.004 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:08:50.004 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.004 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.004 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 
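`setup.sh reset` above rebinds the four QEMU NVMe functions from uio_pci_generic back to the kernel nvme driver, and deliberately skips 0000:00:03.0, whose virtio disk hosts mounted filesystems. SPDK's script layers device-safety checks on top, but the underlying mechanism is the standard PCI sysfs interface; a bare-bones sketch for one device, with the BDF taken from the trace:

```bash
#!/usr/bin/env bash
# Move one PCI function back to the kernel nvme driver via sysfs --
# the raw mechanism behind setup.sh's "uio_pci_generic -> nvme" lines.
bdf=0000:00:11.0
echo "$bdf" > "/sys/bus/pci/devices/$bdf/driver/unbind"
echo nvme   > "/sys/bus/pci/devices/$bdf/driver_override"
echo "$bdf" > /sys/bus/pci/drivers_probe
echo        > "/sys/bus/pci/devices/$bdf/driver_override"  # clear override
```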
00:08:50.004 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:08:50.004 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:08:50.004 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.004 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.004 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:08:50.004 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:08:50.004 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:08:50.004 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.004 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.004 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:08:50.004 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:08:50.004 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:08:50.004 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.004 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.004 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:08:50.004 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:08:50.004 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:08:50.004 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.004 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.004 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:08:50.004 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:08:50.004 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:08:50.004 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.004 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.004 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:08:50.004 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:08:50.004 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:08:50.004 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.004 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.004 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.004 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:08:50.004 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:08:50.004 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.004 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.004 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:50.004 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:08:50.004 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:08:50.004 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.004 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.005 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.005 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:08:50.005 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:08:50.005 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.005 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.005 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:08:50.005 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:08:50.005 01:21:45 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:08:50.005 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.005 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.005 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.005 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:08:50.005 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:08:50.005 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.005 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.005 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.005 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:08:50.005 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:08:50.005 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.005 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.005 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:08:50.005 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:08:50.005 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:08:50.005 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.005 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.005 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:08:50.005 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:08:50.005 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:08:50.005 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.005 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.005 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.005 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:08:50.005 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:08:50.005 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.005 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.005 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:50.005 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:08:50.005 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:08:50.005 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.005 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.005 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:08:50.005 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:08:50.005 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:08:50.005 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.005 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.005 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.005 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:08:50.005 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:08:50.005 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.005 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.005 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.005 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:08:50.005 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:08:50.005 01:21:45 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.005 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.005 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.005 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:08:50.005 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:08:50.005 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.005 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.005 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.005 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:08:50.005 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:08:50.005 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.005 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.005 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.005 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:08:50.005 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:08:50.005 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.005 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.005 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.005 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:08:50.005 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:08:50.005 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.005 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.005 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:08:50.005 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:08:50.005 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:08:50.005 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.005 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.005 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:08:50.005 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:08:50.005 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:08:50.005 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.005 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.005 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:08:50.005 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:08:50.005 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:08:50.005 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.005 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.005 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:50.005 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:08:50.005 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:08:50.005 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.005 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.005 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:50.005 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:08:50.005 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:08:50.005 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.005 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.005 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
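[editor's note] The trace above (and below) is one loop in nvme/functions.sh repeating per register: nvme_get runs nvme-cli's id-ctrl/id-ns against a device, splits each "field : value" output line on ':' via IFS, and eval's the pair into a global associative array (nvme0, nvme0n1, ...). The following is a minimal standalone sketch of that parse loop, not the script itself: it mirrors the calls visible in the trace (local -gA, IFS=:, read -r reg val, the [[ -n $val ]] guard, the /usr/local/src/nvme-cli/nvme path) but substitutes a bash nameref for the script's eval and simplifies whitespace handling, so treat names and trimming details as assumptions.

    #!/usr/bin/env bash
    # Sketch of the nvme_get parse loop exercised in this trace.
    # Assumes nvme-cli plain-text output of the form "field : value".
    nvme_get() {
        local ref=$1 cmd=$2 dev=$3 reg val
        local -gA "$ref=()"            # e.g. creates the global array nvme0, as in the trace
        local -n _ctrl=$ref            # nameref stand-in for the script's eval-based assignment

        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue        # skip lines that did not split into a pair
            reg=${reg//[[:space:]]/}         # field names are padded; they have no inner spaces
            val=${val# }                     # drop the space nvme-cli prints after the colon
            _ctrl[$reg]=$val                 # values keep trailing padding, e.g. sn='12341 '
        done < <(/usr/local/src/nvme-cli/nvme "$cmd" "$dev")
    }

    # Usage matching the trace: nvme_get nvme0 id-ctrl /dev/nvme0
    # afterwards ${nvme0[mdts]}, ${nvme0[oacs]}, ... hold the reported values.

The same loop is reused for namespaces (nvme_get nvme0n1 id-ns /dev/nvme0n1), which is why the identical IFS=: / read -r cadence repeats for every field in the dump that follows.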
00:08:50.005 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:08:50.005 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:08:50.005 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.005 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.005 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.005 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:08:50.005 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:08:50.005 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.005 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.005 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.005 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:08:50.005 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:08:50.005 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.005 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.005 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.005 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:08:50.005 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:08:50.005 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.005 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.005 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:08:50.005 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:08:50.005 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:08:50.005 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.005 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.005 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:08:50.005 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:08:50.005 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:08:50.005 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.005 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.005 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.005 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:08:50.005 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:08:50.006 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.006 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.006 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.006 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:08:50.006 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:08:50.006 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.006 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.006 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.006 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:08:50.006 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:08:50.006 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.006 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.006 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.006 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:08:50.006 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:08:50.006 01:21:45 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:08:50.006 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.006 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.006 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:08:50.006 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:08:50.006 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.006 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.006 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.006 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:08:50.006 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:08:50.006 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.006 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.006 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.006 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:08:50.006 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:08:50.006 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.006 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.006 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.006 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:08:50.006 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:08:50.006 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.006 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.006 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.006 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:08:50.006 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:08:50.006 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.006 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.006 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.006 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:08:50.006 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:08:50.006 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.006 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.006 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.006 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:08:50.006 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:08:50.006 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.006 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.006 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.006 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:08:50.006 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:08:50.006 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.006 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.006 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.006 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:08:50.006 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:08:50.006 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.006 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.006 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.006 01:21:45 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:08:50.006 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:08:50.006 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.006 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.006 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.006 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:08:50.006 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:08:50.006 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.006 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.006 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.006 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:08:50.006 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:08:50.006 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.006 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.006 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.006 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:08:50.006 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:08:50.006 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.006 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.006 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.006 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:08:50.006 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:08:50.006 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.006 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.006 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.006 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:08:50.006 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:08:50.006 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.006 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.006 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.006 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:08:50.006 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:08:50.006 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.006 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.006 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.006 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:08:50.006 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:08:50.006 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.006 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.006 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.006 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:08:50.006 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:08:50.006 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.006 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.006 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.006 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:08:50.006 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:08:50.006 01:21:45 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:08:50.006 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.006 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.006 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:08:50.006 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:08:50.006 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.006 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.006 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.006 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:08:50.006 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:08:50.006 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.006 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.006 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:08:50.006 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:08:50.006 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:08:50.006 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.006 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.006 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:08:50.006 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:08:50.006 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 
0x7 ]] 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:08:50.007 01:21:45 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:08:50.007 
01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.007 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:08:50.008 01:21:45 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.008 01:21:45 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme0n1[npwa]="0"' 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:08:50.008 01:21:45 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:08:50.008 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.009 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.009 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:08:50.009 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:08:50.009 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:08:50.009 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.009 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.009 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:08:50.009 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:08:50.009 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:08:50.009 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.009 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.009 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:08:50.009 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:08:50.009 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:08:50.009 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.009 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.009 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:08:50.009 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:08:50.009 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:08:50.009 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.009 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.009 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:08:50.009 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:08:50.009 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:08:50.009 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.009 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.009 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:08:50.009 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:08:50.009 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:08:50.009 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.009 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.009 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:12 rp:0 (in use) ]] 00:08:50.009 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:08:50.009 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:08:50.009 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.009 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.009 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:08:50.009 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:08:50.009 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:08:50.009 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.009 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.009 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:08:50.009 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:08:50.009 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:08:50.009 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.009 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.009 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:08:50.009 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:08:50.009 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:08:50.009 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.009 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.009 01:21:45 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:08:50.009 01:21:45 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:08:50.009 01:21:45 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:08:50.009 01:21:45 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:08:50.009 01:21:45 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:08:50.009 01:21:45 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:08:50.009 01:21:45 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:08:50.009 01:21:45 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:08:50.009 01:21:45 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:08:50.009 01:21:45 nvme_fdp -- scripts/common.sh@18 -- # local i 00:08:50.009 01:21:45 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:08:50.009 01:21:45 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:08:50.009 01:21:45 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:08:50.009 01:21:45 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:08:50.009 01:21:45 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:08:50.009 01:21:45 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:08:50.009 01:21:45 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:08:50.009 01:21:45 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:08:50.009 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.009 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.009 01:21:45 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:08:50.009 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:50.009 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:08:50.009 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.009 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:08:50.009 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:08:50.009 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:08:50.009 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.009 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.009 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:08:50.009 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:08:50.009 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:08:50.009 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.009 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.009 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:08:50.009 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:08:50.009 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:08:50.009 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.009 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.009 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:08:50.009 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:08:50.009 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:08:50.009 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.009 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.009 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:08:50.009 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:08:50.009 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:08:50.009 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.009 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.009 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:08:50.009 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:08:50.009 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:08:50.009 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.009 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.009 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:08:50.009 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:08:50.009 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:08:50.009 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.009 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.009 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.009 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:08:50.009 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:08:50.009 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.009 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.009 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:50.009 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:08:50.009 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:08:50.009 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.009 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.009 01:21:45 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.009 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:08:50.009 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:08:50.009 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.009 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.009 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:08:50.010 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:08:50.010 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:08:50.010 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.010 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.010 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.010 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:08:50.010 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:08:50.010 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.010 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.010 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.010 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:08:50.010 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:08:50.010 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.010 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.010 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:08:50.010 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:08:50.010 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:08:50.010 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.010 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.010 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:08:50.010 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:08:50.010 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:08:50.010 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.010 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.010 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.010 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:08:50.010 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:08:50.010 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.010 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.010 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:50.010 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:08:50.010 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:08:50.010 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.010 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.010 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:08:50.010 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:08:50.010 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:08:50.010 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.010 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.010 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.010 
01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1 id-ctrl fields (continued): crdt1=0 crdt2=0 crdt3=0 nvmsr=0 vwci=0 mec=0 oacs=0x12a acl=3 aerl=3 frmw=0x3
00:08:50.010 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # lpa=0x7 elpe=0 npss=0 avscc=0 apsta=0 wctemp=343 cctemp=373 mtfa=0 hmpre=0 hmmin=0
00:08:50.010 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # tnvmcap=0 unvmcap=0 rpmbs=0 edstt=0 dsto=0 fwug=0 kas=0 hctma=0 mntmt=0 mxtmt=0
00:08:50.011 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # sanicap=0 hmminds=0 hmmaxd=0 nsetidmax=0 endgidmax=0 anatt=0 anacap=0 anagrpmax=0 nanagrpid=0 pels=0
00:08:50.011 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # domainid=0 megcap=0 sqes=0x66 cqes=0x44 maxcmd=0 nn=256 oncs=0x15d fuses=0 fna=0 vwc=0x7
00:08:50.011 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # awun=0 awupf=0 icsvscc=0 nwpc=0 acwu=0 ocfs=0x3 sgls=0x1 mnan=0 maxdna=0 maxcna=0
00:08:50.012 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # subnqn=nqn.2019-08.org.qemu:12340 ioccsz=0 iorcsz=0 icdoff=0 fcatt=0 msdbd=0 ofcs=0
00:08:50.012 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' rwt='0 rwl:0 idle_power:- active_power:-' active_power_workload='-'
00:08:50.012 01:21:45 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns
00:08:50.012 01:21:45 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:08:50.012 01:21:45 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]]
00:08:50.012 01:21:45 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1
00:08:50.012 01:21:45 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1
00:08:50.012 01:21:45 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1
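The arrays above are filled by the nvme_get helper whose @16-@23 trace repeats throughout this run: it pipes nvme-cli output into a read loop, splits each "field : value" line on ':', and evals the pair into a global associative array named by the first argument. A minimal sketch reconstructed from that trace follows; the whitespace-trimming details are assumptions, not the verbatim SPDK source.

    #!/usr/bin/env bash
    # Sketch of nvme_get as traced above (functions.sh@16-@23); trimming
    # details are assumptions, not the verbatim SPDK implementation.
    shopt -s extglob

    nvme_get() {
        local ref=$1 reg val                 # @17
        shift                                # @18
        local -gA "$ref=()"                  # @20: e.g. global nvme1=()
        while IFS=: read -r reg val; do      # @21: split "reg : val" lines
            reg=${reg%%*( )}                 # assumed: strip key padding
            val=${val##*( )}                 # assumed: strip leading blanks
            [[ -n $val ]] || continue        # @22: keep populated fields only
            eval "${ref}[${reg}]=\"\$val\""  # @23: nvme1[crdt1]="0", ...
        done < <(/usr/local/src/nvme-cli/nvme "$@")   # @16
    }

    nvme_get nvme1 id-ctrl /dev/nvme1
    echo "${nvme1[subnqn]}"                  # nqn.2019-08.org.qemu:12340 above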
00:08:50.012 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1 id-ns fields: nsze=0x17a17a ncap=0x17a17a nuse=0x17a17a nsfeat=0x14 nlbaf=7 flbas=0x7 mc=0x3 dpc=0x1f dps=0 nmic=0
00:08:50.012 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0
00:08:50.013 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0
00:08:50.013 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:08:50.013 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0'
00:08:50.013 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # lbaf4='ms:0 lbads:12 rp:0' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0 (in use)'
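In the digest above, flbas=0x7 selects lbaf7, whose lbads:12 means 2^12 = 4096-byte data blocks with 64 bytes of metadata, which is why lbaf7 carries the "(in use)" marker. A small sketch decoding that from the arrays this log populates (the FLBAS low-nibble rule follows the NVMe base spec):

    # Decode the in-use LBA format from the values logged above.
    flbas=${nvme1n1[flbas]}                    # 0x7
    fmt=$(( flbas & 0xf ))                     # FLBAS bits 3:0 -> index 7
    lbaf=${nvme1n1[lbaf$fmt]}                  # 'ms:64 lbads:12 rp:0 (in use)'
    ms=${lbaf#ms:};         ms=${ms%% *}       # -> 64 metadata bytes
    lbads=${lbaf##*lbads:}; lbads=${lbads%% *} # -> 12
    echo "in-use format lbaf$fmt: $((1 << lbads))-byte blocks, ${ms}B metadata"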
00:08:50.014 01:21:45 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1
00:08:50.014 01:21:45 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1
00:08:50.014 01:21:45 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns
00:08:50.014 01:21:45 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0
00:08:50.014 01:21:45 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1
00:08:50.014 01:21:45 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:08:50.014 01:21:45 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]]
00:08:50.014 01:21:45 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0
00:08:50.014 01:21:45 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0
00:08:50.014 01:21:45 nvme_fdp -- scripts/common.sh@18 -- # local i
00:08:50.014 01:21:45 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]]
00:08:50.014 01:21:45 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]]
00:08:50.014 01:21:45 nvme_fdp -- scripts/common.sh@27 -- # return 0
00:08:50.014 01:21:45 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2
00:08:50.014 01:21:45 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2
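The @47-@63 lines above are one pass of the enumeration loop: iterate /sys/class/nvme/nvme*, filter by pci_can_use (scripts/common.sh), run nvme_get for the controller and each of its namespaces, and record the results in ctrls/nvmes/bdfs/ordered_ctrls. A sketch reconstructed from that trace; the PCI-address derivation and the pci_can_use stub are placeholders, not the SPDK originals.

    # Sketch of the discovery loop traced above (functions.sh@47-@63).
    declare -A ctrls nvmes bdfs
    declare -a ordered_ctrls
    pci_can_use() { return 0; }  # stand-in; the real allow/block-list
                                 # check is scripts/common.sh@18-@27 above

    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue                        # @48
        pci=$(basename "$(readlink -f "$ctrl/device")")   # assumed source of @49's pci=
        pci_can_use "$pci" || continue                    # @50
        ctrl_dev=${ctrl##*/}                              # @51: nvme1, nvme2, ...
        nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"     # @52
        declare -gA "${ctrl_dev}_ns=()"
        unset -n _ctrl_ns; declare -n _ctrl_ns=${ctrl_dev}_ns   # @53
        for ns in "$ctrl/${ctrl##*/}n"*; do               # @54
            [[ -e $ns ]] || continue                      # @55
            ns_dev=${ns##*/}                              # @56: nvme1n1, ...
            nvme_get "$ns_dev" id-ns "/dev/$ns_dev"       # @57
            _ctrl_ns[${ns##*n}]=$ns_dev                   # @58
        done
        ctrls["$ctrl_dev"]=$ctrl_dev                      # @60
        nvmes["$ctrl_dev"]=${ctrl_dev}_ns                 # @61
        bdfs["$ctrl_dev"]=$pci                            # @62
        ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev        # @63
    done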
01:21:45 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:08:50.014 01:21:45 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:08:50.014 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.014 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.014 01:21:45 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:08:50.014 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:50.014 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.014 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.014 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:08:50.014 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:08:50.014 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:08:50.014 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.014 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.014 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:08:50.014 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:08:50.014 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:08:50.014 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.014 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.014 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:08:50.014 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:08:50.014 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:08:50.014 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.014 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.014 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:08:50.014 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:08:50.014 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:08:50.014 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.014 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.014 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:08:50.014 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:08:50.014 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:08:50.014 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.014 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.014 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:08:50.014 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:08:50.014 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:08:50.014 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.014 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.014 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:08:50.014 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:08:50.014 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:08:50.014 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.014 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.014 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.014 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:08:50.014 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:08:50.014 01:21:45 nvme_fdp 
-- nvme/functions.sh@21 -- # IFS=: 00:08:50.014 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.014 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:50.014 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:08:50.014 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:08:50.014 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.014 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.014 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.014 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:08:50.014 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:08:50.014 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.014 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.014 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:08:50.014 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:08:50.014 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:08:50.014 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.014 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.014 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.014 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:08:50.014 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:08:50.014 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.014 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.014 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.014 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:08:50.014 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:08:50.014 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.014 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.014 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:08:50.014 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:08:50.014 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:08:50.014 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.014 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.014 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:08:50.014 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:08:50.014 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:08:50.014 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.014 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.014 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.014 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:08:50.014 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:08:50.014 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.014 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.014 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:50.014 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:08:50.014 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:08:50.014 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.014 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.014 01:21:45 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:08:50.014 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:08:50.014 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:08:50.014 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.014 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.014 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.014 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:08:50.014 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:08:50.014 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.014 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.014 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.014 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:08:50.014 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:08:50.014 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.014 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.014 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.014 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:08:50.014 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[aerl]="3"' 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.015 01:21:45 nvme_fdp 
-- nvme/functions.sh@21 -- # read -r reg val 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 
00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:08:50.015 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.016 01:21:45 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:08:50.016 01:21:45 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
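
[editor's note] Two of the controller values captured just above are worth decoding. `sqes=0x66` and `cqes=0x44` pack the maximum (bits 7:4) and required (bits 3:0) queue-entry sizes as log2 byte counts, and `oncs=0x15d` is the Optional NVM Command Support bitmask; the bit names below follow my reading of the NVMe base spec, so treat them as a hedged decode rather than authoritative output of this test.

    # Decode sqes/cqes: each nibble is log2(entry size in bytes).
    decode_qes() {
        local v=$(( $1 ))
        printf 'required=%dB max=%dB\n' $(( 1 << (v & 0xf) )) $(( 1 << (v >> 4 & 0xf) ))
    }
    decode_qes 0x66   # SQ entries: required=64B max=64B
    decode_qes 0x44   # CQ entries: required=16B max=16B

    # Decode oncs=0x15d; bit names per the NVMe base spec's ONCS field.
    oncs=0x15d
    names=(compare write_uncor dsm write_zeroes save_select reservations timestamp verify copy)
    for i in "${!names[@]}"; do
        (( oncs >> i & 1 )) && echo "ONCS bit $i: ${names[i]} supported"
    done

Bit 8 (copy) being set is consistent with the `ocfs=0x3` field a few registers later, which, if I read the spec correctly, advertises copy descriptor formats 0h and 1h.
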
00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.016 01:21:45 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:08:50.016 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:08:50.017 01:21:45 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
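
[editor's note] Just before the `nsze` reads above, the trace switched from controller to namespace scope: `for ns in "$ctrl/${ctrl##*/}n"*` globs `/sys/class/nvme/nvme2/nvme2n1`, `nvme2n2`, ... and each existing entry is handed to `nvme_get <name> id-ns /dev/<name>`. A standalone sketch of that enumeration, assuming only the sysfs layout visible in the trace:

    # Enumerate one controller's namespaces the way the trace does:
    # glob the controller's sysfs dir for <ctrl>n* children.
    ctrl=/sys/class/nvme/nvme2
    for ns in "$ctrl/${ctrl##*/}n"*; do
        [[ -e $ns ]] || continue      # the glob may match nothing
        ns_dev=${ns##*/}              # e.g. nvme2n1
        echo "would run: nvme id-ns /dev/$ns_dev"
    done

Note the trace invokes a pinned build at /usr/local/src/nvme-cli/nvme rather than whatever `nvme` is on PATH; the sketch only echoes the command it would run.
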
00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 
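
[editor's note] The size fields captured above for nvme2n1 (`nsze=ncap=nuse=0x100000`, `flbas=0x4`) only become bytes once combined with the LBA-format table the trace prints a little further down: with just 8 formats defined (`nlbaf=7`), the low nibble of `flbas` is the format index, selecting `lbaf4`, whose `lbads:12` means 2^12 = 4096-byte blocks, so the namespace is 0x100000 x 4096 = 4 GiB. A small sketch of that arithmetic, with inputs copied from this trace:

    # Turn id-ns fields into a capacity figure; nsze/flbas from the
    # nvme2n1 values above, lbads from the lbaf4 entry further down.
    nsze=0x100000; flbas=0x4; lbads=12
    fmt_index=$(( flbas & 0xf ))      # -> 4, i.e. lbaf4
    block_bytes=$(( 1 << lbads ))     # -> 4096
    total=$(( nsze * block_bytes ))
    echo "lbaf$fmt_index, ${block_bytes}B blocks, $(( total >> 30 )) GiB"
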
00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.017 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.018 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:08:50.018 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:08:50.018 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.018 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.018 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.018 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:08:50.018 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:08:50.018 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.018 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.018 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:50.018 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:08:50.018 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:08:50.018 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.018 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.018 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:50.018 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:08:50.018 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:08:50.018 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.018 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.018 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:08:50.018 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:08:50.018 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:08:50.018 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.018 01:21:45 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.018 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.018 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:08:50.018 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:08:50.018 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.018 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.018 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.018 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:08:50.018 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:08:50.018 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.018 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.018 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.018 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:08:50.018 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:08:50.018 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.018 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.018 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.018 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:08:50.018 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:08:50.018 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.018 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.018 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.018 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:08:50.018 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:08:50.018 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.018 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.018 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:08:50.018 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:08:50.018 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:08:50.018 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.018 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.018 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:08:50.018 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:08:50.018 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:08:50.018 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.018 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.018 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:08:50.018 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:08:50.018 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:08:50.018 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.018 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.018 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:08:50.018 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:08:50.018 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 
' 00:08:50.018 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.018 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.018 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:08:50.018 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:08:50.018 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:08:50.018 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.018 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.018 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:08:50.018 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:08:50.018 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:08:50.018 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.018 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.018 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:08:50.018 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:08:50.018 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:08:50.018 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.018 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.018 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:08:50.018 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:08:50.018 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:08:50.018 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.018 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.018 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:08:50.018 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:08:50.018 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:08:50.018 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.018 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.018 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:08:50.018 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:08:50.018 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:08:50.018 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.018 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.018 01:21:45 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:08:50.018 01:21:45 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:08:50.018 01:21:45 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:08:50.018 01:21:45 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:08:50.018 01:21:45 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:08:50.018 01:21:45 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:08:50.018 01:21:45 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:08:50.018 01:21:45 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:08:50.018 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.018 01:21:45 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:08:50.018 01:21:45 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:08:50.018 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:50.018 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.018 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.018 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:50.018 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:08:50.018 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:08:50.018 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.018 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.018 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:50.018 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:08:50.018 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:08:50.018 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.018 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.018 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:50.018 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:08:50.018 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:08:50.018 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.019 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.019 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:08:50.019 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:08:50.019 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:08:50.019 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.019 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.019 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:50.019 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:08:50.019 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:08:50.019 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.019 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.019 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:08:50.019 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:08:50.019 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:08:50.019 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.019 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.019 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:50.019 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:08:50.019 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:08:50.019 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.019 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.019 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:08:50.019 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:08:50.019 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:08:50.019 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.019 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.019 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.019 01:21:45 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:08:50.019 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:08:50.019 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.019 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.019 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.019 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:08:50.019 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:08:50.019 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.019 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.019 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.019 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:08:50.019 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:08:50.019 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.019 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.019 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.019 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:08:50.019 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:08:50.019 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.019 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.019 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:50.019 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:08:50.019 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:08:50.019 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.019 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.019 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.019 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:08:50.019 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:08:50.019 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.019 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.019 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.019 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:08:50.019 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:08:50.019 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.019 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.019 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.019 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:08:50.019 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:08:50.019 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.019 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.019 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.019 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:08:50.019 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:08:50.019 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.019 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.019 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.019 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:08:50.019 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:08:50.019 01:21:45 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:08:50.019 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.019 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.019 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:08:50.019 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:08:50.019 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.019 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.019 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.019 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:08:50.019 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:08:50.019 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.019 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.019 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.019 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:08:50.019 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:08:50.019 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.019 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.019 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.019 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:08:50.019 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:08:50.019 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.019 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.019 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.019 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:08:50.019 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:08:50.019 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.019 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.019 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.019 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:08:50.019 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:08:50.019 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.019 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.019 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.019 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:08:50.019 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:08:50.019 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.019 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.019 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.019 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:08:50.019 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:08:50.019 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.019 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.019 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:50.019 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:08:50.019 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:08:50.019 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.019 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.019 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ 
-n 128 ]] 00:08:50.019 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:08:50.019 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:08:50.019 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.019 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.019 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:08:50.019 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:9 rp:0 ]] 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@54 -- # for 
ns in "$ctrl/${ctrl##*/}n"* 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[mc]=0x3 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:08:50.020 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
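
[editor's note] After each namespace is parsed, the trace files it back into the controller's map with `_ctrl_ns[${ns##*n}]=nvme2n1` (then n2, n3): the greedy `${ns##*n}` expansion strips everything through the last "n", leaving just the namespace number as the array key. A one-liner demo of that expansion:

    # The key extraction used by _ctrl_ns[...] in the trace: greedy
    # strip through the last 'n' leaves the namespace index.
    for ns_dev in nvme2n1 nvme2n2 nvme2n3; do
        echo "key=${ns_dev##*n} value=$ns_dev"   # keys 1, 2, 3
    done

So the trace ends up with one associative array per namespace (nvme2n1..nvme2n3) plus a numeric index map (`nvme2_ns`), letting later helpers look a namespace up by number.
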
00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:08:50.021 
01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[nguid]=00000000000000000000000000000000 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:08:50.021 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:08:50.022 01:21:45 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:08:50.022 01:21:45 nvme_fdp -- scripts/common.sh@18 -- # local i 00:08:50.022 01:21:45 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:08:50.022 01:21:45 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:08:50.022 01:21:45 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:08:50.022 01:21:45 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.022 01:21:45 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.022 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:08:50.023 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:08:50.023 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.023 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.023 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.023 
01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:08:50.023 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:08:50.023 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.023 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.023 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.023 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:08:50.023 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:08:50.023 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.023 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.023 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.023 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:08:50.023 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:08:50.023 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.023 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.023 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:08:50.023 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:08:50.023 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:08:50.023 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.023 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.023 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:08:50.023 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:08:50.023 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:08:50.023 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.023 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.023 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:08:50.023 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:08:50.023 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:08:50.023 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.023 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.023 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:50.023 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:08:50.023 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:08:50.023 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.023 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.023 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:50.023 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:08:50.023 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:08:50.023 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.023 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.023 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.023 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:08:50.023 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:08:50.023 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.023 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.023 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.023 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:08:50.023 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:08:50.023 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:08:50.023 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.023 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.023 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:08:50.023 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:08:50.023 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.023 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.023 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.023 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:08:50.023 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:08:50.023 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.023 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.023 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:08:50.023 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:08:50.023 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:08:50.023 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.023 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.023 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:08:50.023 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:08:50.023 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:08:50.023 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.023 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.023 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.023 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:08:50.023 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:08:50.023 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.023 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.023 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.023 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:08:50.023 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:08:50.023 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.023 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.023 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.023 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:08:50.023 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:08:50.023 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.023 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.023 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.023 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:08:50.023 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:08:50.023 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.023 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.023 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.023 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:08:50.023 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:08:50.023 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.023 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.023 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.023 01:21:45 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:08:50.023 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:08:50.023 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.023 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.023 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.023 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:08:50.023 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:08:50.023 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.023 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.023 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.023 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:08:50.023 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:08:50.023 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.023 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.023 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.023 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:08:50.023 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:08:50.023 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.023 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.023 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.023 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:08:50.023 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:08:50.023 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.023 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.023 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.023 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:08:50.023 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:08:50.023 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.023 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.023 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.023 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:08:50.023 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:08:50.023 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.023 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.023 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.023 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:08:50.023 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:08:50.023 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.023 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.023 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.024 
01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.024 01:21:45 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 
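[Editor's sketch] Two of the hex fields captured just above pack a pair of power-of-two sizes into one byte: for sqes and cqes (0x66 and 0x44 in this dump) the low nibble is the required queue entry size and the high nibble the maximum, each as an exponent of 2. A quick check of this run's values:

    sqes=0x66 cqes=0x44
    printf 'SQ entry: min %d B, max %d B\n' $((1 << (sqes & 0xf))) $((1 << (sqes >> 4)))
    printf 'CQ entry: min %d B, max %d B\n' $((1 << (cqes & 0xf))) $((1 << (cqes >> 4)))
    # prints 64 B both ways for SQEs and 16 B for CQEs -- the standard NVMe sizes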
00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:08:50.024 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.025 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.025 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.025 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:08:50.025 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:08:50.025 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.025 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.025 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.025 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:08:50.025 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:08:50.025 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.025 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.025 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.025 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:08:50.025 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:08:50.025 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.025 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.025 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.025 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:08:50.025 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:08:50.025 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.025 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.025 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:50.025 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:08:50.025 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:08:50.025 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.025 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.025 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:08:50.025 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:08:50.025 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:08:50.025 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.025 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.025 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:08:50.025 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:08:50.025 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:08:50.025 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.025 01:21:45 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:08:50.025 01:21:45 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:08:50.025 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:08:50.025 01:21:45 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:08:50.025 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:50.025 01:21:45 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:50.025 01:21:45 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:08:50.025 01:21:45 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:08:50.025 01:21:45 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:08:50.025 01:21:45 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:08:50.025 01:21:45 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:08:50.025 01:21:45 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:08:50.025 01:21:45 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:08:50.025 01:21:45 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:08:50.025 01:21:45 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:08:50.025 01:21:45 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:08:50.025 01:21:45 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:08:50.025 01:21:45 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:08:50.025 01:21:45 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:08:50.025 01:21:45 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:08:50.025 01:21:45 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:08:50.025 01:21:45 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:08:50.025 01:21:45 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:08:50.025 01:21:45 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:08:50.025 01:21:45 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:08:50.025 01:21:45 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:08:50.025 01:21:45 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:08:50.025 01:21:45 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:08:50.025 01:21:45 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:08:50.025 01:21:45 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:08:50.025 01:21:45 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:08:50.025 01:21:45 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:08:50.025 01:21:45 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:08:50.025 01:21:45 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:08:50.025 01:21:45 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:08:50.025 01:21:45 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:08:50.025 01:21:45 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:08:50.025 01:21:45 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:08:50.025 01:21:45 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:08:50.025 01:21:45 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:08:50.025 01:21:45 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:08:50.025 01:21:45 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:08:50.025 01:21:45 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 
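[Editor's sketch] The feature probe running here reduces to a single bit test: ctrl_has_fdp reads each controller's cached ctratt value and checks bit 19, the FDP-supported flag of the Identify Controller CTRATT field. Replayed in isolation with the two values this run reports (0x8000 for nvme0/nvme1/nvme2, 0x88010 for nvme3):

    for ctratt in 0x8000 0x88010; do
        if (( ctratt & 1 << 19 )); then    # bit 19 == 0x80000, FDP support
            printf '%#x: FDP capable\n' "$ctratt"
        else
            printf '%#x: no FDP\n' "$ctratt"
        fi
    done

Only 0x88010 has the bit set, which is why nvme3 is the lone controller echoed back to nvme_fdp.sh below.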
00:08:50.025 01:21:45 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:08:50.025 01:21:45 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:08:50.025 01:21:45 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:08:50.025 01:21:45 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:08:50.025 01:21:45 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:08:50.025 01:21:45 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:08:50.025 01:21:45 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:08:50.025 01:21:45 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:08:50.025 01:21:45 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:08:50.025 01:21:45 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:08:50.025 01:21:45 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:08:50.025 01:21:45 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:08:50.025 01:21:45 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:08:50.025 01:21:45 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:08:50.025 01:21:45 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:08:50.025 01:21:45 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:08:50.025 01:21:45 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:08:50.025 01:21:45 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:08:50.025 01:21:45 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:08:50.025 01:21:45 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:08:50.025 01:21:45 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:08:50.025 01:21:45 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:08:50.025 01:21:45 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:08:50.025 01:21:45 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:08:50.025 01:21:45 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:08:50.025 01:21:45 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:08:50.025 01:21:45 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:08:50.025 01:21:45 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:08:50.025 01:21:45 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:08:50.025 01:21:45 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:08:50.025 01:21:45 nvme_fdp -- nvme/functions.sh@207 -- # (( 1 > 0 )) 00:08:50.025 01:21:45 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:08:50.025 01:21:45 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:08:50.025 01:21:45 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:08:50.025 01:21:45 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:08:50.025 01:21:45 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:50.283 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:50.847 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:08:50.847 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:08:50.847 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:08:50.847 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:08:50.847 01:21:46 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:08:50.847 01:21:46 nvme_fdp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:08:50.847 01:21:46 
nvme_fdp -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:50.847 01:21:46 nvme_fdp -- common/autotest_common.sh@10 -- # set +x
00:08:50.847 ************************************
00:08:50.847 START TEST nvme_flexible_data_placement
00:08:50.847 ************************************
00:08:50.847 01:21:46 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0'
00:08:51.105 Initializing NVMe Controllers
00:08:51.105 Attaching to 0000:00:13.0
00:08:51.105 Controller supports FDP Attached to 0000:00:13.0
00:08:51.105 Namespace ID: 1 Endurance Group ID: 1
00:08:51.105 Initialization complete.
00:08:51.105
00:08:51.105 ==================================
00:08:51.105 == FDP tests for Namespace: #01 ==
00:08:51.105 ==================================
00:08:51.105
00:08:51.105 Get Feature: FDP:
00:08:51.105 =================
00:08:51.105 Enabled: Yes
00:08:51.105 FDP configuration Index: 0
00:08:51.105
00:08:51.105 FDP configurations log page
00:08:51.105 ===========================
00:08:51.105 Number of FDP configurations: 1
00:08:51.105 Version: 0
00:08:51.105 Size: 112
00:08:51.105 FDP Configuration Descriptor: 0
00:08:51.105 Descriptor Size: 96
00:08:51.105 Reclaim Group Identifier format: 2
00:08:51.105 FDP Volatile Write Cache: Not Present
00:08:51.105 FDP Configuration: Valid
00:08:51.105 Vendor Specific Size: 0
00:08:51.105 Number of Reclaim Groups: 2
00:08:51.105 Number of Reclaim Unit Handles: 8
00:08:51.105 Max Placement Identifiers: 128
00:08:51.105 Number of Namespaces Supported: 256
00:08:51.105 Reclaim Unit Nominal Size: 6000000 bytes
00:08:51.105 Estimated Reclaim Unit Time Limit: Not Reported
00:08:51.105 RUH Desc #000: RUH Type: Initially Isolated
00:08:51.105 RUH Desc #001: RUH Type: Initially Isolated
00:08:51.105 RUH Desc #002: RUH Type: Initially Isolated
00:08:51.106 RUH Desc #003: RUH Type: Initially Isolated
00:08:51.106 RUH Desc #004: RUH Type: Initially Isolated
00:08:51.106 RUH Desc #005: RUH Type: Initially Isolated
00:08:51.106 RUH Desc #006: RUH Type: Initially Isolated
00:08:51.106 RUH Desc #007: RUH Type: Initially Isolated
00:08:51.106
00:08:51.106 FDP reclaim unit handle usage log page
00:08:51.106 ======================================
00:08:51.106 Number of Reclaim Unit Handles: 8
00:08:51.106 RUH Usage Desc #000: RUH Attributes: Controller Specified
00:08:51.106 RUH Usage Desc #001: RUH Attributes: Unused
00:08:51.106 RUH Usage Desc #002: RUH Attributes: Unused
00:08:51.106 RUH Usage Desc #003: RUH Attributes: Unused
00:08:51.106 RUH Usage Desc #004: RUH Attributes: Unused
00:08:51.106 RUH Usage Desc #005: RUH Attributes: Unused
00:08:51.106 RUH Usage Desc #006: RUH Attributes: Unused
00:08:51.106 RUH Usage Desc #007: RUH Attributes: Unused
00:08:51.106
00:08:51.106 FDP statistics log page
00:08:51.106 =======================
00:08:51.106 Host bytes with metadata written: 1087586304
00:08:51.106 Media bytes with metadata written: 1087680512
00:08:51.106 Media bytes erased: 0
00:08:51.106
00:08:51.106 FDP Reclaim unit handle status
00:08:51.106 ==============================
00:08:51.106 Number of RUHS descriptors: 2
00:08:51.106 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x00000000000012cc
00:08:51.106 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000
00:08:51.106
00:08:51.106 FDP write on placement id: 0 success
00:08:51.106
00:08:51.106 Set Feature: Enabling FDP events on Placement handle: #0 Success
00:08:51.106
00:08:51.106 IO mgmt send: RUH update for Placement ID: #0 Success
00:08:51.106
00:08:51.106 Get Feature: FDP Events for Placement handle: #0
00:08:51.106 ========================
00:08:51.106 Number of FDP Events: 6
00:08:51.106 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes
00:08:51.106 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes
00:08:51.106 FDP Event: #2 Type: Ctrlr Reset Modified RUHs Enabled: Yes
00:08:51.106 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes
00:08:51.106 FDP Event: #4 Type: Media Reallocated Enabled: No
00:08:51.106 FDP Event: #5 Type: Implicitly modified RUH Enabled: No
00:08:51.106
00:08:51.106 FDP events log page
00:08:51.106 ===================
00:08:51.106 Number of FDP events: 1
00:08:51.106 FDP Event #0:
00:08:51.106 Event Type: RU Not Written to Capacity
00:08:51.106 Placement Identifier: Valid
00:08:51.106 NSID: Valid
00:08:51.106 Location: Valid
00:08:51.106 Placement Identifier: 0
00:08:51.106 Event Timestamp: 5
00:08:51.106 Namespace Identifier: 1
00:08:51.106 Reclaim Group Identifier: 0
00:08:51.106 Reclaim Unit Handle Identifier: 0
00:08:51.106
00:08:51.106 FDP test passed
00:08:51.106
00:08:51.106 real 0m0.212s
00:08:51.106 user 0m0.056s
00:08:51.106 sys 0m0.055s
00:08:51.106 01:21:46 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:51.106 01:21:46 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x
00:08:51.106 ************************************
00:08:51.106 END TEST nvme_flexible_data_placement
00:08:51.106 ************************************
00:08:51.106
00:08:51.106 real 0m7.314s
00:08:51.106 user 0m0.953s
00:08:51.106 sys 0m1.297s
00:08:51.106 01:21:46 nvme_fdp -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:51.106 01:21:46 nvme_fdp -- common/autotest_common.sh@10 -- # set +x
00:08:51.106 ************************************
00:08:51.106 END TEST nvme_fdp
00:08:51.106 ************************************
00:08:51.106 01:21:46 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]]
00:08:51.106 01:21:46 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh
00:08:51.106 01:21:46 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:08:51.106 01:21:46 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:51.106 01:21:46 -- common/autotest_common.sh@10 -- # set +x
00:08:51.106 ************************************
00:08:51.106 START TEST nvme_rpc
00:08:51.106 ************************************
00:08:51.106 01:21:46 nvme_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh * Looking for test storage...
00:08:51.106 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:51.106 01:21:47 nvme_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:51.106 01:21:47 nvme_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:08:51.106 01:21:47 nvme_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:51.364 01:21:47 nvme_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:51.364 01:21:47 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:51.364 01:21:47 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:51.364 01:21:47 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:51.364 01:21:47 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:08:51.364 01:21:47 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:08:51.364 01:21:47 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:08:51.364 01:21:47 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:08:51.364 01:21:47 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:08:51.364 01:21:47 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:08:51.364 01:21:47 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:08:51.364 01:21:47 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:51.364 01:21:47 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:08:51.364 01:21:47 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:08:51.364 01:21:47 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:51.364 01:21:47 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:51.364 01:21:47 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:08:51.364 01:21:47 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:08:51.364 01:21:47 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:51.364 01:21:47 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:08:51.364 01:21:47 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:08:51.364 01:21:47 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:08:51.364 01:21:47 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:08:51.364 01:21:47 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:51.364 01:21:47 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:08:51.364 01:21:47 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:08:51.364 01:21:47 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:51.364 01:21:47 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:51.364 01:21:47 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:08:51.364 01:21:47 nvme_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:51.364 01:21:47 nvme_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:51.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:51.364 --rc genhtml_branch_coverage=1 00:08:51.364 --rc genhtml_function_coverage=1 00:08:51.364 --rc genhtml_legend=1 00:08:51.364 --rc geninfo_all_blocks=1 00:08:51.364 --rc geninfo_unexecuted_blocks=1 00:08:51.364 00:08:51.364 ' 00:08:51.364 01:21:47 nvme_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:51.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:51.364 --rc genhtml_branch_coverage=1 00:08:51.364 --rc genhtml_function_coverage=1 00:08:51.364 --rc genhtml_legend=1 00:08:51.364 --rc geninfo_all_blocks=1 00:08:51.364 --rc geninfo_unexecuted_blocks=1 00:08:51.364 00:08:51.364 ' 00:08:51.364 01:21:47 nvme_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
00:08:51.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:51.364 --rc genhtml_branch_coverage=1 00:08:51.364 --rc genhtml_function_coverage=1 00:08:51.364 --rc genhtml_legend=1 00:08:51.364 --rc geninfo_all_blocks=1 00:08:51.364 --rc geninfo_unexecuted_blocks=1 00:08:51.364 00:08:51.364 ' 00:08:51.364 01:21:47 nvme_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:51.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:51.364 --rc genhtml_branch_coverage=1 00:08:51.364 --rc genhtml_function_coverage=1 00:08:51.364 --rc genhtml_legend=1 00:08:51.364 --rc geninfo_all_blocks=1 00:08:51.364 --rc geninfo_unexecuted_blocks=1 00:08:51.364 00:08:51.364 ' 00:08:51.364 01:21:47 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:51.364 01:21:47 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:08:51.364 01:21:47 nvme_rpc -- common/autotest_common.sh@1507 -- # bdfs=() 00:08:51.364 01:21:47 nvme_rpc -- common/autotest_common.sh@1507 -- # local bdfs 00:08:51.364 01:21:47 nvme_rpc -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:08:51.364 01:21:47 nvme_rpc -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:08:51.364 01:21:47 nvme_rpc -- common/autotest_common.sh@1496 -- # bdfs=() 00:08:51.364 01:21:47 nvme_rpc -- common/autotest_common.sh@1496 -- # local bdfs 00:08:51.364 01:21:47 nvme_rpc -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:51.364 01:21:47 nvme_rpc -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:51.364 01:21:47 nvme_rpc -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:08:51.364 01:21:47 nvme_rpc -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:08:51.364 01:21:47 nvme_rpc -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:51.364 01:21:47 nvme_rpc -- common/autotest_common.sh@1510 -- # echo 0000:00:10.0 00:08:51.364 01:21:47 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:08:51.364 01:21:47 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=66070 00:08:51.364 01:21:47 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:08:51.364 01:21:47 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 66070 00:08:51.364 01:21:47 nvme_rpc -- common/autotest_common.sh@831 -- # '[' -z 66070 ']' 00:08:51.364 01:21:47 nvme_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:51.364 01:21:47 nvme_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:51.364 01:21:47 nvme_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:51.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:51.364 01:21:47 nvme_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:51.364 01:21:47 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:51.364 01:21:47 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:08:51.364 [2024-09-28 01:21:47.204924] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:08:51.364 [2024-09-28 01:21:47.205049] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66070 ] 00:08:51.622 [2024-09-28 01:21:47.354333] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:51.622 [2024-09-28 01:21:47.530024] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:51.622 [2024-09-28 01:21:47.530127] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.188 01:21:48 nvme_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:52.188 01:21:48 nvme_rpc -- common/autotest_common.sh@864 -- # return 0 00:08:52.188 01:21:48 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:08:52.446 Nvme0n1 00:08:52.447 01:21:48 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:08:52.447 01:21:48 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:08:52.705 request: 00:08:52.705 { 00:08:52.705 "bdev_name": "Nvme0n1", 00:08:52.705 "filename": "non_existing_file", 00:08:52.705 "method": "bdev_nvme_apply_firmware", 00:08:52.705 "req_id": 1 00:08:52.705 } 00:08:52.705 Got JSON-RPC error response 00:08:52.705 response: 00:08:52.705 { 00:08:52.705 "code": -32603, 00:08:52.705 "message": "open file failed." 00:08:52.705 } 00:08:52.705 01:21:48 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:08:52.705 01:21:48 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:08:52.705 01:21:48 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:08:52.962 01:21:48 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:08:52.962 01:21:48 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 66070 00:08:52.962 01:21:48 nvme_rpc -- common/autotest_common.sh@950 -- # '[' -z 66070 ']' 00:08:52.962 01:21:48 nvme_rpc -- common/autotest_common.sh@954 -- # kill -0 66070 00:08:52.962 01:21:48 nvme_rpc -- common/autotest_common.sh@955 -- # uname 00:08:52.962 01:21:48 nvme_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:52.962 01:21:48 nvme_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 66070 00:08:52.962 01:21:48 nvme_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:52.962 01:21:48 nvme_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:52.962 killing process with pid 66070 00:08:52.962 01:21:48 nvme_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 66070' 00:08:52.962 01:21:48 nvme_rpc -- common/autotest_common.sh@969 -- # kill 66070 00:08:52.962 01:21:48 nvme_rpc -- common/autotest_common.sh@974 -- # wait 66070 00:08:54.863 00:08:54.863 real 0m3.403s 00:08:54.863 user 0m6.314s 00:08:54.863 sys 0m0.503s 00:08:54.864 01:21:50 nvme_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:54.864 01:21:50 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:54.864 ************************************ 00:08:54.864 END TEST nvme_rpc 00:08:54.864 ************************************ 00:08:54.864 01:21:50 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:08:54.864 01:21:50 -- common/autotest_common.sh@1101 -- # '[' 2 -le 
1 ']' 00:08:54.864 01:21:50 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:54.864 01:21:50 -- common/autotest_common.sh@10 -- # set +x 00:08:54.864 ************************************ 00:08:54.864 START TEST nvme_rpc_timeouts 00:08:54.864 ************************************ 00:08:54.864 01:21:50 nvme_rpc_timeouts -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:08:54.864 * Looking for test storage... 00:08:54.864 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:54.864 01:21:50 nvme_rpc_timeouts -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:54.864 01:21:50 nvme_rpc_timeouts -- common/autotest_common.sh@1681 -- # lcov --version 00:08:54.864 01:21:50 nvme_rpc_timeouts -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:54.864 01:21:50 nvme_rpc_timeouts -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:54.864 01:21:50 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:54.864 01:21:50 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:54.864 01:21:50 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:54.864 01:21:50 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:08:54.864 01:21:50 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:08:54.864 01:21:50 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:08:54.864 01:21:50 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:08:54.864 01:21:50 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:08:54.864 01:21:50 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:08:54.864 01:21:50 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:08:54.864 01:21:50 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:54.864 01:21:50 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:08:54.864 01:21:50 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:08:54.864 01:21:50 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:54.864 01:21:50 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:54.864 01:21:50 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:08:54.864 01:21:50 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:08:54.864 01:21:50 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:54.864 01:21:50 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:08:54.864 01:21:50 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:08:54.864 01:21:50 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:08:54.864 01:21:50 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:08:54.864 01:21:50 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:54.864 01:21:50 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:08:54.864 01:21:50 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:08:54.864 01:21:50 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:54.864 01:21:50 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:54.864 01:21:50 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:08:54.864 01:21:50 nvme_rpc_timeouts -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:54.864 01:21:50 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:54.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:54.864 --rc genhtml_branch_coverage=1 00:08:54.864 --rc genhtml_function_coverage=1 00:08:54.864 --rc genhtml_legend=1 00:08:54.864 --rc geninfo_all_blocks=1 00:08:54.864 --rc geninfo_unexecuted_blocks=1 00:08:54.864 00:08:54.864 ' 00:08:54.864 01:21:50 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:54.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:54.864 --rc genhtml_branch_coverage=1 00:08:54.864 --rc genhtml_function_coverage=1 00:08:54.864 --rc genhtml_legend=1 00:08:54.864 --rc geninfo_all_blocks=1 00:08:54.864 --rc geninfo_unexecuted_blocks=1 00:08:54.864 00:08:54.864 ' 00:08:54.864 01:21:50 nvme_rpc_timeouts -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:54.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:54.864 --rc genhtml_branch_coverage=1 00:08:54.864 --rc genhtml_function_coverage=1 00:08:54.864 --rc genhtml_legend=1 00:08:54.864 --rc geninfo_all_blocks=1 00:08:54.864 --rc geninfo_unexecuted_blocks=1 00:08:54.864 00:08:54.864 ' 00:08:54.864 01:21:50 nvme_rpc_timeouts -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:54.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:54.864 --rc genhtml_branch_coverage=1 00:08:54.864 --rc genhtml_function_coverage=1 00:08:54.864 --rc genhtml_legend=1 00:08:54.864 --rc geninfo_all_blocks=1 00:08:54.864 --rc geninfo_unexecuted_blocks=1 00:08:54.864 00:08:54.864 ' 00:08:54.864 01:21:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:54.864 01:21:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_66135 00:08:54.864 01:21:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_66135 00:08:54.864 01:21:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=66167 00:08:54.864 01:21:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 
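(The lcov version gate traced above, lt 1.15 2 via cmp_versions, splits each version string on '.', '-' and ':' and compares component-wise. A condensed sketch of that logic; lt_version is a shortened stand-in for scripts/common.sh's lt/cmp_versions pair, and the real decimal() handling of non-numeric components is omitted:

    lt_version() {
        local ver1 ver2 ver1_l ver2_l v
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
        for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do
            ((${ver1[v]:-0} < ${ver2[v]:-0})) && return 0   # strictly older
            ((${ver1[v]:-0} > ${ver2[v]:-0})) && return 1   # strictly newer
        done
        return 1    # equal versions are not strictly less
    }

    lt_version 1.15 2 && echo 'lcov 1.15 predates 2: add the --rc branch/function coverage opts'
)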
00:08:54.864 01:21:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 66167 00:08:54.864 01:21:50 nvme_rpc_timeouts -- common/autotest_common.sh@831 -- # '[' -z 66167 ']' 00:08:54.864 01:21:50 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:54.864 01:21:50 nvme_rpc_timeouts -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:54.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:54.864 01:21:50 nvme_rpc_timeouts -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:54.864 01:21:50 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:54.864 01:21:50 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:08:54.864 01:21:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:08:54.864 [2024-09-28 01:21:50.579794] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:08:54.864 [2024-09-28 01:21:50.579889] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66167 ] 00:08:54.864 [2024-09-28 01:21:50.722798] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:55.123 [2024-09-28 01:21:50.905687] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:55.123 [2024-09-28 01:21:50.905770] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:55.693 01:21:51 nvme_rpc_timeouts -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:55.693 Checking default timeout settings: 00:08:55.693 01:21:51 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # return 0 00:08:55.693 01:21:51 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:08:55.693 01:21:51 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:08:55.953 Making settings changes with rpc: 00:08:55.953 01:21:51 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:08:55.953 01:21:51 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:08:56.214 Check default vs. modified settings: 00:08:56.214 01:21:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 00:08:56.214 01:21:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:08:56.473 01:21:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:08:56.473 01:21:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:08:56.473 01:21:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_66135 00:08:56.473 01:21:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:08:56.473 01:21:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:08:56.473 01:21:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:08:56.473 01:21:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_66135 00:08:56.473 01:21:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:08:56.473 01:21:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:08:56.473 01:21:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:08:56.473 01:21:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:08:56.473 Setting action_on_timeout is changed as expected. 00:08:56.473 01:21:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:08:56.473 01:21:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:08:56.473 01:21:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_66135 00:08:56.473 01:21:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:08:56.473 01:21:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:08:56.473 01:21:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:08:56.473 01:21:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:08:56.473 01:21:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_66135 00:08:56.473 01:21:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:08:56.473 01:21:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:08:56.473 01:21:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:08:56.473 Setting timeout_us is changed as expected. 00:08:56.473 01:21:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
00:08:56.473 01:21:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:08:56.473 01:21:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_66135 00:08:56.473 01:21:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:08:56.473 01:21:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:08:56.473 01:21:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:08:56.473 01:21:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_66135 00:08:56.473 01:21:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:08:56.473 01:21:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:08:56.473 01:21:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:08:56.473 01:21:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:08:56.473 Setting timeout_admin_us is changed as expected. 00:08:56.473 01:21:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:08:56.473 01:21:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:08:56.473 01:21:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_66135 /tmp/settings_modified_66135 00:08:56.473 01:21:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 66167 00:08:56.473 01:21:52 nvme_rpc_timeouts -- common/autotest_common.sh@950 -- # '[' -z 66167 ']' 00:08:56.473 01:21:52 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # kill -0 66167 00:08:56.473 01:21:52 nvme_rpc_timeouts -- common/autotest_common.sh@955 -- # uname 00:08:56.473 01:21:52 nvme_rpc_timeouts -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:56.473 01:21:52 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 66167 00:08:56.473 01:21:52 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:56.473 01:21:52 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:56.473 killing process with pid 66167 00:08:56.473 01:21:52 nvme_rpc_timeouts -- common/autotest_common.sh@968 -- # echo 'killing process with pid 66167' 00:08:56.473 01:21:52 nvme_rpc_timeouts -- common/autotest_common.sh@969 -- # kill 66167 00:08:56.473 01:21:52 nvme_rpc_timeouts -- common/autotest_common.sh@974 -- # wait 66167 00:08:57.856 RPC TIMEOUT SETTING TEST PASSED. 00:08:57.856 01:21:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
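(The passed test above boils down to: snapshot the target's JSON config, change the three nvme timeout options over RPC, snapshot again, and require each field to differ. A condensed sketch using the grep/awk/sed pipeline from the trace, with the tmpfile names shortened; rpc.py path and option values as in this run:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc save_config > /tmp/settings_default
    $rpc bdev_nvme_set_options --timeout-us=12000000 \
        --timeout-admin-us=24000000 --action-on-timeout=abort
    $rpc save_config > /tmp/settings_modified

    for setting in action_on_timeout timeout_us timeout_admin_us; do
        before=$(grep "$setting" /tmp/settings_default | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        after=$(grep "$setting" /tmp/settings_modified | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        [[ $before == "$after" ]] && { echo "$setting did not change"; exit 1; }
        echo "Setting $setting is changed as expected."
    done
)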
00:08:57.856 00:08:57.856 real 0m3.355s 00:08:57.856 user 0m6.417s 00:08:57.856 sys 0m0.475s 00:08:57.856 01:21:53 nvme_rpc_timeouts -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:57.856 01:21:53 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:08:57.856 ************************************ 00:08:57.856 END TEST nvme_rpc_timeouts 00:08:57.856 ************************************ 00:08:57.856 01:21:53 -- spdk/autotest.sh@239 -- # uname -s 00:08:57.856 01:21:53 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:08:57.856 01:21:53 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:08:57.856 01:21:53 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:57.856 01:21:53 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:57.856 01:21:53 -- common/autotest_common.sh@10 -- # set +x 00:08:57.856 ************************************ 00:08:57.856 START TEST sw_hotplug 00:08:57.856 ************************************ 00:08:57.856 01:21:53 sw_hotplug -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:08:58.114 * Looking for test storage... 00:08:58.114 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:58.114 01:21:53 sw_hotplug -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:58.114 01:21:53 sw_hotplug -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:58.114 01:21:53 sw_hotplug -- common/autotest_common.sh@1681 -- # lcov --version 00:08:58.114 01:21:53 sw_hotplug -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:58.114 01:21:53 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:58.114 01:21:53 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:58.114 01:21:53 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:58.114 01:21:53 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:08:58.114 01:21:53 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:08:58.114 01:21:53 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:08:58.114 01:21:53 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:08:58.114 01:21:53 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:08:58.114 01:21:53 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:08:58.114 01:21:53 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:08:58.114 01:21:53 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:58.114 01:21:53 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:08:58.114 01:21:53 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:08:58.114 01:21:53 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:58.114 01:21:53 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:58.115 01:21:53 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:08:58.115 01:21:53 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:08:58.115 01:21:53 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:58.115 01:21:53 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:08:58.115 01:21:53 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:08:58.115 01:21:53 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:08:58.115 01:21:53 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:08:58.115 01:21:53 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:58.115 01:21:53 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:08:58.115 01:21:53 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:08:58.115 01:21:53 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:58.115 01:21:53 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:58.115 01:21:53 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:08:58.115 01:21:53 sw_hotplug -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:58.115 01:21:53 sw_hotplug -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:58.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.115 --rc genhtml_branch_coverage=1 00:08:58.115 --rc genhtml_function_coverage=1 00:08:58.115 --rc genhtml_legend=1 00:08:58.115 --rc geninfo_all_blocks=1 00:08:58.115 --rc geninfo_unexecuted_blocks=1 00:08:58.115 00:08:58.115 ' 00:08:58.115 01:21:53 sw_hotplug -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:58.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.115 --rc genhtml_branch_coverage=1 00:08:58.115 --rc genhtml_function_coverage=1 00:08:58.115 --rc genhtml_legend=1 00:08:58.115 --rc geninfo_all_blocks=1 00:08:58.115 --rc geninfo_unexecuted_blocks=1 00:08:58.115 00:08:58.115 ' 00:08:58.115 01:21:53 sw_hotplug -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:58.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.115 --rc genhtml_branch_coverage=1 00:08:58.115 --rc genhtml_function_coverage=1 00:08:58.115 --rc genhtml_legend=1 00:08:58.115 --rc geninfo_all_blocks=1 00:08:58.115 --rc geninfo_unexecuted_blocks=1 00:08:58.115 00:08:58.115 ' 00:08:58.115 01:21:53 sw_hotplug -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:58.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.115 --rc genhtml_branch_coverage=1 00:08:58.115 --rc genhtml_function_coverage=1 00:08:58.115 --rc genhtml_legend=1 00:08:58.115 --rc geninfo_all_blocks=1 00:08:58.115 --rc geninfo_unexecuted_blocks=1 00:08:58.115 00:08:58.115 ' 00:08:58.115 01:21:53 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:58.373 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:58.373 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:08:58.373 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:08:58.373 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:08:58.373 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:08:58.632 01:21:54 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:08:58.632 01:21:54 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:08:58.632 01:21:54 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
00:08:58.632 01:21:54 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:08:58.632 01:21:54 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:08:58.632 01:21:54 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:08:58.632 01:21:54 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:08:58.632 01:21:54 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:08:58.632 01:21:54 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:08:58.632 01:21:54 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:08:58.632 01:21:54 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:08:58.632 01:21:54 sw_hotplug -- scripts/common.sh@233 -- # local class 00:08:58.632 01:21:54 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:08:58.632 01:21:54 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:08:58.632 01:21:54 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:08:58.632 01:21:54 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:08:58.632 01:21:54 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:08:58.632 01:21:54 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:08:58.632 01:21:54 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:08:58.632 01:21:54 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:08:58.632 01:21:54 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:08:58.632 01:21:54 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:08:58.632 01:21:54 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:08:58.632 01:21:54 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:08:58.632 01:21:54 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:08:58.632 01:21:54 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:08:58.632 01:21:54 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:08:58.632 01:21:54 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:08:58.632 01:21:54 sw_hotplug -- scripts/common.sh@18 -- # local i 00:08:58.632 01:21:54 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:08:58.632 01:21:54 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:08:58.632 01:21:54 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:08:58.632 01:21:54 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:08:58.632 01:21:54 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:08:58.632 01:21:54 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:08:58.632 01:21:54 sw_hotplug -- scripts/common.sh@18 -- # local i 00:08:58.632 01:21:54 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:08:58.632 01:21:54 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:08:58.632 01:21:54 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:08:58.632 01:21:54 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:08:58.632 01:21:54 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:08:58.632 01:21:54 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:08:58.632 01:21:54 sw_hotplug -- scripts/common.sh@18 -- # local i 00:08:58.632 01:21:54 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:08:58.632 01:21:54 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:08:58.632 01:21:54 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:08:58.632 01:21:54 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:08:58.632 01:21:54 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:08:58.632 01:21:54 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:08:58.632 01:21:54 sw_hotplug -- scripts/common.sh@18 -- # local i 00:08:58.632 01:21:54 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:08:58.632 01:21:54 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:08:58.632 01:21:54 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:08:58.632 01:21:54 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:08:58.633 01:21:54 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:08:58.633 01:21:54 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:08:58.633 01:21:54 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:08:58.633 01:21:54 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:08:58.633 01:21:54 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:08:58.633 01:21:54 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:08:58.633 01:21:54 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:08:58.633 01:21:54 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:08:58.633 01:21:54 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:08:58.633 01:21:54 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:08:58.633 01:21:54 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:08:58.633 01:21:54 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:08:58.633 01:21:54 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:08:58.633 01:21:54 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:08:58.633 01:21:54 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:08:58.633 01:21:54 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:08:58.633 01:21:54 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:08:58.633 01:21:54 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:08:58.633 01:21:54 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:08:58.633 01:21:54 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:08:58.633 01:21:54 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:08:58.633 01:21:54 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:58.633 01:21:54 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:08:58.633 01:21:54 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:08:58.633 01:21:54 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:58.891 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:58.891 Waiting for block devices as requested 00:08:59.149 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:08:59.149 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:08:59.149 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:08:59.407 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:04.670 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:04.670 01:22:00 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:09:04.670 01:22:00 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:04.670 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:09:04.670 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:04.670 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:09:04.928 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:09:05.186 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:05.186 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:05.186 01:22:01 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:09:05.186 01:22:01 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:09:05.186 01:22:01 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:09:05.186 01:22:01 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:09:05.186 01:22:01 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=67019 00:09:05.186 01:22:01 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:09:05.186 01:22:01 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:09:05.186 01:22:01 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:09:05.186 01:22:01 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:09:05.186 01:22:01 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:09:05.186 01:22:01 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:09:05.186 01:22:01 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:09:05.186 01:22:01 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:09:05.186 01:22:01 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 false 00:09:05.186 01:22:01 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:09:05.186 01:22:01 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:09:05.186 01:22:01 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:09:05.186 01:22:01 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:09:05.186 01:22:01 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:09:05.445 Initializing NVMe Controllers 00:09:05.445 Attaching to 0000:00:10.0 00:09:05.445 Attaching to 0000:00:11.0 00:09:05.445 Attached to 0000:00:10.0 00:09:05.445 Attached to 0000:00:11.0 00:09:05.445 Initialization complete. Starting I/O... 
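(nvme_in_userspace, expanded a few lines up, discovers controllers by PCI class: class 01 is mass storage, subclass 08 is non-volatile memory, progif 02 is NVMe. A condensed sketch of that filter using the exact lspci/grep/awk/tr tokens from the trace; the pipeline order is inferred from scripts/common.sh rather than shown verbatim here:

    # cc keeps embedded quotes because lspci -mm quotes its fields
    lspci -mm -n -D |
        grep -i -- -p02 |
        awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' |
        tr -d '"'    # yields BDFs such as 0000:00:10.0 and 0000:00:11.0
)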
00:09:05.445 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:09:05.445 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:09:05.445 00:09:06.378 QEMU NVMe Ctrl (12340 ): 2559 I/Os completed (+2559) 00:09:06.378 QEMU NVMe Ctrl (12341 ): 2544 I/Os completed (+2544) 00:09:06.378 00:09:07.751 QEMU NVMe Ctrl (12340 ): 6121 I/Os completed (+3562) 00:09:07.751 QEMU NVMe Ctrl (12341 ): 5965 I/Os completed (+3421) 00:09:07.751 00:09:08.685 QEMU NVMe Ctrl (12340 ): 9598 I/Os completed (+3477) 00:09:08.685 QEMU NVMe Ctrl (12341 ): 9398 I/Os completed (+3433) 00:09:08.685 00:09:09.618 QEMU NVMe Ctrl (12340 ): 13179 I/Os completed (+3581) 00:09:09.618 QEMU NVMe Ctrl (12341 ): 13007 I/Os completed (+3609) 00:09:09.618 00:09:10.552 QEMU NVMe Ctrl (12340 ): 16549 I/Os completed (+3370) 00:09:10.552 QEMU NVMe Ctrl (12341 ): 16531 I/Os completed (+3524) 00:09:10.552 00:09:11.487 01:22:07 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:09:11.487 01:22:07 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:11.487 01:22:07 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:11.487 [2024-09-28 01:22:07.118016] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:09:11.487 Controller removed: QEMU NVMe Ctrl (12340 ) 00:09:11.487 [2024-09-28 01:22:07.118958] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:11.487 [2024-09-28 01:22:07.119003] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:11.487 [2024-09-28 01:22:07.119018] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:11.487 [2024-09-28 01:22:07.119033] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:11.487 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:09:11.487 [2024-09-28 01:22:07.120587] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:11.487 [2024-09-28 01:22:07.120628] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:11.487 [2024-09-28 01:22:07.120639] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:11.487 [2024-09-28 01:22:07.120650] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:11.487 01:22:07 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:11.487 01:22:07 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:11.487 [2024-09-28 01:22:07.136954] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:09:11.487 Controller removed: QEMU NVMe Ctrl (12341 ) 00:09:11.487 [2024-09-28 01:22:07.137795] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:11.487 [2024-09-28 01:22:07.137826] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:11.487 [2024-09-28 01:22:07.137846] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:11.487 [2024-09-28 01:22:07.137858] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:11.487 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:09:11.487 [2024-09-28 01:22:07.139150] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:11.487 [2024-09-28 01:22:07.139181] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:11.487 [2024-09-28 01:22:07.139217] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:11.487 [2024-09-28 01:22:07.139231] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:11.487 01:22:07 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:09:11.487 01:22:07 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:09:11.487 01:22:07 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:11.487 01:22:07 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:11.487 01:22:07 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:09:11.487 01:22:07 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:09:11.487 01:22:07 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:11.487 01:22:07 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:11.487 01:22:07 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:11.487 01:22:07 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:09:11.487 Attaching to 0000:00:10.0 00:09:11.487 Attached to 0000:00:10.0 00:09:11.487 QEMU NVMe Ctrl (12340 ): 4 I/Os completed (+4) 00:09:11.487 00:09:11.487 01:22:07 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:09:11.487 01:22:07 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:11.487 01:22:07 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:09:11.487 Attaching to 0000:00:11.0 00:09:11.487 Attached to 0000:00:11.0 00:09:12.420 QEMU NVMe Ctrl (12340 ): 3731 I/Os completed (+3727) 00:09:12.420 QEMU NVMe Ctrl (12341 ): 3436 I/Os completed (+3436) 00:09:12.420 00:09:13.366 QEMU NVMe Ctrl (12340 ): 7437 I/Os completed (+3706) 00:09:13.366 QEMU NVMe Ctrl (12341 ): 7114 I/Os completed (+3678) 00:09:13.367 00:09:14.740 QEMU NVMe Ctrl (12340 ): 11151 I/Os completed (+3714) 00:09:14.740 QEMU NVMe Ctrl (12341 ): 10841 I/Os completed (+3727) 00:09:14.740 00:09:15.674 QEMU NVMe Ctrl (12340 ): 14863 I/Os completed (+3712) 00:09:15.674 QEMU NVMe Ctrl (12341 ): 14541 I/Os completed (+3700) 00:09:15.674 00:09:16.607 QEMU NVMe Ctrl (12340 ): 18581 I/Os completed (+3718) 00:09:16.607 QEMU NVMe Ctrl (12341 ): 18261 I/Os completed (+3720) 00:09:16.607 00:09:17.540 QEMU NVMe Ctrl (12340 ): 22298 I/Os completed (+3717) 00:09:17.540 QEMU NVMe Ctrl (12341 ): 21973 I/Os completed (+3712) 00:09:17.540 00:09:18.473 QEMU NVMe Ctrl (12340 ): 26001 I/Os completed (+3703) 00:09:18.473 QEMU NVMe Ctrl (12341 ): 25686 I/Os completed (+3713) 00:09:18.473 00:09:19.408 QEMU NVMe Ctrl (12340 ): 29661 I/Os completed (+3660) 00:09:19.408 
QEMU NVMe Ctrl (12341 ): 29289 I/Os completed (+3603) 00:09:19.408 00:09:20.783 QEMU NVMe Ctrl (12340 ): 33287 I/Os completed (+3626) 00:09:20.783 QEMU NVMe Ctrl (12341 ): 32808 I/Os completed (+3519) 00:09:20.783 00:09:21.721 QEMU NVMe Ctrl (12340 ): 36953 I/Os completed (+3666) 00:09:21.721 QEMU NVMe Ctrl (12341 ): 36546 I/Os completed (+3738) 00:09:21.721 00:09:22.661 QEMU NVMe Ctrl (12340 ): 40088 I/Os completed (+3135) 00:09:22.661 QEMU NVMe Ctrl (12341 ): 39690 I/Os completed (+3144) 00:09:22.661 00:09:23.601 QEMU NVMe Ctrl (12340 ): 43366 I/Os completed (+3278) 00:09:23.601 QEMU NVMe Ctrl (12341 ): 42921 I/Os completed (+3231) 00:09:23.601 00:09:23.601 01:22:19 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:09:23.601 01:22:19 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:09:23.601 01:22:19 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:23.601 01:22:19 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:23.601 [2024-09-28 01:22:19.371887] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:09:23.601 Controller removed: QEMU NVMe Ctrl (12340 ) 00:09:23.601 [2024-09-28 01:22:19.373045] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:23.601 [2024-09-28 01:22:19.373099] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:23.601 [2024-09-28 01:22:19.373116] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:23.601 [2024-09-28 01:22:19.373134] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:23.601 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:09:23.601 [2024-09-28 01:22:19.375099] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:23.601 [2024-09-28 01:22:19.375146] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:23.601 [2024-09-28 01:22:19.375160] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:23.601 [2024-09-28 01:22:19.375176] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:23.601 EAL: eal_parse_sysfs_value(): cannot read sysfs value /sys/bus/pci/devices/0000:00:10.0/subsystem_device 00:09:23.601 EAL: Scan for (pci) bus failed. 00:09:23.601 01:22:19 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:23.601 01:22:19 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:23.601 [2024-09-28 01:22:19.390283] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:09:23.601 Controller removed: QEMU NVMe Ctrl (12341 ) 00:09:23.601 [2024-09-28 01:22:19.391326] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:23.601 [2024-09-28 01:22:19.391363] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:23.601 [2024-09-28 01:22:19.391385] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:23.601 [2024-09-28 01:22:19.391401] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:23.601 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:09:23.601 [2024-09-28 01:22:19.393020] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:23.601 [2024-09-28 01:22:19.393057] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:23.601 [2024-09-28 01:22:19.393072] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:23.601 [2024-09-28 01:22:19.393087] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:23.601 01:22:19 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:09:23.601 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:09:23.601 EAL: Scan for (pci) bus failed. 00:09:23.601 01:22:19 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:09:23.601 01:22:19 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:23.601 01:22:19 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:23.601 01:22:19 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:09:23.862 01:22:19 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:09:23.862 01:22:19 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:23.862 01:22:19 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:23.862 01:22:19 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:23.862 01:22:19 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:09:23.862 Attaching to 0000:00:10.0 00:09:23.862 Attached to 0000:00:10.0 00:09:23.862 01:22:19 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:09:23.862 01:22:19 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:23.862 01:22:19 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:09:23.862 Attaching to 0000:00:11.0 00:09:23.862 Attached to 0000:00:11.0 00:09:24.434 QEMU NVMe Ctrl (12340 ): 2363 I/Os completed (+2363) 00:09:24.434 QEMU NVMe Ctrl (12341 ): 2068 I/Os completed (+2068) 00:09:24.434 00:09:25.370 QEMU NVMe Ctrl (12340 ): 5932 I/Os completed (+3569) 00:09:25.370 QEMU NVMe Ctrl (12341 ): 5640 I/Os completed (+3572) 00:09:25.370 00:09:26.744 QEMU NVMe Ctrl (12340 ): 9636 I/Os completed (+3704) 00:09:26.744 QEMU NVMe Ctrl (12341 ): 9348 I/Os completed (+3708) 00:09:26.744 00:09:27.683 QEMU NVMe Ctrl (12340 ): 13301 I/Os completed (+3665) 00:09:27.683 QEMU NVMe Ctrl (12341 ): 13012 I/Os completed (+3664) 00:09:27.683 00:09:28.624 QEMU NVMe Ctrl (12340 ): 16446 I/Os completed (+3145) 00:09:28.624 QEMU NVMe Ctrl (12341 ): 16179 I/Os completed (+3167) 00:09:28.624 00:09:29.565 QEMU NVMe Ctrl (12340 ): 19685 I/Os completed (+3239) 00:09:29.565 QEMU NVMe Ctrl (12341 ): 19425 I/Os completed (+3246) 00:09:29.565 00:09:30.504 QEMU NVMe Ctrl (12340 ): 23364 I/Os completed (+3679) 00:09:30.504 QEMU NVMe Ctrl (12341 ): 23096 I/Os completed (+3671) 00:09:30.504 
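(Each hotplug event above is the same sysfs dance: surprise-remove both allowed controllers, let the I/O path observe the failure, then rescan and rebind to uio_pci_generic so the next cycle can attach again. A condensed sketch; the sysfs target paths are inferred from the echo trace (sw_hotplug.sh@40, @56, @59-@62, @66) and are not printed verbatim in this log, so treat them as an approximation of the real script:

    nvmes=(0000:00:10.0 0000:00:11.0)

    for dev in "${nvmes[@]}"; do
        echo 1 > "/sys/bus/pci/devices/$dev/remove"      # surprise removal (sh@40)
    done
    echo 1 > /sys/bus/pci/rescan                          # sh@56: devices reappear
    for dev in "${nvmes[@]}"; do
        echo uio_pci_generic > "/sys/bus/pci/devices/$dev/driver_override"
        echo "$dev" > /sys/bus/pci/drivers_probe          # rebind per override
        echo '' > "/sys/bus/pci/devices/$dev/driver_override"
    done
    sleep 12                                              # 2 * hotplug_wait (sh@66)
)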
00:09:31.437 QEMU NVMe Ctrl (12340 ): 27034 I/Os completed (+3670) 00:09:31.437 QEMU NVMe Ctrl (12341 ): 26771 I/Os completed (+3675) 00:09:31.437 00:09:32.371 QEMU NVMe Ctrl (12340 ): 30574 I/Os completed (+3540) 00:09:32.371 QEMU NVMe Ctrl (12341 ): 30244 I/Os completed (+3473) 00:09:32.371 00:09:33.749 QEMU NVMe Ctrl (12340 ): 33814 I/Os completed (+3240) 00:09:33.749 QEMU NVMe Ctrl (12341 ): 33499 I/Os completed (+3255) 00:09:33.749 00:09:34.682 QEMU NVMe Ctrl (12340 ): 37476 I/Os completed (+3662) 00:09:34.682 QEMU NVMe Ctrl (12341 ): 37161 I/Os completed (+3662) 00:09:34.682 00:09:35.613 QEMU NVMe Ctrl (12340 ): 41168 I/Os completed (+3692) 00:09:35.613 QEMU NVMe Ctrl (12341 ): 40837 I/Os completed (+3676) 00:09:35.613 00:09:35.870 01:22:31 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:09:35.870 01:22:31 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:09:35.870 01:22:31 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:35.870 01:22:31 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:35.870 [2024-09-28 01:22:31.658904] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:09:35.870 Controller removed: QEMU NVMe Ctrl (12340 ) 00:09:35.870 [2024-09-28 01:22:31.659847] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:35.870 [2024-09-28 01:22:31.659891] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:35.870 [2024-09-28 01:22:31.659905] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:35.870 [2024-09-28 01:22:31.659920] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:35.870 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:09:35.870 [2024-09-28 01:22:31.661506] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:35.870 [2024-09-28 01:22:31.661543] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:35.870 [2024-09-28 01:22:31.661555] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:35.870 [2024-09-28 01:22:31.661567] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:35.870 01:22:31 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:35.870 01:22:31 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:35.870 [2024-09-28 01:22:31.679809] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:09:35.870 Controller removed: QEMU NVMe Ctrl (12341 ) 00:09:35.870 [2024-09-28 01:22:31.680669] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:35.870 [2024-09-28 01:22:31.680703] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:35.870 [2024-09-28 01:22:31.680718] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:35.870 [2024-09-28 01:22:31.680730] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:35.870 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:09:35.870 [2024-09-28 01:22:31.682067] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:35.870 [2024-09-28 01:22:31.682100] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:35.870 [2024-09-28 01:22:31.682114] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:35.870 [2024-09-28 01:22:31.682124] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:35.870 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:09:35.870 EAL: Scan for (pci) bus failed. 00:09:35.870 01:22:31 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:09:35.870 01:22:31 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:09:35.870 01:22:31 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:35.871 01:22:31 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:35.871 01:22:31 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:09:36.131 01:22:31 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:09:36.131 01:22:31 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:36.131 01:22:31 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:36.131 01:22:31 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:36.131 01:22:31 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:09:36.131 Attaching to 0000:00:10.0 00:09:36.131 Attached to 0000:00:10.0 00:09:36.131 01:22:31 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:09:36.131 01:22:31 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:36.131 01:22:31 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:09:36.131 Attaching to 0000:00:11.0 00:09:36.131 Attached to 0000:00:11.0 00:09:36.131 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:09:36.131 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:09:36.131 [2024-09-28 01:22:31.937232] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:09:48.363 01:22:43 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:09:48.363 01:22:43 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:09:48.363 01:22:43 sw_hotplug -- common/autotest_common.sh@717 -- # time=42.82 00:09:48.363 01:22:43 sw_hotplug -- common/autotest_common.sh@718 -- # echo 42.82 00:09:48.363 01:22:43 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:09:48.363 01:22:43 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=42.82 00:09:48.363 01:22:43 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 42.82 2 00:09:48.363 remove_attach_helper took 42.82s to complete (handling 2 nvme drive(s)) 01:22:43 sw_hotplug -- 
nvme/sw_hotplug.sh@91 -- # sleep 6 00:09:54.947 01:22:49 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 67019 00:09:54.947 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (67019) - No such process 00:09:54.947 01:22:49 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 67019 00:09:54.947 01:22:49 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:09:54.948 01:22:49 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:09:54.948 01:22:49 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:09:54.948 01:22:49 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=67568 00:09:54.948 01:22:49 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:09:54.948 01:22:49 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 67568 00:09:54.948 01:22:49 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:54.948 01:22:49 sw_hotplug -- common/autotest_common.sh@831 -- # '[' -z 67568 ']' 00:09:54.948 01:22:49 sw_hotplug -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:54.948 01:22:49 sw_hotplug -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:54.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:54.948 01:22:49 sw_hotplug -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:54.948 01:22:49 sw_hotplug -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:54.948 01:22:49 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:09:54.948 [2024-09-28 01:22:50.017527] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:09:54.948 [2024-09-28 01:22:50.017654] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67568 ] 00:09:54.948 [2024-09-28 01:22:50.168973] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:54.948 [2024-09-28 01:22:50.348777] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:55.208 01:22:50 sw_hotplug -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:55.208 01:22:50 sw_hotplug -- common/autotest_common.sh@864 -- # return 0 00:09:55.208 01:22:50 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:09:55.208 01:22:50 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.208 01:22:50 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:09:55.208 01:22:50 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.208 01:22:50 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:09:55.208 01:22:50 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:09:55.208 01:22:50 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:09:55.208 01:22:50 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:09:55.208 01:22:50 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:09:55.208 01:22:50 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:09:55.208 01:22:50 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:09:55.208 01:22:50 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:09:55.208 01:22:50 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:09:55.208 01:22:50 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:09:55.208 01:22:50 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:09:55.208 01:22:50 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:09:55.208 01:22:50 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:10:01.769 01:22:56 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:01.769 01:22:56 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:01.769 01:22:56 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:01.769 01:22:56 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:01.769 01:22:56 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:01.769 01:22:56 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:01.769 01:22:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:01.769 01:22:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:01.769 01:22:56 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:01.769 01:22:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:01.769 01:22:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:01.769 01:22:56 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.769 01:22:56 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:01.769 01:22:57 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.769 01:22:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:10:01.769 01:22:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:01.769 [2024-09-28 01:22:57.030893] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: 
[0000:00:10.0] in failed state. 00:10:01.769 [2024-09-28 01:22:57.032141] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:01.769 [2024-09-28 01:22:57.032179] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:01.769 [2024-09-28 01:22:57.032191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:01.769 [2024-09-28 01:22:57.032218] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:01.769 [2024-09-28 01:22:57.032226] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:01.769 [2024-09-28 01:22:57.032235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:01.769 [2024-09-28 01:22:57.032243] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:01.769 [2024-09-28 01:22:57.032250] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:01.769 [2024-09-28 01:22:57.032257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:01.769 [2024-09-28 01:22:57.032268] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:01.769 [2024-09-28 01:22:57.032275] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:01.769 [2024-09-28 01:22:57.032283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:01.769 01:22:57 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:10:01.769 01:22:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:01.769 01:22:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:01.769 01:22:57 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:01.769 01:22:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:01.769 01:22:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:01.769 01:22:57 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.769 01:22:57 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:01.769 [2024-09-28 01:22:57.530893] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
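The repeated bdfs=($(bdev_bdfs)) expansions above show how the test discovers which PCI controllers still back NVMe bdevs: it asks the target for its bdev list over RPC and extracts each namespace's PCI address. The jq filter and sort -u are verbatim from the xtrace at sw_hotplug.sh lines 12-13; wrapping them in the function below is a sketch, and the rpc.py invocation stands in for the harness's rpc_cmd wrapper:

    # Sketch of the bdev_bdfs helper seen in the xtrace: list the PCI
    # addresses (BDFs) backing the target's NVMe bdevs, deduplicated.
    bdev_bdfs() {
        # jq filter copied from the trace; the rpc.py path is an assumption
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs \
            | jq -r '.[].driver_specific.nvme[].pci_address' \
            | sort -u
    }

The test then captures the result into an array, bdfs=($(bdev_bdfs)), and branches on its size, which is what the (( 2 > 0 )) and (( 0 > 0 )) expansions in the log correspond to.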
00:10:01.769 [2024-09-28 01:22:57.532131] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:01.769 [2024-09-28 01:22:57.532163] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:01.769 [2024-09-28 01:22:57.532175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:01.769 [2024-09-28 01:22:57.532201] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:01.769 [2024-09-28 01:22:57.532211] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:01.769 [2024-09-28 01:22:57.532219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:01.769 [2024-09-28 01:22:57.532227] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:01.769 [2024-09-28 01:22:57.532233] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:01.769 [2024-09-28 01:22:57.532241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:01.769 [2024-09-28 01:22:57.532248] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:01.769 [2024-09-28 01:22:57.532256] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:01.769 [2024-09-28 01:22:57.532262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:01.769 01:22:57 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.769 01:22:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:10:01.769 01:22:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:02.335 01:22:58 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:10:02.335 01:22:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:02.335 01:22:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:02.335 01:22:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:02.335 01:22:58 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:02.335 01:22:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:02.335 01:22:58 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.335 01:22:58 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:02.335 01:22:58 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.335 01:22:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:02.335 01:22:58 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:02.335 01:22:58 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:02.335 01:22:58 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:02.335 01:22:58 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:02.335 01:22:58 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:02.335 01:22:58 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:02.335 01:22:58 
sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:02.335 01:22:58 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:02.335 01:22:58 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:02.593 01:22:58 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:02.593 01:22:58 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:02.593 01:22:58 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:14.794 01:23:10 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:10:14.794 01:23:10 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:10:14.794 01:23:10 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:10:14.794 01:23:10 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:14.794 01:23:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:14.794 01:23:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:14.794 01:23:10 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.794 01:23:10 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:14.794 01:23:10 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.794 01:23:10 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:14.794 01:23:10 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:14.794 01:23:10 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:14.794 01:23:10 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:14.794 01:23:10 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:14.794 01:23:10 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:14.794 01:23:10 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:14.794 01:23:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:14.794 01:23:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:14.794 01:23:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:14.794 01:23:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:14.794 01:23:10 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:14.794 01:23:10 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.794 01:23:10 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:14.794 [2024-09-28 01:23:10.431111] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
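The alternating (( N > 0 )) / sleep 0.5 / "Still waiting for %s to be gone" traces are a polling loop that waits for the hot-removed controllers' bdevs to disappear from the target. A sketch, assuming the bdev_bdfs helper reconstructed above:

    # Poll until no bdev reports a PCI address any more, i.e. until the
    # hot-removed controllers are fully gone from the SPDK target.
    wait_for_detach() {
        local bdfs
        bdfs=($(bdev_bdfs))
        while (( ${#bdfs[@]} > 0 )); do
            # printf reuses the format once per remaining BDF,
            # producing the "Still waiting for ..." lines in the log
            printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
            sleep 0.5
            bdfs=($(bdev_bdfs))
        done
    }

The printf format string and the 0.5 s interval are verbatim from the trace; the function name and loop framing are assumptions.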
00:10:14.794 [2024-09-28 01:23:10.432372] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:14.794 [2024-09-28 01:23:10.432408] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:14.794 [2024-09-28 01:23:10.432419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.794 [2024-09-28 01:23:10.432435] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:14.794 [2024-09-28 01:23:10.432442] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:14.794 [2024-09-28 01:23:10.432450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.794 [2024-09-28 01:23:10.432457] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:14.794 [2024-09-28 01:23:10.432465] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:14.794 [2024-09-28 01:23:10.432471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.794 [2024-09-28 01:23:10.432479] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:14.794 [2024-09-28 01:23:10.432486] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:14.794 [2024-09-28 01:23:10.432493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.794 01:23:10 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.794 01:23:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:10:14.794 01:23:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:15.052 [2024-09-28 01:23:10.831112] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:10:15.052 [2024-09-28 01:23:10.832347] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:15.052 [2024-09-28 01:23:10.832378] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:15.052 [2024-09-28 01:23:10.832391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:15.052 [2024-09-28 01:23:10.832405] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:15.052 [2024-09-28 01:23:10.832414] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:15.052 [2024-09-28 01:23:10.832420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:15.052 [2024-09-28 01:23:10.832429] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:15.052 [2024-09-28 01:23:10.832436] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:15.052 [2024-09-28 01:23:10.832443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:15.052 [2024-09-28 01:23:10.832451] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:15.052 [2024-09-28 01:23:10.832459] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:15.052 [2024-09-28 01:23:10.832465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:15.052 01:23:10 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:10:15.052 01:23:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:15.052 01:23:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:15.052 01:23:10 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:15.052 01:23:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:15.052 01:23:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:15.052 01:23:10 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.052 01:23:10 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:15.052 01:23:10 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.052 01:23:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:15.052 01:23:10 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:15.309 01:23:11 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:15.309 01:23:11 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:15.309 01:23:11 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:15.309 01:23:11 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:15.309 01:23:11 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:15.309 01:23:11 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:15.309 01:23:11 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:15.309 01:23:11 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:10:15.309 01:23:11 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:15.309 01:23:11 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:15.309 01:23:11 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:27.499 01:23:23 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:10:27.499 01:23:23 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:10:27.499 01:23:23 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:10:27.499 01:23:23 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:27.499 01:23:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:27.499 01:23:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:27.499 01:23:23 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.499 01:23:23 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:27.499 01:23:23 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.499 01:23:23 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:27.499 01:23:23 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:27.499 01:23:23 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:27.499 01:23:23 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:27.499 01:23:23 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:27.499 01:23:23 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:27.499 01:23:23 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:27.499 01:23:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:27.499 01:23:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:27.499 01:23:23 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:27.499 01:23:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:27.499 01:23:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:27.499 01:23:23 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.499 01:23:23 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:27.499 01:23:23 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.499 01:23:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:10:27.499 01:23:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:27.499 [2024-09-28 01:23:23.331335] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
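The echo 1 / echo uio_pci_generic / echo <BDF> / echo '' sequence at sw_hotplug.sh lines 56-62 only shows the values being written, because xtrace does not print redirection targets. The trap installed earlier (echo 1 > /sys/bus/pci/rescan) confirms the rescan path, and the conventional driver_override rebind fits the remaining four writes, but the exact sysfs destinations below are inferred, not read from the log:

    # Inferred re-attach sequence: rescan the PCI bus, then steer each
    # controller to uio_pci_generic via driver_override. All sysfs
    # targets here are assumptions; the trace records only the values.
    reattach_nvmes() {
        echo 1 > /sys/bus/pci/rescan    # re-enumerate removed devices
        for dev in "${nvmes[@]}"; do
            echo uio_pci_generic > "/sys/bus/pci/devices/$dev/driver_override"
            echo "$dev" > "/sys/bus/pci/devices/$dev/driver/unbind" 2>/dev/null || true
            echo "$dev" > /sys/bus/pci/drivers_probe    # rebind per the override
            echo '' > "/sys/bus/pci/devices/$dev/driver_override"   # clear it again
        done
    }

The "Attaching to 0000:00:10.0 / Attached to 0000:00:10.0" pairs earlier in the log are the target noticing the controllers reappear after this kind of sequence.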
00:10:27.499 [2024-09-28 01:23:23.332566] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:27.499 [2024-09-28 01:23:23.332602] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:27.499 [2024-09-28 01:23:23.332613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:27.500 [2024-09-28 01:23:23.332629] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:27.500 [2024-09-28 01:23:23.332636] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:27.500 [2024-09-28 01:23:23.332646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:27.500 [2024-09-28 01:23:23.332654] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:27.500 [2024-09-28 01:23:23.332661] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:27.500 [2024-09-28 01:23:23.332668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:27.500 [2024-09-28 01:23:23.332677] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:27.500 [2024-09-28 01:23:23.332684] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:27.500 [2024-09-28 01:23:23.332691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:28.066 [2024-09-28 01:23:23.731334] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:10:28.066 [2024-09-28 01:23:23.732493] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:28.066 [2024-09-28 01:23:23.732524] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:28.066 [2024-09-28 01:23:23.732536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:28.066 [2024-09-28 01:23:23.732549] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:28.066 [2024-09-28 01:23:23.732559] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:28.066 [2024-09-28 01:23:23.732566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:28.066 [2024-09-28 01:23:23.732575] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:28.066 [2024-09-28 01:23:23.732581] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:28.066 [2024-09-28 01:23:23.732590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:28.066 [2024-09-28 01:23:23.732598] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:28.066 [2024-09-28 01:23:23.732605] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:28.066 [2024-09-28 01:23:23.732611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:28.066 01:23:23 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:10:28.066 01:23:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:28.066 01:23:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:28.066 01:23:23 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:28.066 01:23:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:28.066 01:23:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:28.066 01:23:23 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.066 01:23:23 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:28.066 01:23:23 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.066 01:23:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:28.066 01:23:23 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:28.066 01:23:23 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:28.066 01:23:23 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:28.066 01:23:23 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:28.325 01:23:24 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:28.325 01:23:24 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:28.325 01:23:24 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:28.325 01:23:24 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:28.325 01:23:24 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:10:28.325 01:23:24 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:28.325 01:23:24 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:28.325 01:23:24 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:40.516 01:23:36 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:10:40.516 01:23:36 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:10:40.516 01:23:36 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:10:40.516 01:23:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:40.516 01:23:36 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:40.516 01:23:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:40.516 01:23:36 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.516 01:23:36 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:40.516 01:23:36 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.516 01:23:36 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:40.516 01:23:36 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:40.516 01:23:36 sw_hotplug -- common/autotest_common.sh@717 -- # time=45.20 00:10:40.516 01:23:36 sw_hotplug -- common/autotest_common.sh@718 -- # echo 45.20 00:10:40.516 01:23:36 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:10:40.516 01:23:36 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.20 00:10:40.516 01:23:36 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.20 2 00:10:40.516 remove_attach_helper took 45.20s to complete (handling 2 nvme drive(s)) 01:23:36 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:10:40.516 01:23:36 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.516 01:23:36 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:40.516 01:23:36 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.516 01:23:36 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:10:40.516 01:23:36 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.516 01:23:36 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:40.516 01:23:36 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.516 01:23:36 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:10:40.516 01:23:36 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:10:40.516 01:23:36 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:10:40.516 01:23:36 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:10:40.516 01:23:36 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:10:40.516 01:23:36 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:10:40.516 01:23:36 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:10:40.516 01:23:36 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:10:40.516 01:23:36 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:10:40.516 01:23:36 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:10:40.516 01:23:36 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:10:40.516 01:23:36 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:10:40.516 01:23:36 sw_hotplug -- 
nvme/sw_hotplug.sh@36 -- # sleep 6 00:10:47.075 01:23:42 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:47.075 01:23:42 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:47.075 01:23:42 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:47.075 01:23:42 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:47.075 01:23:42 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:47.075 01:23:42 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:47.075 01:23:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:47.076 01:23:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:47.076 01:23:42 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:47.076 01:23:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:47.076 01:23:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:47.076 01:23:42 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.076 01:23:42 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:47.076 01:23:42 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.076 01:23:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:10:47.076 01:23:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:47.076 [2024-09-28 01:23:42.254819] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:10:47.076 [2024-09-28 01:23:42.255928] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:47.076 [2024-09-28 01:23:42.255962] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:47.076 [2024-09-28 01:23:42.255974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.076 [2024-09-28 01:23:42.255990] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:47.076 [2024-09-28 01:23:42.255998] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:47.076 [2024-09-28 01:23:42.256007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.076 [2024-09-28 01:23:42.256014] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:47.076 [2024-09-28 01:23:42.256022] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:47.076 [2024-09-28 01:23:42.256029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.076 [2024-09-28 01:23:42.256038] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:47.076 [2024-09-28 01:23:42.256044] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:47.076 [2024-09-28 01:23:42.256054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.076 [2024-09-28 01:23:42.654824] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
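The removal half of each hotplug round is the for dev / echo 1 pair at sw_hotplug.sh lines 39-40, immediately followed by the nvme_ctrlr_fail "in failed state" and qpair-abort errors above: the target sees its outstanding admin commands (the ASYNC EVENT REQUESTs) aborted as each controller vanishes. As with the re-attach sequence, the sysfs path is an assumption, since the trace shows only "echo 1":

    # Inferred hot-remove: writing 1 to a device's sysfs "remove" node
    # yanks it from the PCI bus, which is what triggers the
    # "nvme_ctrlr_fail ... in failed state" messages in the log.
    detach_nvmes() {
        for dev in "${nvmes[@]}"; do
            echo 1 > "/sys/bus/pci/devices/$dev/remove"
        done
    }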
00:10:47.076 [2024-09-28 01:23:42.655722] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:47.076 [2024-09-28 01:23:42.655751] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:47.076 [2024-09-28 01:23:42.655763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.076 [2024-09-28 01:23:42.655777] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:47.076 [2024-09-28 01:23:42.655785] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:47.076 [2024-09-28 01:23:42.655792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.076 [2024-09-28 01:23:42.655801] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:47.076 [2024-09-28 01:23:42.655808] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:47.076 [2024-09-28 01:23:42.655816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.076 [2024-09-28 01:23:42.655823] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:47.076 [2024-09-28 01:23:42.655831] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:47.076 [2024-09-28 01:23:42.655837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.076 01:23:42 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:10:47.076 01:23:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:47.076 01:23:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:47.076 01:23:42 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:47.076 01:23:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:47.076 01:23:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:47.076 01:23:42 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.076 01:23:42 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:47.076 01:23:42 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.076 01:23:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:47.076 01:23:42 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:47.076 01:23:42 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:47.076 01:23:42 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:47.076 01:23:42 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:47.076 01:23:42 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:47.076 01:23:42 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:47.076 01:23:42 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:47.076 01:23:42 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:47.076 01:23:42 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:10:47.076 01:23:43 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:47.335 01:23:43 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:47.335 01:23:43 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:59.612 01:23:55 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:10:59.612 01:23:55 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:10:59.612 01:23:55 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:10:59.612 01:23:55 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:59.612 01:23:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:59.612 01:23:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:59.612 01:23:55 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.612 01:23:55 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:59.612 01:23:55 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.612 01:23:55 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:59.612 01:23:55 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:59.612 01:23:55 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:59.612 01:23:55 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:59.612 01:23:55 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:59.612 01:23:55 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:59.612 01:23:55 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:59.612 01:23:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:59.612 01:23:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:59.612 01:23:55 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:59.612 01:23:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:59.612 01:23:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:59.612 01:23:55 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.612 01:23:55 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:59.612 01:23:55 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.613 01:23:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:10:59.613 01:23:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:59.613 [2024-09-28 01:23:55.155052] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:10:59.613 [2024-09-28 01:23:55.156100] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:59.613 [2024-09-28 01:23:55.156137] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:59.613 [2024-09-28 01:23:55.156149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:59.613 [2024-09-28 01:23:55.156170] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:59.613 [2024-09-28 01:23:55.156177] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:59.613 [2024-09-28 01:23:55.156186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:59.613 [2024-09-28 01:23:55.156209] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:59.613 [2024-09-28 01:23:55.156220] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:59.613 [2024-09-28 01:23:55.156227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:59.613 [2024-09-28 01:23:55.156236] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:59.613 [2024-09-28 01:23:55.156243] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:59.613 [2024-09-28 01:23:55.156252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:59.871 01:23:55 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:10:59.871 01:23:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:59.871 01:23:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:59.871 01:23:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:59.871 01:23:55 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:59.871 01:23:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:59.871 01:23:55 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.871 01:23:55 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:59.871 01:23:55 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.871 [2024-09-28 01:23:55.655047] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
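The 42.82 s and 45.20 s figures reported earlier come from the timing_cmd wrapper in common/autotest_common.sh, whose trace shows TIMEFORMAT=%2R being set before the helper runs. Bash's time keyword honors TIMEFORMAT, and %2R prints just the wall-clock seconds with two decimals. A simplified sketch (the real wrapper also juggles stdin and exit statuses, per the [[ -t 0 ]] and exec lines in the trace, which this version omits):

    # Simplified timing wrapper: run a command and capture bash's
    # `time` report (%2R = elapsed real seconds, two decimals).
    timing_cmd_sketch() {
        local TIMEFORMAT=%2R t
        # time's report goes to stderr of the group; 2>&1 routes it into
        # the command substitution. The command's own output is dropped
        # here, whereas the real helper preserves it via extra fds.
        t=$({ time "$@" > /dev/null 2>&1; } 2>&1)
        echo "$t"
    }

    # Hypothetical usage mirroring the log's summary line:
    helper_time=$(timing_cmd_sketch remove_attach_helper 3 6 true)
    printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))\n' \
        "$helper_time" 2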
00:10:59.871 [2024-09-28 01:23:55.656282] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:59.871 [2024-09-28 01:23:55.656313] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:59.871 [2024-09-28 01:23:55.656325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:59.871 [2024-09-28 01:23:55.656339] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:59.871 [2024-09-28 01:23:55.656349] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:59.871 [2024-09-28 01:23:55.656357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:59.871 [2024-09-28 01:23:55.656368] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:59.871 [2024-09-28 01:23:55.656375] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:59.871 [2024-09-28 01:23:55.656384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:59.871 [2024-09-28 01:23:55.656391] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:59.871 [2024-09-28 01:23:55.656399] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:59.871 [2024-09-28 01:23:55.656407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:59.871 01:23:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:10:59.871 01:23:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:00.435 01:23:56 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:00.435 01:23:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:00.435 01:23:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:00.435 01:23:56 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:00.435 01:23:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:00.435 01:23:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:00.435 01:23:56 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.435 01:23:56 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:00.435 01:23:56 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.435 01:23:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:00.435 01:23:56 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:00.435 01:23:56 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:00.435 01:23:56 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:00.435 01:23:56 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:00.435 01:23:56 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:00.693 01:23:56 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:00.693 01:23:56 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:00.693 01:23:56 
sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:00.693 01:23:56 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:00.693 01:23:56 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:00.693 01:23:56 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:00.693 01:23:56 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:12.903 01:24:08 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:12.903 01:24:08 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:12.903 01:24:08 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:12.903 01:24:08 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:12.903 01:24:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:12.903 01:24:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:12.903 01:24:08 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.903 01:24:08 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:12.903 01:24:08 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.903 01:24:08 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:12.903 01:24:08 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:12.903 01:24:08 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:12.903 01:24:08 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:12.903 01:24:08 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:12.903 01:24:08 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:12.903 01:24:08 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:12.903 01:24:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:12.903 01:24:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:12.903 01:24:08 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:12.903 01:24:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:12.903 01:24:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:12.903 01:24:08 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.903 01:24:08 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:12.903 [2024-09-28 01:24:08.555292] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:11:12.903 [2024-09-28 01:24:08.556516] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:12.903 [2024-09-28 01:24:08.556551] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:12.903 [2024-09-28 01:24:08.556562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:12.903 [2024-09-28 01:24:08.556584] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:12.903 [2024-09-28 01:24:08.556593] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:12.903 [2024-09-28 01:24:08.556602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:12.903 [2024-09-28 01:24:08.556610] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:12.903 [2024-09-28 01:24:08.556622] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:12.903 [2024-09-28 01:24:08.556628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:12.903 [2024-09-28 01:24:08.556637] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:12.903 [2024-09-28 01:24:08.556644] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:12.903 [2024-09-28 01:24:08.556652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:12.903 01:24:08 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.903 01:24:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:12.903 01:24:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:13.163 [2024-09-28 01:24:09.055287] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:11:13.163 [2024-09-28 01:24:09.056517] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:13.163 [2024-09-28 01:24:09.056543] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:13.163 [2024-09-28 01:24:09.056554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:13.163 [2024-09-28 01:24:09.056565] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:13.163 [2024-09-28 01:24:09.056574] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:13.163 [2024-09-28 01:24:09.056582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:13.163 [2024-09-28 01:24:09.056591] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:13.163 [2024-09-28 01:24:09.056598] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:13.163 [2024-09-28 01:24:09.056605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:13.163 [2024-09-28 01:24:09.056613] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:13.163 [2024-09-28 01:24:09.056623] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:13.163 [2024-09-28 01:24:09.056629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:13.163 01:24:09 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:13.163 01:24:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:13.163 01:24:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:13.163 01:24:09 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:13.163 01:24:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:13.163 01:24:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:13.163 01:24:09 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.163 01:24:09 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:13.423 01:24:09 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.423 01:24:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:13.423 01:24:09 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:13.423 01:24:09 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:13.423 01:24:09 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:13.423 01:24:09 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:13.423 01:24:09 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:13.423 01:24:09 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:13.423 01:24:09 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:13.423 01:24:09 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:13.423 01:24:09 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
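Putting the pieces together, the traced locals (hotplug_events=3, hotplug_wait=6, use_bdev=true) and the repeating detach / poll / reattach / sleep 12 / BDF-compare pattern suggest an outer loop shaped roughly as below. This is a reconstruction from the xtrace, reusing the sketched helpers above, not the script's actual code:

    # Approximate shape of remove_attach_helper as seen in the trace:
    # N detach/re-attach rounds, verified through the target's bdev
    # list because use_bdev=true. nvmes is assumed to hold the BDFs.
    remove_attach_helper_sketch() {
        local hotplug_events=$1 hotplug_wait=$2 use_bdev=$3
        local bdfs
        while (( hotplug_events-- )); do          # matches sw_hotplug.sh@38
            detach_nvmes                          # sysfs remove, sketched above
            wait_for_detach                       # poll until bdev_bdfs is empty
            reattach_nvmes                        # rescan + driver_override rebind
            sleep $((hotplug_wait * 2))           # the trace shows "sleep 12"
            bdfs=($(bdev_bdfs))
            # line 71's check that every controller came back
            [[ ${bdfs[*]} == "${nvmes[*]}" ]]
        done
    }

The quirk that the log's [[ ... == \0\0\0\0\:\0\0\:\1\0\.\0 ... ]] comparison appears fully backslash-escaped is just xtrace rendering the right-hand side as a literal pattern; the simplified string compare above is behaviorally equivalent for exact matches.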
00:11:13.682 01:24:09 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:13.682 01:24:09 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:13.682 01:24:09 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:25.950 01:24:21 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:25.950 01:24:21 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:25.950 01:24:21 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:25.950 01:24:21 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:25.950 01:24:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:25.950 01:24:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:25.950 01:24:21 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.950 01:24:21 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:25.950 01:24:21 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.950 01:24:21 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:25.950 01:24:21 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:25.950 01:24:21 sw_hotplug -- common/autotest_common.sh@717 -- # time=45.26 00:11:25.950 01:24:21 sw_hotplug -- common/autotest_common.sh@718 -- # echo 45.26 00:11:25.950 01:24:21 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:11:25.950 01:24:21 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.26 00:11:25.950 01:24:21 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.26 2 00:11:25.950 remove_attach_helper took 45.26s to complete (handling 2 nvme drive(s)) 01:24:21 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:11:25.950 01:24:21 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 67568 00:11:25.950 01:24:21 sw_hotplug -- common/autotest_common.sh@950 -- # '[' -z 67568 ']' 00:11:25.950 01:24:21 sw_hotplug -- common/autotest_common.sh@954 -- # kill -0 67568 00:11:25.950 01:24:21 sw_hotplug -- common/autotest_common.sh@955 -- # uname 00:11:25.950 01:24:21 sw_hotplug -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:25.950 01:24:21 sw_hotplug -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67568 00:11:25.950 01:24:21 sw_hotplug -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:25.950 01:24:21 sw_hotplug -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:25.950 killing process with pid 67568 00:11:25.950 01:24:21 sw_hotplug -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67568' 00:11:25.950 01:24:21 sw_hotplug -- common/autotest_common.sh@969 -- # kill 67568 00:11:25.950 01:24:21 sw_hotplug -- common/autotest_common.sh@974 -- # wait 67568 00:11:26.888 01:24:22 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:27.149 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:27.719 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:27.719 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:27.719 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:11:27.719 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:11:27.719 00:11:27.719 real 2m29.833s 00:11:27.719 user 1m51.994s 00:11:27.719 sys 0m16.563s 00:11:27.719 01:24:23 sw_hotplug -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:11:27.719 ************************************ 00:11:27.719 END TEST sw_hotplug 00:11:27.719 ************************************ 00:11:27.719 01:24:23 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:27.719 01:24:23 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:11:27.719 01:24:23 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:11:27.719 01:24:23 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:27.719 01:24:23 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:27.719 01:24:23 -- common/autotest_common.sh@10 -- # set +x 00:11:27.719 ************************************ 00:11:27.719 START TEST nvme_xnvme 00:11:27.719 ************************************ 00:11:27.719 01:24:23 nvme_xnvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:11:27.979 * Looking for test storage... 00:11:27.979 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:11:27.979 01:24:23 nvme_xnvme -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:27.979 01:24:23 nvme_xnvme -- common/autotest_common.sh@1681 -- # lcov --version 00:11:27.979 01:24:23 nvme_xnvme -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:27.979 01:24:23 nvme_xnvme -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:27.979 01:24:23 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:27.979 01:24:23 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:27.979 01:24:23 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:27.979 01:24:23 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:11:27.979 01:24:23 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:11:27.979 01:24:23 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:11:27.979 01:24:23 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:11:27.979 01:24:23 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:11:27.979 01:24:23 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:11:27.979 01:24:23 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:11:27.979 01:24:23 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:27.979 01:24:23 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:11:27.979 01:24:23 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:11:27.979 01:24:23 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:27.979 01:24:23 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:27.979 01:24:23 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:11:27.979 01:24:23 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:11:27.979 01:24:23 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:27.979 01:24:23 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:11:27.979 01:24:23 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:11:27.979 01:24:23 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:11:27.979 01:24:23 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:11:27.979 01:24:23 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:27.979 01:24:23 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:11:27.979 01:24:23 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:11:27.979 01:24:23 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:27.979 01:24:23 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:27.979 01:24:23 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:11:27.979 01:24:23 nvme_xnvme -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:27.979 01:24:23 nvme_xnvme -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:27.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:27.979 --rc genhtml_branch_coverage=1 00:11:27.979 --rc genhtml_function_coverage=1 00:11:27.979 --rc genhtml_legend=1 00:11:27.979 --rc geninfo_all_blocks=1 00:11:27.979 --rc geninfo_unexecuted_blocks=1 00:11:27.979 00:11:27.979 ' 00:11:27.979 01:24:23 nvme_xnvme -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:27.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:27.979 --rc genhtml_branch_coverage=1 00:11:27.979 --rc genhtml_function_coverage=1 00:11:27.979 --rc genhtml_legend=1 00:11:27.979 --rc geninfo_all_blocks=1 00:11:27.979 --rc geninfo_unexecuted_blocks=1 00:11:27.979 00:11:27.979 ' 00:11:27.979 01:24:23 nvme_xnvme -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:27.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:27.980 --rc genhtml_branch_coverage=1 00:11:27.980 --rc genhtml_function_coverage=1 00:11:27.980 --rc genhtml_legend=1 00:11:27.980 --rc geninfo_all_blocks=1 00:11:27.980 --rc geninfo_unexecuted_blocks=1 00:11:27.980 00:11:27.980 ' 00:11:27.980 01:24:23 nvme_xnvme -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:27.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:27.980 --rc genhtml_branch_coverage=1 00:11:27.980 --rc genhtml_function_coverage=1 00:11:27.980 --rc genhtml_legend=1 00:11:27.980 --rc geninfo_all_blocks=1 00:11:27.980 --rc geninfo_unexecuted_blocks=1 00:11:27.980 00:11:27.980 ' 00:11:27.980 01:24:23 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:27.980 01:24:23 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:11:27.980 01:24:23 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:27.980 01:24:23 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:27.980 01:24:23 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:27.980 01:24:23 nvme_xnvme -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.980 01:24:23 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.980 01:24:23 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.980 01:24:23 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:11:27.980 01:24:23 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.980 01:24:23 nvme_xnvme -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 00:11:27.980 01:24:23 nvme_xnvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:27.980 01:24:23 nvme_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:27.980 01:24:23 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:11:27.980 ************************************ 00:11:27.980 START TEST xnvme_to_malloc_dd_copy 00:11:27.980 ************************************ 00:11:27.980 01:24:23 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1125 -- # malloc_to_xnvme_copy 00:11:27.980 01:24:23 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:11:27.980 01:24:23 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:11:27.980 01:24:23 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:11:27.980 01:24:23 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@187 -- # return 00:11:27.980 01:24:23 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 00:11:27.980 01:24:23 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:11:27.980 01:24:23 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:11:27.980 01:24:23 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@18 -- # local io 00:11:27.980 01:24:23 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:11:27.980 01:24:23 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:11:27.980 01:24:23 
nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:11:27.980 01:24:23 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:11:27.980 01:24:23 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:11:27.980 01:24:23 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:11:27.980 01:24:23 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:11:27.980 01:24:23 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:11:27.980 01:24:23 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:11:27.980 01:24:23 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:11:27.980 01:24:23 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:11:27.980 01:24:23 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:11:27.980 01:24:23 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:11:27.980 01:24:23 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:11:27.980 01:24:23 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:11:27.980 01:24:23 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:11:27.980 { 00:11:27.980 "subsystems": [ 00:11:27.980 { 00:11:27.980 "subsystem": "bdev", 00:11:27.980 "config": [ 00:11:27.980 { 00:11:27.980 "params": { 00:11:27.980 "block_size": 512, 00:11:27.980 "num_blocks": 2097152, 00:11:27.980 "name": "malloc0" 00:11:27.980 }, 00:11:27.980 "method": "bdev_malloc_create" 00:11:27.980 }, 00:11:27.980 { 00:11:27.980 "params": { 00:11:27.980 "io_mechanism": "libaio", 00:11:27.980 "filename": "/dev/nullb0", 00:11:27.980 "name": "null0" 00:11:27.980 }, 00:11:27.980 "method": "bdev_xnvme_create" 00:11:27.980 }, 00:11:27.980 { 00:11:27.980 "method": "bdev_wait_for_examine" 00:11:27.980 } 00:11:27.980 ] 00:11:27.980 } 00:11:27.980 ] 00:11:27.980 } 00:11:27.980 [2024-09-28 01:24:23.909284] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
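The --json /dev/fd/62 argument above is a process substitution: gen_conf prints the JSON subsystem block shown, and spdk_dd reads it as its bdev configuration. A standalone sketch of the same invocation, with the binary path and config copied from the trace and a heredoc standing in for gen_conf:

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json <(cat <<'EOF'
  {"subsystems": [{"subsystem": "bdev", "config": [
    {"params": {"block_size": 512, "num_blocks": 2097152, "name": "malloc0"},
     "method": "bdev_malloc_create"},
    {"params": {"io_mechanism": "libaio", "filename": "/dev/nullb0", "name": "null0"},
     "method": "bdev_xnvme_create"},
    {"method": "bdev_wait_for_examine"}]}]}
  EOF
  )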
00:11:27.980 [2024-09-28 01:24:23.909407] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68951 ] 00:11:28.239 [2024-09-28 01:24:24.059046] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:28.497 [2024-09-28 01:24:24.206809] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:34.743  Copying: 270/1024 [MB] (270 MBps) Copying: 572/1024 [MB] (301 MBps) Copying: 873/1024 [MB] (301 MBps) Copying: 1024/1024 [MB] (average 292 MBps) 00:11:34.743 00:11:34.743 01:24:30 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:11:34.743 01:24:30 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:11:34.743 01:24:30 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:11:34.743 01:24:30 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:11:34.743 { 00:11:34.743 "subsystems": [ 00:11:34.743 { 00:11:34.743 "subsystem": "bdev", 00:11:34.743 "config": [ 00:11:34.743 { 00:11:34.743 "params": { 00:11:34.743 "block_size": 512, 00:11:34.743 "num_blocks": 2097152, 00:11:34.743 "name": "malloc0" 00:11:34.743 }, 00:11:34.743 "method": "bdev_malloc_create" 00:11:34.743 }, 00:11:34.743 { 00:11:34.743 "params": { 00:11:34.743 "io_mechanism": "libaio", 00:11:34.743 "filename": "/dev/nullb0", 00:11:34.743 "name": "null0" 00:11:34.743 }, 00:11:34.743 "method": "bdev_xnvme_create" 00:11:34.743 }, 00:11:34.743 { 00:11:34.743 "method": "bdev_wait_for_examine" 00:11:34.743 } 00:11:34.743 ] 00:11:34.743 } 00:11:34.743 ] 00:11:34.743 } 00:11:34.743 [2024-09-28 01:24:30.626409] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
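Both copy directions target /dev/nullb0, the throwaway block device that init_null_blk created at the top of this test with modprobe null_blk gb=1 (null_blk completes I/O without persisting data by default, which makes it a cheap 1 GiB sink and source). To reproduce the target outside the harness, mirroring the modprobe calls in the trace:

  sudo modprobe null_blk gb=1   # creates /dev/nullb0, a 1 GiB test block device
  lsblk -b /dev/nullb0          # sanity check: SIZE should read 1073741824
  # ... run the spdk_dd copies against /dev/nullb0 ...
  sudo modprobe -r null_blk     # teardown, as remove_null_blk does after the last pass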
00:11:34.743 [2024-09-28 01:24:30.626526] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69035 ] 00:11:35.002 [2024-09-28 01:24:30.774906] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:35.002 [2024-09-28 01:24:30.920172] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:41.266  Copying: 304/1024 [MB] (304 MBps) Copying: 609/1024 [MB] (304 MBps) Copying: 913/1024 [MB] (304 MBps) Copying: 1024/1024 [MB] (average 304 MBps) 00:11:41.266 00:11:41.266 01:24:37 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:11:41.266 01:24:37 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:11:41.266 01:24:37 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:11:41.266 01:24:37 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:11:41.266 01:24:37 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:11:41.266 01:24:37 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:11:41.266 { 00:11:41.266 "subsystems": [ 00:11:41.266 { 00:11:41.266 "subsystem": "bdev", 00:11:41.266 "config": [ 00:11:41.266 { 00:11:41.266 "params": { 00:11:41.266 "block_size": 512, 00:11:41.266 "num_blocks": 2097152, 00:11:41.266 "name": "malloc0" 00:11:41.266 }, 00:11:41.266 "method": "bdev_malloc_create" 00:11:41.266 }, 00:11:41.266 { 00:11:41.266 "params": { 00:11:41.266 "io_mechanism": "io_uring", 00:11:41.266 "filename": "/dev/nullb0", 00:11:41.266 "name": "null0" 00:11:41.267 }, 00:11:41.267 "method": "bdev_xnvme_create" 00:11:41.267 }, 00:11:41.267 { 00:11:41.267 "method": "bdev_wait_for_examine" 00:11:41.267 } 00:11:41.267 ] 00:11:41.267 } 00:11:41.267 ] 00:11:41.267 } 00:11:41.267 [2024-09-28 01:24:37.126612] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
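The run starting here is the second leg of the test matrix: xnvme.sh executes the same malloc0-to-null0 copy and copy-back once per I/O mechanism, and only the io_mechanism value in the bdev_xnvme_create params changes between legs. The shape of that loop, reconstructed from the @38/@39/@42/@47 trace lines (SPDK_DD stands in for the build path shown above, and gen_conf is the helper that renders these arrays into the JSON config):

  xnvme_io=(libaio io_uring)
  for io in "${xnvme_io[@]}"; do
    method_bdev_xnvme_create_0["io_mechanism"]=$io
    "$SPDK_DD" --ib=malloc0 --ob=null0 --json <(gen_conf)   # malloc -> xnvme
    "$SPDK_DD" --ib=null0 --ob=malloc0 --json <(gen_conf)   # xnvme -> malloc
  done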
00:11:41.267 [2024-09-28 01:24:37.126702] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69112 ] 00:11:41.525 [2024-09-28 01:24:37.268185] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:41.525 [2024-09-28 01:24:37.414648] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:48.082  Copying: 312/1024 [MB] (312 MBps) Copying: 622/1024 [MB] (310 MBps) Copying: 935/1024 [MB] (312 MBps) Copying: 1024/1024 [MB] (average 311 MBps) 00:11:48.082 00:11:48.082 01:24:43 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:11:48.082 01:24:43 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:11:48.082 01:24:43 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:11:48.082 01:24:43 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:11:48.082 { 00:11:48.082 "subsystems": [ 00:11:48.082 { 00:11:48.082 "subsystem": "bdev", 00:11:48.082 "config": [ 00:11:48.082 { 00:11:48.082 "params": { 00:11:48.082 "block_size": 512, 00:11:48.082 "num_blocks": 2097152, 00:11:48.082 "name": "malloc0" 00:11:48.082 }, 00:11:48.082 "method": "bdev_malloc_create" 00:11:48.082 }, 00:11:48.082 { 00:11:48.082 "params": { 00:11:48.082 "io_mechanism": "io_uring", 00:11:48.082 "filename": "/dev/nullb0", 00:11:48.082 "name": "null0" 00:11:48.082 }, 00:11:48.082 "method": "bdev_xnvme_create" 00:11:48.082 }, 00:11:48.082 { 00:11:48.082 "method": "bdev_wait_for_examine" 00:11:48.082 } 00:11:48.082 ] 00:11:48.082 } 00:11:48.082 ] 00:11:48.082 } 00:11:48.082 [2024-09-28 01:24:43.481946] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
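As the final pass below completes, a rough consistency check: each pass moves 1024 MB, and the three averages so far (292, 304 and 311 MBps) plus the 314 MBps of the fourth pass below put the pure copy time near 13.4 seconds, with the remainder of the 0m25.967s wall time reported after teardown presumably spent on per-invocation setup (module load, bdev creation and examine):

  echo "scale=2; 1024/292 + 1024/304 + 1024/311 + 1024/314" | bc
  # prints 13.41 (seconds of aggregate copy time across the four passes)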
00:11:48.082 [2024-09-28 01:24:43.482066] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69187 ] 00:11:48.082 [2024-09-28 01:24:43.631473] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:48.082 [2024-09-28 01:24:43.789330] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:54.385  Copying: 314/1024 [MB] (314 MBps) Copying: 629/1024 [MB] (315 MBps) Copying: 944/1024 [MB] (315 MBps) Copying: 1024/1024 [MB] (average 314 MBps) 00:11:54.385 00:11:54.385 01:24:49 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:11:54.385 01:24:49 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@191 -- # modprobe -r null_blk 00:11:54.385 00:11:54.385 real 0m25.967s 00:11:54.385 user 0m22.913s 00:11:54.385 sys 0m2.544s 00:11:54.385 01:24:49 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:54.385 ************************************ 00:11:54.385 END TEST xnvme_to_malloc_dd_copy 00:11:54.385 ************************************ 00:11:54.385 01:24:49 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:11:54.385 01:24:49 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:11:54.385 01:24:49 nvme_xnvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:54.385 01:24:49 nvme_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:54.385 01:24:49 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:11:54.385 ************************************ 00:11:54.385 START TEST xnvme_bdevperf 00:11:54.385 ************************************ 00:11:54.385 01:24:49 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1125 -- # xnvme_bdevperf 00:11:54.385 01:24:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:11:54.385 01:24:49 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:11:54.385 01:24:49 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:11:54.385 01:24:49 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@187 -- # return 00:11:54.385 01:24:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:11:54.385 01:24:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:11:54.385 01:24:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@60 -- # local io 00:11:54.385 01:24:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:11:54.385 01:24:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:11:54.385 01:24:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:11:54.385 01:24:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:11:54.385 01:24:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:11:54.385 01:24:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:11:54.385 01:24:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:11:54.385 01:24:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:11:54.385 01:24:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # 
method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:11:54.385 01:24:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:11:54.385 01:24:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:11:54.385 01:24:49 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:11:54.385 01:24:49 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:11:54.385 { 00:11:54.385 "subsystems": [ 00:11:54.385 { 00:11:54.385 "subsystem": "bdev", 00:11:54.385 "config": [ 00:11:54.385 { 00:11:54.385 "params": { 00:11:54.385 "io_mechanism": "libaio", 00:11:54.385 "filename": "/dev/nullb0", 00:11:54.385 "name": "null0" 00:11:54.385 }, 00:11:54.385 "method": "bdev_xnvme_create" 00:11:54.385 }, 00:11:54.385 { 00:11:54.385 "method": "bdev_wait_for_examine" 00:11:54.385 } 00:11:54.385 ] 00:11:54.385 } 00:11:54.385 ] 00:11:54.385 } 00:11:54.385 [2024-09-28 01:24:49.923542] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:11:54.385 [2024-09-28 01:24:49.923662] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69293 ] 00:11:54.385 [2024-09-28 01:24:50.075173] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:54.385 [2024-09-28 01:24:50.229217] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:54.643 Running I/O for 5 seconds... 00:11:59.797 200064.00 IOPS, 781.50 MiB/s 200704.00 IOPS, 784.00 MiB/s 200746.67 IOPS, 784.17 MiB/s 200880.00 IOPS, 784.69 MiB/s 200896.00 IOPS, 784.75 MiB/s 00:11:59.797 Latency(us) 00:11:59.797 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:59.797 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:11:59.797 null0 : 5.00 200831.76 784.50 0.00 0.00 316.32 310.35 1663.61 00:11:59.797 =================================================================================================================== 00:11:59.797 Total : 200831.76 784.50 0.00 0.00 316.32 310.35 1663.61 00:12:00.362 01:24:56 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:12:00.362 01:24:56 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:12:00.362 01:24:56 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:12:00.362 01:24:56 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:12:00.362 01:24:56 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:00.362 01:24:56 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:00.362 { 00:12:00.362 "subsystems": [ 00:12:00.362 { 00:12:00.362 "subsystem": "bdev", 00:12:00.362 "config": [ 00:12:00.362 { 00:12:00.362 "params": { 00:12:00.362 "io_mechanism": "io_uring", 00:12:00.362 "filename": "/dev/nullb0", 00:12:00.362 "name": "null0" 00:12:00.362 }, 00:12:00.362 "method": "bdev_xnvme_create" 00:12:00.362 }, 00:12:00.362 { 00:12:00.362 "method": "bdev_wait_for_examine" 00:12:00.362 } 00:12:00.362 ] 00:12:00.362 } 00:12:00.362 ] 00:12:00.362 } 00:12:00.362 [2024-09-28 01:24:56.138118] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 
initialization... 00:12:00.362 [2024-09-28 01:24:56.138243] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69367 ] 00:12:00.362 [2024-09-28 01:24:56.287225] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:00.620 [2024-09-28 01:24:56.432483] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:00.878 Running I/O for 5 seconds... 00:12:05.983 231552.00 IOPS, 904.50 MiB/s 231424.00 IOPS, 904.00 MiB/s 231381.33 IOPS, 903.83 MiB/s 231184.00 IOPS, 903.06 MiB/s 00:12:05.983 Latency(us) 00:12:05.983 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:05.983 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:05.983 null0 : 5.00 231102.01 902.74 0.00 0.00 274.55 163.84 1518.67 00:12:05.983 =================================================================================================================== 00:12:05.983 Total : 231102.01 902.74 0.00 0.00 274.55 163.84 1518.67 00:12:06.552 01:25:02 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@82 -- # remove_null_blk 00:12:06.552 01:25:02 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@191 -- # modprobe -r null_blk 00:12:06.552 00:12:06.552 real 0m12.448s 00:12:06.552 user 0m9.978s 00:12:06.552 sys 0m2.241s 00:12:06.552 ************************************ 00:12:06.552 END TEST xnvme_bdevperf 00:12:06.552 01:25:02 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:06.552 01:25:02 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:06.552 ************************************ 00:12:06.552 00:12:06.552 real 0m38.690s 00:12:06.552 user 0m33.008s 00:12:06.552 sys 0m4.902s 00:12:06.552 ************************************ 00:12:06.552 END TEST nvme_xnvme 00:12:06.552 ************************************ 00:12:06.552 01:25:02 nvme_xnvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:06.552 01:25:02 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:06.552 01:25:02 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:12:06.552 01:25:02 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:06.552 01:25:02 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:06.552 01:25:02 -- common/autotest_common.sh@10 -- # set +x 00:12:06.552 ************************************ 00:12:06.552 START TEST blockdev_xnvme 00:12:06.552 ************************************ 00:12:06.552 01:25:02 blockdev_xnvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:12:06.552 * Looking for test storage... 
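Before the blockdev output continues, note that the two bdevperf runs above are internally consistent: MiB/s is IOPS times the 4096-byte I/O size (-o 4096) divided by 2^20. Checking the io_uring figure (the libaio run's 200831.76 IOPS works out to its reported 784.50 MiB/s the same way):

  echo "scale=2; 231102.01 * 4096 / 1048576" | bc
  # prints 902.74, matching the null0 MiB/s column in the io_uring summary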
00:12:06.552 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:12:06.552 01:25:02 blockdev_xnvme -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:06.552 01:25:02 blockdev_xnvme -- common/autotest_common.sh@1681 -- # lcov --version 00:12:06.552 01:25:02 blockdev_xnvme -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:06.813 01:25:02 blockdev_xnvme -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:06.813 01:25:02 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:06.814 01:25:02 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:06.814 01:25:02 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:06.814 01:25:02 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:06.814 01:25:02 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:06.814 01:25:02 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:06.814 01:25:02 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:06.814 01:25:02 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:06.814 01:25:02 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:06.814 01:25:02 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:06.814 01:25:02 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:06.814 01:25:02 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:12:06.814 01:25:02 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:12:06.814 01:25:02 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:06.814 01:25:02 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:06.814 01:25:02 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:12:06.814 01:25:02 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:12:06.814 01:25:02 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:06.814 01:25:02 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:12:06.814 01:25:02 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:06.814 01:25:02 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:12:06.814 01:25:02 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:12:06.814 01:25:02 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:06.814 01:25:02 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:12:06.814 01:25:02 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:06.814 01:25:02 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:06.814 01:25:02 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:06.814 01:25:02 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:12:06.814 01:25:02 blockdev_xnvme -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:06.814 01:25:02 blockdev_xnvme -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:06.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:06.814 --rc genhtml_branch_coverage=1 00:12:06.814 --rc genhtml_function_coverage=1 00:12:06.814 --rc genhtml_legend=1 00:12:06.814 --rc geninfo_all_blocks=1 00:12:06.814 --rc geninfo_unexecuted_blocks=1 00:12:06.814 00:12:06.814 ' 00:12:06.814 01:25:02 blockdev_xnvme -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:06.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:06.814 --rc genhtml_branch_coverage=1 00:12:06.814 --rc genhtml_function_coverage=1 00:12:06.814 --rc genhtml_legend=1 
00:12:06.814 --rc geninfo_all_blocks=1 00:12:06.814 --rc geninfo_unexecuted_blocks=1 00:12:06.814 00:12:06.814 ' 00:12:06.814 01:25:02 blockdev_xnvme -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:06.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:06.814 --rc genhtml_branch_coverage=1 00:12:06.814 --rc genhtml_function_coverage=1 00:12:06.814 --rc genhtml_legend=1 00:12:06.814 --rc geninfo_all_blocks=1 00:12:06.814 --rc geninfo_unexecuted_blocks=1 00:12:06.814 00:12:06.814 ' 00:12:06.814 01:25:02 blockdev_xnvme -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:06.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:06.814 --rc genhtml_branch_coverage=1 00:12:06.814 --rc genhtml_function_coverage=1 00:12:06.814 --rc genhtml_legend=1 00:12:06.814 --rc geninfo_all_blocks=1 00:12:06.814 --rc geninfo_unexecuted_blocks=1 00:12:06.814 00:12:06.814 ' 00:12:06.814 01:25:02 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:12:06.814 01:25:02 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:12:06.814 01:25:02 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:12:06.814 01:25:02 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:06.814 01:25:02 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:12:06.814 01:25:02 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:12:06.814 01:25:02 blockdev_xnvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:12:06.814 01:25:02 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:12:06.814 01:25:02 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:12:06.814 01:25:02 blockdev_xnvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:12:06.814 01:25:02 blockdev_xnvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:12:06.814 01:25:02 blockdev_xnvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:12:06.814 01:25:02 blockdev_xnvme -- bdev/blockdev.sh@673 -- # uname -s 00:12:06.814 01:25:02 blockdev_xnvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:12:06.814 01:25:02 blockdev_xnvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:12:06.814 01:25:02 blockdev_xnvme -- bdev/blockdev.sh@681 -- # test_type=xnvme 00:12:06.814 01:25:02 blockdev_xnvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:12:06.814 01:25:02 blockdev_xnvme -- bdev/blockdev.sh@683 -- # dek= 00:12:06.814 01:25:02 blockdev_xnvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:12:06.814 01:25:02 blockdev_xnvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:12:06.814 01:25:02 blockdev_xnvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:12:06.814 01:25:02 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == bdev ]] 00:12:06.814 01:25:02 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == crypto_* ]] 00:12:06.814 01:25:02 blockdev_xnvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:12:06.814 01:25:02 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=69509 00:12:06.814 01:25:02 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:12:06.814 01:25:02 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 69509 00:12:06.814 01:25:02 blockdev_xnvme -- common/autotest_common.sh@831 -- # '[' -z 69509 ']' 00:12:06.814 01:25:02 blockdev_xnvme -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:12:06.814 01:25:02 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:12:06.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:06.814 01:25:02 blockdev_xnvme -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:06.814 01:25:02 blockdev_xnvme -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:06.814 01:25:02 blockdev_xnvme -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:06.814 01:25:02 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:06.814 [2024-09-28 01:25:02.637161] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:12:06.814 [2024-09-28 01:25:02.637330] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69509 ] 00:12:07.073 [2024-09-28 01:25:02.789423] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:07.073 [2024-09-28 01:25:02.946587] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:07.640 01:25:03 blockdev_xnvme -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:07.640 01:25:03 blockdev_xnvme -- common/autotest_common.sh@864 -- # return 0 00:12:07.640 01:25:03 blockdev_xnvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:12:07.640 01:25:03 blockdev_xnvme -- bdev/blockdev.sh@728 -- # setup_xnvme_conf 00:12:07.640 01:25:03 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:12:07.640 01:25:03 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:12:07.640 01:25:03 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:07.898 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:08.157 Waiting for block devices as requested 00:12:08.157 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:12:08.157 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:12:08.157 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:12:08.415 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:12:13.691 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:12:13.691 01:25:09 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:12:13.691 01:25:09 blockdev_xnvme -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:12:13.691 01:25:09 blockdev_xnvme -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:12:13.691 01:25:09 blockdev_xnvme -- common/autotest_common.sh@1656 -- # local nvme bdf 00:12:13.691 01:25:09 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:13.691 01:25:09 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:12:13.691 01:25:09 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:12:13.691 01:25:09 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:12:13.691 01:25:09 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:13.691 01:25:09 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:13.691 01:25:09 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned 
nvme1n1 00:12:13.691 01:25:09 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:12:13.691 01:25:09 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:12:13.691 01:25:09 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:13.691 01:25:09 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:13.691 01:25:09 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:12:13.691 01:25:09 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:12:13.691 01:25:09 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:12:13.691 01:25:09 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:13.691 01:25:09 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:13.691 01:25:09 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:12:13.691 01:25:09 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:12:13.691 01:25:09 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:12:13.691 01:25:09 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:13.691 01:25:09 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:13.691 01:25:09 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:12:13.691 01:25:09 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:12:13.691 01:25:09 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:12:13.691 01:25:09 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:13.691 01:25:09 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:13.691 01:25:09 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:12:13.691 01:25:09 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:12:13.691 01:25:09 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:12:13.691 01:25:09 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:13.691 01:25:09 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:13.691 01:25:09 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:12:13.691 01:25:09 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:12:13.691 01:25:09 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:12:13.691 01:25:09 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:13.691 01:25:09 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:13.691 01:25:09 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:12:13.691 01:25:09 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:13.691 01:25:09 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:13.691 01:25:09 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:13.691 01:25:09 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:12:13.691 01:25:09 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:13.691 01:25:09 blockdev_xnvme 
-- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:13.691 01:25:09 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:13.691 01:25:09 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:12:13.691 01:25:09 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:13.691 01:25:09 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:13.691 01:25:09 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:13.691 01:25:09 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n2 ]] 00:12:13.691 01:25:09 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:13.691 01:25:09 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:13.691 01:25:09 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:13.691 01:25:09 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n3 ]] 00:12:13.691 01:25:09 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:13.691 01:25:09 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:13.691 01:25:09 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:13.691 01:25:09 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:12:13.691 01:25:09 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:13.691 01:25:09 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:13.691 01:25:09 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:12:13.691 01:25:09 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:12:13.691 01:25:09 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.691 01:25:09 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:13.691 01:25:09 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme2n2 nvme2n2 io_uring' 'bdev_xnvme_create /dev/nvme2n3 nvme2n3 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:12:13.691 nvme0n1 00:12:13.691 nvme1n1 00:12:13.691 nvme2n1 00:12:13.691 nvme2n2 00:12:13.691 nvme2n3 00:12:13.691 nvme3n1 00:12:13.691 01:25:09 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.691 01:25:09 blockdev_xnvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:12:13.691 01:25:09 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.691 01:25:09 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:13.691 01:25:09 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.691 01:25:09 blockdev_xnvme -- bdev/blockdev.sh@739 -- # cat 00:12:13.691 01:25:09 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:12:13.691 01:25:09 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.691 01:25:09 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:13.691 01:25:09 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.691 01:25:09 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:12:13.691 01:25:09 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.691 01:25:09 blockdev_xnvme -- 
common/autotest_common.sh@10 -- # set +x 00:12:13.691 01:25:09 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.692 01:25:09 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:12:13.692 01:25:09 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.692 01:25:09 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:13.692 01:25:09 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.692 01:25:09 blockdev_xnvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:12:13.692 01:25:09 blockdev_xnvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:12:13.692 01:25:09 blockdev_xnvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:12:13.692 01:25:09 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.692 01:25:09 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:13.692 01:25:09 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.692 01:25:09 blockdev_xnvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:12:13.692 01:25:09 blockdev_xnvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:12:13.692 01:25:09 blockdev_xnvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "300530e4-0f2d-4c42-bbe2-7683d22b8cc3"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "300530e4-0f2d-4c42-bbe2-7683d22b8cc3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "7a0a53da-2109-4e60-b1d3-ba73aeb8f63f"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "7a0a53da-2109-4e60-b1d3-ba73aeb8f63f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "31b9b6ee-4115-42e1-bb85-65a9f06a8e59"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "31b9b6ee-4115-42e1-bb85-65a9f06a8e59",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' 
"unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "d9ea951a-ce2a-4c90-be50-ac1b448db316"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "d9ea951a-ce2a-4c90-be50-ac1b448db316",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "a598e2db-c1fd-4ce7-8523-70c020dc9cf3"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "a598e2db-c1fd-4ce7-8523-70c020dc9cf3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "7fdfa71b-e6da-48c1-8f94-bdb83cd8ba8e"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "7fdfa71b-e6da-48c1-8f94-bdb83cd8ba8e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:12:13.692 01:25:09 blockdev_xnvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:12:13.692 01:25:09 blockdev_xnvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=nvme0n1 00:12:13.692 01:25:09 blockdev_xnvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:12:13.692 01:25:09 blockdev_xnvme -- bdev/blockdev.sh@753 -- # killprocess 69509 00:12:13.692 01:25:09 
blockdev_xnvme -- common/autotest_common.sh@950 -- # '[' -z 69509 ']' 00:12:13.692 01:25:09 blockdev_xnvme -- common/autotest_common.sh@954 -- # kill -0 69509 00:12:13.692 01:25:09 blockdev_xnvme -- common/autotest_common.sh@955 -- # uname 00:12:13.692 01:25:09 blockdev_xnvme -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:13.692 01:25:09 blockdev_xnvme -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69509 00:12:13.692 killing process with pid 69509 00:12:13.692 01:25:09 blockdev_xnvme -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:13.692 01:25:09 blockdev_xnvme -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:13.692 01:25:09 blockdev_xnvme -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69509' 00:12:13.692 01:25:09 blockdev_xnvme -- common/autotest_common.sh@969 -- # kill 69509 00:12:13.692 01:25:09 blockdev_xnvme -- common/autotest_common.sh@974 -- # wait 69509 00:12:15.065 01:25:10 blockdev_xnvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:12:15.065 01:25:10 blockdev_xnvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:12:15.065 01:25:10 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:12:15.065 01:25:10 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:15.065 01:25:10 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:15.065 ************************************ 00:12:15.065 START TEST bdev_hello_world 00:12:15.065 ************************************ 00:12:15.065 01:25:10 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:12:15.065 [2024-09-28 01:25:10.679628] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:12:15.065 [2024-09-28 01:25:10.679750] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69863 ] 00:12:15.065 [2024-09-28 01:25:10.830879] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:15.066 [2024-09-28 01:25:10.986407] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:15.632 [2024-09-28 01:25:11.270143] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:12:15.632 [2024-09-28 01:25:11.270310] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:12:15.632 [2024-09-28 01:25:11.270333] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:12:15.632 [2024-09-28 01:25:11.271801] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:12:15.632 [2024-09-28 01:25:11.272229] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:12:15.632 [2024-09-28 01:25:11.272258] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:12:15.632 [2024-09-28 01:25:11.272403] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
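The hello_bdev example traced above is self-contained: it loads the bdev config, opens nvme0n1, writes "Hello World!" through an io channel, reads it back, and exits. To repeat it by hand, using the command as run_test invoked it:

  sudo /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1
  # expect the NOTICE sequence above, ending with:
  #   Read string from bdev : Hello World!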
00:12:15.632 00:12:15.632 [2024-09-28 01:25:11.272416] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:12:16.200 00:12:16.200 real 0m1.280s 00:12:16.200 user 0m0.999s 00:12:16.200 sys 0m0.170s 00:12:16.200 ************************************ 00:12:16.200 END TEST bdev_hello_world 00:12:16.200 ************************************ 00:12:16.200 01:25:11 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:16.200 01:25:11 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:12:16.200 01:25:11 blockdev_xnvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:12:16.200 01:25:11 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:16.200 01:25:11 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:16.200 01:25:11 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:16.200 ************************************ 00:12:16.200 START TEST bdev_bounds 00:12:16.200 ************************************ 00:12:16.200 01:25:11 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:12:16.200 01:25:11 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=69899 00:12:16.200 Process bdevio pid: 69899 00:12:16.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:16.200 01:25:11 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:12:16.200 01:25:11 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 69899' 00:12:16.200 01:25:11 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 69899 00:12:16.200 01:25:11 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 69899 ']' 00:12:16.200 01:25:11 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:12:16.200 01:25:11 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:16.200 01:25:11 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:16.200 01:25:11 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:16.200 01:25:11 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:16.200 01:25:11 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:12:16.200 [2024-09-28 01:25:12.011908] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
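bdev_bounds runs its I/O-boundary checks in two cooperating processes: the bdevio app launched above loads the bdevs and waits for RPCs (-w), and tests.py then connects over the RPC socket and calls perform_tests, producing the CUnit listing that follows. A sketch of that pairing, with paths as they appear in the trace (the backgrounding and socket path are illustrative assumptions):

  sudo /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
  # once bdevio is listening on /var/tmp/spdk.sock:
  /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests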
00:12:16.200 [2024-09-28 01:25:12.012036] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69899 ] 00:12:16.459 [2024-09-28 01:25:12.163275] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:16.459 [2024-09-28 01:25:12.320227] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:12:16.459 [2024-09-28 01:25:12.320402] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:12:16.459 [2024-09-28 01:25:12.320539] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:17.024 01:25:12 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:17.024 01:25:12 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:12:17.024 01:25:12 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:12:17.024 I/O targets: 00:12:17.024 nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:12:17.024 nvme1n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:12:17.024 nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:17.024 nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:17.024 nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:17.024 nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:12:17.024 00:12:17.024 00:12:17.024 CUnit - A unit testing framework for C - Version 2.1-3 00:12:17.024 http://cunit.sourceforge.net/ 00:12:17.024 00:12:17.024 00:12:17.024 Suite: bdevio tests on: nvme3n1 00:12:17.024 Test: blockdev write read block ...passed 00:12:17.024 Test: blockdev write zeroes read block ...passed 00:12:17.024 Test: blockdev write zeroes read no split ...passed 00:12:17.283 Test: blockdev write zeroes read split ...passed 00:12:17.283 Test: blockdev write zeroes read split partial ...passed 00:12:17.283 Test: blockdev reset ...passed 00:12:17.283 Test: blockdev write read 8 blocks ...passed 00:12:17.283 Test: blockdev write read size > 128k ...passed 00:12:17.283 Test: blockdev write read invalid size ...passed 00:12:17.283 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:17.283 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:17.283 Test: blockdev write read max offset ...passed 00:12:17.283 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:17.283 Test: blockdev writev readv 8 blocks ...passed 00:12:17.283 Test: blockdev writev readv 30 x 1block ...passed 00:12:17.283 Test: blockdev writev readv block ...passed 00:12:17.283 Test: blockdev writev readv size > 128k ...passed 00:12:17.283 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:17.283 Test: blockdev comparev and writev ...passed 00:12:17.283 Test: blockdev nvme passthru rw ...passed 00:12:17.283 Test: blockdev nvme passthru vendor specific ...passed 00:12:17.283 Test: blockdev nvme admin passthru ...passed 00:12:17.283 Test: blockdev copy ...passed 00:12:17.283 Suite: bdevio tests on: nvme2n3 00:12:17.283 Test: blockdev write read block ...passed 00:12:17.283 Test: blockdev write zeroes read block ...passed 00:12:17.283 Test: blockdev write zeroes read no split ...passed 00:12:17.283 Test: blockdev write zeroes read split ...passed 00:12:17.283 Test: blockdev write zeroes read split partial ...passed 00:12:17.283 Test: blockdev reset ...passed 
00:12:17.283 Test: blockdev write read 8 blocks ...passed 00:12:17.283 Test: blockdev write read size > 128k ...passed 00:12:17.283 Test: blockdev write read invalid size ...passed 00:12:17.283 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:17.283 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:17.283 Test: blockdev write read max offset ...passed 00:12:17.283 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:17.283 Test: blockdev writev readv 8 blocks ...passed 00:12:17.283 Test: blockdev writev readv 30 x 1block ...passed 00:12:17.283 Test: blockdev writev readv block ...passed 00:12:17.283 Test: blockdev writev readv size > 128k ...passed 00:12:17.283 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:17.283 Test: blockdev comparev and writev ...passed 00:12:17.283 Test: blockdev nvme passthru rw ...passed 00:12:17.283 Test: blockdev nvme passthru vendor specific ...passed 00:12:17.283 Test: blockdev nvme admin passthru ...passed 00:12:17.283 Test: blockdev copy ...passed 00:12:17.283 Suite: bdevio tests on: nvme2n2 00:12:17.283 Test: blockdev write read block ...passed 00:12:17.283 Test: blockdev write zeroes read block ...passed 00:12:17.283 Test: blockdev write zeroes read no split ...passed 00:12:17.283 Test: blockdev write zeroes read split ...passed 00:12:17.283 Test: blockdev write zeroes read split partial ...passed 00:12:17.283 Test: blockdev reset ...passed 00:12:17.283 Test: blockdev write read 8 blocks ...passed 00:12:17.283 Test: blockdev write read size > 128k ...passed 00:12:17.283 Test: blockdev write read invalid size ...passed 00:12:17.283 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:17.283 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:17.283 Test: blockdev write read max offset ...passed 00:12:17.283 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:17.283 Test: blockdev writev readv 8 blocks ...passed 00:12:17.283 Test: blockdev writev readv 30 x 1block ...passed 00:12:17.283 Test: blockdev writev readv block ...passed 00:12:17.283 Test: blockdev writev readv size > 128k ...passed 00:12:17.284 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:17.284 Test: blockdev comparev and writev ...passed 00:12:17.284 Test: blockdev nvme passthru rw ...passed 00:12:17.284 Test: blockdev nvme passthru vendor specific ...passed 00:12:17.284 Test: blockdev nvme admin passthru ...passed 00:12:17.284 Test: blockdev copy ...passed 00:12:17.284 Suite: bdevio tests on: nvme2n1 00:12:17.284 Test: blockdev write read block ...passed 00:12:17.284 Test: blockdev write zeroes read block ...passed 00:12:17.284 Test: blockdev write zeroes read no split ...passed 00:12:17.284 Test: blockdev write zeroes read split ...passed 00:12:17.284 Test: blockdev write zeroes read split partial ...passed 00:12:17.284 Test: blockdev reset ...passed 00:12:17.284 Test: blockdev write read 8 blocks ...passed 00:12:17.284 Test: blockdev write read size > 128k ...passed 00:12:17.284 Test: blockdev write read invalid size ...passed 00:12:17.284 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:17.284 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:17.284 Test: blockdev write read max offset ...passed 00:12:17.284 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:17.284 Test: blockdev writev readv 8 blocks 
...passed 00:12:17.284 Test: blockdev writev readv 30 x 1block ...passed 00:12:17.284 Test: blockdev writev readv block ...passed 00:12:17.284 Test: blockdev writev readv size > 128k ...passed 00:12:17.284 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:17.284 Test: blockdev comparev and writev ...passed 00:12:17.284 Test: blockdev nvme passthru rw ...passed 00:12:17.284 Test: blockdev nvme passthru vendor specific ...passed 00:12:17.284 Test: blockdev nvme admin passthru ...passed 00:12:17.284 Test: blockdev copy ...passed 00:12:17.284 Suite: bdevio tests on: nvme1n1 00:12:17.284 Test: blockdev write read block ...passed 00:12:17.284 Test: blockdev write zeroes read block ...passed 00:12:17.284 Test: blockdev write zeroes read no split ...passed 00:12:17.284 Test: blockdev write zeroes read split ...passed 00:12:17.284 Test: blockdev write zeroes read split partial ...passed 00:12:17.284 Test: blockdev reset ...passed 00:12:17.284 Test: blockdev write read 8 blocks ...passed 00:12:17.284 Test: blockdev write read size > 128k ...passed 00:12:17.284 Test: blockdev write read invalid size ...passed 00:12:17.284 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:17.284 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:17.284 Test: blockdev write read max offset ...passed 00:12:17.284 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:17.284 Test: blockdev writev readv 8 blocks ...passed 00:12:17.284 Test: blockdev writev readv 30 x 1block ...passed 00:12:17.284 Test: blockdev writev readv block ...passed 00:12:17.284 Test: blockdev writev readv size > 128k ...passed 00:12:17.284 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:17.284 Test: blockdev comparev and writev ...passed 00:12:17.284 Test: blockdev nvme passthru rw ...passed 00:12:17.284 Test: blockdev nvme passthru vendor specific ...passed 00:12:17.284 Test: blockdev nvme admin passthru ...passed 00:12:17.284 Test: blockdev copy ...passed 00:12:17.284 Suite: bdevio tests on: nvme0n1 00:12:17.284 Test: blockdev write read block ...passed 00:12:17.284 Test: blockdev write zeroes read block ...passed 00:12:17.542 Test: blockdev write zeroes read no split ...passed 00:12:17.542 Test: blockdev write zeroes read split ...passed 00:12:17.542 Test: blockdev write zeroes read split partial ...passed 00:12:17.542 Test: blockdev reset ...passed 00:12:17.542 Test: blockdev write read 8 blocks ...passed 00:12:17.542 Test: blockdev write read size > 128k ...passed 00:12:17.542 Test: blockdev write read invalid size ...passed 00:12:17.542 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:17.542 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:17.542 Test: blockdev write read max offset ...passed 00:12:17.542 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:17.542 Test: blockdev writev readv 8 blocks ...passed 00:12:17.542 Test: blockdev writev readv 30 x 1block ...passed 00:12:17.542 Test: blockdev writev readv block ...passed 00:12:17.542 Test: blockdev writev readv size > 128k ...passed 00:12:17.542 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:17.542 Test: blockdev comparev and writev ...passed 00:12:17.542 Test: blockdev nvme passthru rw ...passed 00:12:17.542 Test: blockdev nvme passthru vendor specific ...passed 00:12:17.542 Test: blockdev nvme admin passthru ...passed 00:12:17.542 Test: blockdev copy ...passed 
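Every bdev gets the identical CUnit suite, so the Run Summary that follows is just multiplication: 6 suites × 23 tests per suite = 138 tests and 780 assertions, all passed, in under a second.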
00:12:17.542 00:12:17.542 Run Summary: Type Total Ran Passed Failed Inactive 00:12:17.542 suites 6 6 n/a 0 0 00:12:17.542 tests 138 138 138 0 0 00:12:17.542 asserts 780 780 780 0 n/a 00:12:17.542 00:12:17.542 Elapsed time = 0.878 seconds 00:12:17.542 0 00:12:17.542 01:25:13 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 69899 00:12:17.542 01:25:13 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 69899 ']' 00:12:17.542 01:25:13 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 69899 00:12:17.542 01:25:13 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:12:17.542 01:25:13 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:17.542 01:25:13 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69899 00:12:17.542 killing process with pid 69899 00:12:17.542 01:25:13 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:17.542 01:25:13 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:17.542 01:25:13 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69899' 00:12:17.542 01:25:13 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@969 -- # kill 69899 00:12:17.542 01:25:13 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@974 -- # wait 69899 00:12:18.109 01:25:13 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:12:18.109 00:12:18.109 real 0m1.992s 00:12:18.109 user 0m4.757s 00:12:18.109 sys 0m0.308s 00:12:18.109 01:25:13 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:18.109 ************************************ 00:12:18.109 END TEST bdev_bounds 00:12:18.109 ************************************ 00:12:18.109 01:25:13 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:12:18.109 01:25:13 blockdev_xnvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:12:18.110 01:25:13 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:12:18.110 01:25:13 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:18.110 01:25:13 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:18.110 ************************************ 00:12:18.110 START TEST bdev_nbd 00:12:18.110 ************************************ 00:12:18.110 01:25:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:12:18.110 01:25:13 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:12:18.110 01:25:13 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:12:18.110 01:25:13 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:18.110 01:25:13 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:18.110 01:25:13 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:12:18.110 01:25:13 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:12:18.110 01:25:13 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
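bdev_nbd exercises the same six bdevs through the kernel NBD driver in two passes: a start/stop pass that lets SPDK pick each /dev/nbdX, and a data pass further below that pins device nodes explicitly and round-trips real data. The waitfornbd helper that dominates the xtrace below first polls /proc/partitions for the device, then proves it readable with one O_DIRECT read. A simplified sketch, assuming a poll delay that the xtrace does not show:

    waitfornbd() {
        local nbd_name=$1 i tmp=/tmp/nbdtest   # the harness writes to $rootdir/test/bdev/nbdtest
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1                          # assumed interval between polls
        done
        for ((i = 1; i <= 20; i++)); do
            # one 4 KiB direct read must succeed and yield a non-empty file
            dd if="/dev/$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct 2>/dev/null &&
                [[ $(stat -c %s "$tmp") != 0 ]] && break
            sleep 0.1
        done
        rm -f "$tmp"
    }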
00:12:18.110 01:25:13 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:12:18.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:12:18.110 01:25:13 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:12:18.110 01:25:13 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:12:18.110 01:25:13 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:12:18.110 01:25:13 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:18.110 01:25:13 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:12:18.110 01:25:13 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:12:18.110 01:25:13 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:12:18.110 01:25:13 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=69953 00:12:18.110 01:25:13 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:12:18.110 01:25:13 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 69953 /var/tmp/spdk-nbd.sock 00:12:18.110 01:25:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 69953 ']' 00:12:18.110 01:25:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:12:18.110 01:25:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:18.110 01:25:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:12:18.110 01:25:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:18.110 01:25:13 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:12:18.110 01:25:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:12:18.369 [2024-09-28 01:25:14.064741] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
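The NBD phases need a long-lived SPDK process to own the bdevs; bdev_svc plays that role, listening on its own socket so rpc.py can attach and detach kernel devices. The moving parts, condensed from this run:

    ./test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json ./test/bdev/bdev.json &
    # start/stop pass: let SPDK pick the device node
    ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1
    # data pass: pin the node explicitly
    ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0
    ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks        # JSON list of active mappings
    ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0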
00:12:18.369 [2024-09-28 01:25:14.065003] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:18.369 [2024-09-28 01:25:14.215967] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:18.628 [2024-09-28 01:25:14.374050] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:19.194 01:25:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:19.194 01:25:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:12:19.194 01:25:14 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:12:19.194 01:25:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:19.194 01:25:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:12:19.194 01:25:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:12:19.194 01:25:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:12:19.194 01:25:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:19.194 01:25:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:12:19.194 01:25:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:12:19.194 01:25:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:12:19.194 01:25:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:12:19.194 01:25:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:12:19.194 01:25:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:19.194 01:25:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:12:19.452 01:25:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:12:19.452 01:25:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:12:19.452 01:25:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:12:19.452 01:25:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:12:19.452 01:25:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:19.452 01:25:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:19.452 01:25:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:19.452 01:25:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:12:19.452 01:25:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:19.452 01:25:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:19.452 01:25:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:19.452 01:25:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:19.452 
1+0 records in 00:12:19.452 1+0 records out 00:12:19.452 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000285379 s, 14.4 MB/s 00:12:19.452 01:25:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:19.452 01:25:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:19.452 01:25:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:19.452 01:25:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:19.452 01:25:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:19.452 01:25:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:19.452 01:25:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:19.452 01:25:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:12:19.452 01:25:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:12:19.452 01:25:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:12:19.452 01:25:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:12:19.452 01:25:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:12:19.452 01:25:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:19.452 01:25:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:19.452 01:25:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:19.452 01:25:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:12:19.452 01:25:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:19.452 01:25:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:19.452 01:25:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:19.452 01:25:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:19.452 1+0 records in 00:12:19.452 1+0 records out 00:12:19.452 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000463349 s, 8.8 MB/s 00:12:19.452 01:25:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:19.452 01:25:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:19.452 01:25:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:19.452 01:25:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:19.452 01:25:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:19.452 01:25:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:19.453 01:25:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:19.453 01:25:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:12:19.711 01:25:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:12:19.711 01:25:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:12:19.711 01:25:15 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:12:19.711 01:25:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:12:19.711 01:25:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:19.711 01:25:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:19.711 01:25:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:19.711 01:25:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:12:19.711 01:25:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:19.711 01:25:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:19.711 01:25:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:19.711 01:25:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:19.711 1+0 records in 00:12:19.711 1+0 records out 00:12:19.711 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000412014 s, 9.9 MB/s 00:12:19.711 01:25:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:19.711 01:25:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:19.711 01:25:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:19.711 01:25:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:19.711 01:25:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:19.711 01:25:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:19.711 01:25:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:19.711 01:25:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 00:12:19.969 01:25:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:12:19.969 01:25:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:12:19.969 01:25:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:12:19.969 01:25:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:12:19.969 01:25:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:19.969 01:25:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:19.969 01:25:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:19.969 01:25:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:12:19.969 01:25:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:19.969 01:25:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:19.969 01:25:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:19.969 01:25:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:19.969 1+0 records in 00:12:19.969 1+0 records out 00:12:19.969 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000457226 s, 9.0 MB/s 00:12:19.969 01:25:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:19.969 01:25:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:19.969 01:25:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:19.969 01:25:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:19.969 01:25:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:19.969 01:25:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:19.969 01:25:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:19.969 01:25:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 00:12:20.227 01:25:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:12:20.227 01:25:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:12:20.227 01:25:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:12:20.227 01:25:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:12:20.227 01:25:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:20.227 01:25:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:20.227 01:25:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:20.227 01:25:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:12:20.227 01:25:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:20.227 01:25:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:20.227 01:25:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:20.227 01:25:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:20.227 1+0 records in 00:12:20.227 1+0 records out 00:12:20.227 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000281996 s, 14.5 MB/s 00:12:20.227 01:25:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:20.227 01:25:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:20.227 01:25:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:20.227 01:25:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:20.227 01:25:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:20.227 01:25:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:20.227 01:25:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:20.227 01:25:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:12:20.485 01:25:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:12:20.485 01:25:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:12:20.485 01:25:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:12:20.485 01:25:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:12:20.485 01:25:16 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:20.485 01:25:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:20.485 01:25:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:20.485 01:25:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:12:20.485 01:25:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:20.485 01:25:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:20.485 01:25:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:20.485 01:25:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:20.485 1+0 records in 00:12:20.485 1+0 records out 00:12:20.485 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000382857 s, 10.7 MB/s 00:12:20.485 01:25:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:20.485 01:25:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:20.485 01:25:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:20.485 01:25:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:20.485 01:25:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:20.485 01:25:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:20.485 01:25:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:20.485 01:25:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:20.742 01:25:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:12:20.742 { 00:12:20.742 "nbd_device": "/dev/nbd0", 00:12:20.742 "bdev_name": "nvme0n1" 00:12:20.742 }, 00:12:20.742 { 00:12:20.742 "nbd_device": "/dev/nbd1", 00:12:20.742 "bdev_name": "nvme1n1" 00:12:20.742 }, 00:12:20.742 { 00:12:20.742 "nbd_device": "/dev/nbd2", 00:12:20.742 "bdev_name": "nvme2n1" 00:12:20.742 }, 00:12:20.742 { 00:12:20.742 "nbd_device": "/dev/nbd3", 00:12:20.742 "bdev_name": "nvme2n2" 00:12:20.742 }, 00:12:20.742 { 00:12:20.743 "nbd_device": "/dev/nbd4", 00:12:20.743 "bdev_name": "nvme2n3" 00:12:20.743 }, 00:12:20.743 { 00:12:20.743 "nbd_device": "/dev/nbd5", 00:12:20.743 "bdev_name": "nvme3n1" 00:12:20.743 } 00:12:20.743 ]' 00:12:20.743 01:25:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:12:20.743 01:25:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:12:20.743 { 00:12:20.743 "nbd_device": "/dev/nbd0", 00:12:20.743 "bdev_name": "nvme0n1" 00:12:20.743 }, 00:12:20.743 { 00:12:20.743 "nbd_device": "/dev/nbd1", 00:12:20.743 "bdev_name": "nvme1n1" 00:12:20.743 }, 00:12:20.743 { 00:12:20.743 "nbd_device": "/dev/nbd2", 00:12:20.743 "bdev_name": "nvme2n1" 00:12:20.743 }, 00:12:20.743 { 00:12:20.743 "nbd_device": "/dev/nbd3", 00:12:20.743 "bdev_name": "nvme2n2" 00:12:20.743 }, 00:12:20.743 { 00:12:20.743 "nbd_device": "/dev/nbd4", 00:12:20.743 "bdev_name": "nvme2n3" 00:12:20.743 }, 00:12:20.743 { 00:12:20.743 "nbd_device": "/dev/nbd5", 00:12:20.743 "bdev_name": "nvme3n1" 00:12:20.743 } 00:12:20.743 ]' 00:12:20.743 01:25:16 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:12:20.743 01:25:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:12:20.743 01:25:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:20.743 01:25:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:12:20.743 01:25:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:20.743 01:25:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:12:20.743 01:25:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:20.743 01:25:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:21.000 01:25:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:21.000 01:25:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:21.000 01:25:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:21.000 01:25:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:21.000 01:25:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:21.000 01:25:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:21.000 01:25:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:21.001 01:25:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:21.001 01:25:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:21.001 01:25:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:21.259 01:25:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:21.259 01:25:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:21.259 01:25:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:21.259 01:25:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:21.259 01:25:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:21.259 01:25:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:21.259 01:25:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:21.259 01:25:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:21.259 01:25:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:21.259 01:25:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:12:21.259 01:25:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:12:21.259 01:25:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:12:21.259 01:25:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:12:21.259 01:25:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:21.259 01:25:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:21.259 01:25:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:12:21.259 01:25:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:21.259 01:25:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:21.259 01:25:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:21.259 01:25:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:12:21.517 01:25:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:12:21.517 01:25:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:12:21.517 01:25:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:12:21.517 01:25:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:21.518 01:25:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:21.518 01:25:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:12:21.518 01:25:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:21.518 01:25:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:21.518 01:25:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:21.518 01:25:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:12:21.776 01:25:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:12:21.776 01:25:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:12:21.776 01:25:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:12:21.776 01:25:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:21.776 01:25:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:21.776 01:25:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:12:21.776 01:25:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:21.776 01:25:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:21.776 01:25:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:21.776 01:25:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:12:22.034 01:25:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:12:22.034 01:25:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:12:22.034 01:25:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:12:22.034 01:25:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:22.034 01:25:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:22.034 01:25:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:12:22.034 01:25:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:22.034 01:25:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:22.034 01:25:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:22.034 01:25:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:22.034 01:25:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:22.294 01:25:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:22.294 01:25:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:22.294 01:25:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:22.294 01:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:22.294 01:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:12:22.294 01:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:22.294 01:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:12:22.294 01:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:12:22.294 01:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:12:22.294 01:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:12:22.294 01:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:12:22.294 01:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:12:22.294 01:25:18 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:12:22.294 01:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:22.294 01:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:12:22.294 01:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:12:22.294 01:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:22.294 01:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:12:22.294 01:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:12:22.294 01:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:22.294 01:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:12:22.294 01:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:22.294 01:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:22.294 01:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:22.294 01:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:12:22.294 01:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:22.294 01:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:22.294 01:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:12:22.553 /dev/nbd0 00:12:22.553 01:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:22.553 01:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:22.553 01:25:18 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:12:22.553 01:25:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:22.553 01:25:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:22.553 01:25:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:22.553 01:25:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:12:22.553 01:25:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:22.553 01:25:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:22.553 01:25:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:22.553 01:25:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:22.553 1+0 records in 00:12:22.553 1+0 records out 00:12:22.553 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000641664 s, 6.4 MB/s 00:12:22.553 01:25:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:22.553 01:25:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:22.553 01:25:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:22.553 01:25:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:22.553 01:25:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:22.553 01:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:22.553 01:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:22.554 01:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:12:22.554 /dev/nbd1 00:12:22.554 01:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:22.811 01:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:22.811 01:25:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:12:22.811 01:25:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:22.811 01:25:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:22.811 01:25:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:22.811 01:25:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:12:22.811 01:25:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:22.811 01:25:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:22.811 01:25:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:22.811 01:25:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:22.811 1+0 records in 00:12:22.811 1+0 records out 00:12:22.811 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000580955 s, 7.1 MB/s 00:12:22.811 01:25:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:22.811 01:25:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:22.811 01:25:18 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:22.811 01:25:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:22.811 01:25:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:22.811 01:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:22.811 01:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:22.811 01:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd10 00:12:22.811 /dev/nbd10 00:12:22.811 01:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:12:22.811 01:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:12:22.811 01:25:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:12:22.811 01:25:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:22.811 01:25:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:22.811 01:25:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:22.811 01:25:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:12:22.811 01:25:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:22.811 01:25:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:22.811 01:25:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:22.811 01:25:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:22.811 1+0 records in 00:12:22.811 1+0 records out 00:12:22.811 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000469123 s, 8.7 MB/s 00:12:22.811 01:25:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:22.811 01:25:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:22.811 01:25:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:22.811 01:25:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:22.811 01:25:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:22.811 01:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:22.811 01:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:22.811 01:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 /dev/nbd11 00:12:23.069 /dev/nbd11 00:12:23.069 01:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:12:23.069 01:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:12:23.069 01:25:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:12:23.069 01:25:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:23.069 01:25:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:23.069 01:25:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:23.069 01:25:18 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:12:23.069 01:25:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:23.069 01:25:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:23.069 01:25:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:23.069 01:25:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:23.069 1+0 records in 00:12:23.069 1+0 records out 00:12:23.069 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00106028 s, 3.9 MB/s 00:12:23.070 01:25:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:23.070 01:25:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:23.070 01:25:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:23.070 01:25:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:23.070 01:25:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:23.070 01:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:23.070 01:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:23.070 01:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 /dev/nbd12 00:12:23.330 /dev/nbd12 00:12:23.330 01:25:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:12:23.330 01:25:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:12:23.330 01:25:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:12:23.330 01:25:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:23.330 01:25:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:23.330 01:25:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:23.330 01:25:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 00:12:23.330 01:25:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:23.330 01:25:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:23.330 01:25:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:23.330 01:25:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:23.330 1+0 records in 00:12:23.330 1+0 records out 00:12:23.330 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000898169 s, 4.6 MB/s 00:12:23.330 01:25:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:23.330 01:25:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:23.330 01:25:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:23.330 01:25:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:23.330 01:25:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:23.330 01:25:19 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:23.330 01:25:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:23.330 01:25:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:12:23.589 /dev/nbd13 00:12:23.589 01:25:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:12:23.589 01:25:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:12:23.589 01:25:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:12:23.589 01:25:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:23.589 01:25:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:23.589 01:25:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:23.589 01:25:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:12:23.589 01:25:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:23.589 01:25:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:23.589 01:25:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:23.589 01:25:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:23.589 1+0 records in 00:12:23.589 1+0 records out 00:12:23.589 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00100472 s, 4.1 MB/s 00:12:23.589 01:25:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:23.589 01:25:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:23.589 01:25:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:23.589 01:25:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:23.589 01:25:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:23.589 01:25:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:23.589 01:25:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:23.589 01:25:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:23.589 01:25:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:23.589 01:25:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:23.849 01:25:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:12:23.849 { 00:12:23.849 "nbd_device": "/dev/nbd0", 00:12:23.849 "bdev_name": "nvme0n1" 00:12:23.849 }, 00:12:23.849 { 00:12:23.849 "nbd_device": "/dev/nbd1", 00:12:23.849 "bdev_name": "nvme1n1" 00:12:23.849 }, 00:12:23.849 { 00:12:23.849 "nbd_device": "/dev/nbd10", 00:12:23.849 "bdev_name": "nvme2n1" 00:12:23.849 }, 00:12:23.849 { 00:12:23.849 "nbd_device": "/dev/nbd11", 00:12:23.849 "bdev_name": "nvme2n2" 00:12:23.849 }, 00:12:23.849 { 00:12:23.849 "nbd_device": "/dev/nbd12", 00:12:23.849 "bdev_name": "nvme2n3" 00:12:23.849 }, 00:12:23.849 { 00:12:23.849 "nbd_device": "/dev/nbd13", 00:12:23.849 "bdev_name": "nvme3n1" 00:12:23.849 } 00:12:23.849 ]' 00:12:23.849 01:25:19 blockdev_xnvme.bdev_nbd 
-- bdev/nbd_common.sh@64 -- # echo '[ 00:12:23.849 { 00:12:23.849 "nbd_device": "/dev/nbd0", 00:12:23.849 "bdev_name": "nvme0n1" 00:12:23.849 }, 00:12:23.849 { 00:12:23.849 "nbd_device": "/dev/nbd1", 00:12:23.849 "bdev_name": "nvme1n1" 00:12:23.849 }, 00:12:23.849 { 00:12:23.849 "nbd_device": "/dev/nbd10", 00:12:23.849 "bdev_name": "nvme2n1" 00:12:23.849 }, 00:12:23.849 { 00:12:23.849 "nbd_device": "/dev/nbd11", 00:12:23.849 "bdev_name": "nvme2n2" 00:12:23.849 }, 00:12:23.849 { 00:12:23.849 "nbd_device": "/dev/nbd12", 00:12:23.849 "bdev_name": "nvme2n3" 00:12:23.849 }, 00:12:23.849 { 00:12:23.849 "nbd_device": "/dev/nbd13", 00:12:23.849 "bdev_name": "nvme3n1" 00:12:23.849 } 00:12:23.849 ]' 00:12:23.849 01:25:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:23.849 01:25:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:12:23.849 /dev/nbd1 00:12:23.849 /dev/nbd10 00:12:23.849 /dev/nbd11 00:12:23.849 /dev/nbd12 00:12:23.849 /dev/nbd13' 00:12:23.849 01:25:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:12:23.849 /dev/nbd1 00:12:23.849 /dev/nbd10 00:12:23.849 /dev/nbd11 00:12:23.849 /dev/nbd12 00:12:23.849 /dev/nbd13' 00:12:23.849 01:25:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:23.849 01:25:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:12:23.849 01:25:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:12:23.849 01:25:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:12:23.849 01:25:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:12:23.849 01:25:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:12:23.849 01:25:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:23.849 01:25:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:23.849 01:25:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:12:23.849 01:25:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:23.849 01:25:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:12:23.849 01:25:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:12:23.849 256+0 records in 00:12:23.849 256+0 records out 00:12:23.849 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0059953 s, 175 MB/s 00:12:23.849 01:25:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:23.849 01:25:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:12:24.109 256+0 records in 00:12:24.109 256+0 records out 00:12:24.109 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.218135 s, 4.8 MB/s 00:12:24.109 01:25:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:24.109 01:25:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:12:24.370 256+0 records in 00:12:24.370 256+0 records out 00:12:24.370 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.297217 s, 
3.5 MB/s 00:12:24.370 01:25:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:24.370 01:25:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:12:24.631 256+0 records in 00:12:24.631 256+0 records out 00:12:24.631 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.174492 s, 6.0 MB/s 00:12:24.631 01:25:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:24.631 01:25:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:12:24.893 256+0 records in 00:12:24.893 256+0 records out 00:12:24.893 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.215792 s, 4.9 MB/s 00:12:24.893 01:25:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:24.893 01:25:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:12:24.893 256+0 records in 00:12:24.893 256+0 records out 00:12:24.893 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.178249 s, 5.9 MB/s 00:12:24.893 01:25:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:24.893 01:25:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:12:25.154 256+0 records in 00:12:25.154 256+0 records out 00:12:25.154 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.204879 s, 5.1 MB/s 00:12:25.154 01:25:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:12:25.154 01:25:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:25.154 01:25:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:25.154 01:25:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:12:25.154 01:25:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:25.154 01:25:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:12:25.154 01:25:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:12:25.154 01:25:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:25.154 01:25:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:12:25.154 01:25:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:25.154 01:25:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:12:25.154 01:25:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:25.154 01:25:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:12:25.154 01:25:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:25.154 01:25:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:12:25.154 
01:25:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:25.154 01:25:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:12:25.154 01:25:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:25.154 01:25:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:12:25.154 01:25:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:25.154 01:25:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:12:25.154 01:25:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:25.154 01:25:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:25.154 01:25:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:25.154 01:25:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:12:25.154 01:25:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:25.154 01:25:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:25.414 01:25:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:25.414 01:25:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:25.414 01:25:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:25.414 01:25:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:25.414 01:25:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:25.414 01:25:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:25.414 01:25:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:25.414 01:25:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:25.414 01:25:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:25.414 01:25:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:25.674 01:25:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:25.674 01:25:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:25.674 01:25:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:25.675 01:25:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:25.675 01:25:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:25.675 01:25:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:25.675 01:25:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:25.675 01:25:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:25.675 01:25:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:25.675 01:25:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk 
/dev/nbd10 00:12:25.936 01:25:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:12:25.936 01:25:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:12:25.936 01:25:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:12:25.936 01:25:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:25.936 01:25:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:25.936 01:25:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:12:25.936 01:25:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:25.936 01:25:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:25.936 01:25:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:25.936 01:25:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:12:26.201 01:25:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:12:26.201 01:25:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:12:26.201 01:25:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:12:26.201 01:25:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:26.201 01:25:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:26.201 01:25:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:12:26.201 01:25:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:26.201 01:25:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:26.201 01:25:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:26.201 01:25:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:12:26.201 01:25:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:12:26.201 01:25:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:12:26.201 01:25:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:12:26.201 01:25:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:26.201 01:25:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:26.201 01:25:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:12:26.201 01:25:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:26.201 01:25:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:26.201 01:25:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:26.201 01:25:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:12:26.506 01:25:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:12:26.506 01:25:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:12:26.506 01:25:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:12:26.506 01:25:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:26.506 01:25:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:26.506 
01:25:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:12:26.506 01:25:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:26.506 01:25:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:26.506 01:25:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:26.506 01:25:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:26.506 01:25:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:26.778 01:25:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:26.778 01:25:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:26.778 01:25:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:26.778 01:25:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:26.778 01:25:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:26.778 01:25:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:12:26.778 01:25:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:12:26.778 01:25:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:12:26.778 01:25:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:12:26.778 01:25:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:12:26.778 01:25:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:12:26.778 01:25:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:12:26.778 01:25:22 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:12:26.778 01:25:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:26.778 01:25:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:12:26.778 01:25:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:12:27.038 malloc_lvol_verify 00:12:27.038 01:25:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:12:27.038 76ef1713-c4cd-4bb1-a55d-b1344b73c8c0 00:12:27.038 01:25:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:12:27.299 42bbc78c-b951-4c39-82d0-3c9e83d295d0 00:12:27.299 01:25:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:12:27.560 /dev/nbd0 00:12:27.560 01:25:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:12:27.560 01:25:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:12:27.560 01:25:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:12:27.560 01:25:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:12:27.560 01:25:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:12:27.560 mke2fs 1.47.0 (5-Feb-2023) 00:12:27.560 Discarding device blocks: 0/4096 
done 00:12:27.560 Creating filesystem with 4096 1k blocks and 1024 inodes 00:12:27.560 00:12:27.560 Allocating group tables: 0/1 done 00:12:27.560 Writing inode tables: 0/1 done 00:12:27.560 Creating journal (1024 blocks): done 00:12:27.560 Writing superblocks and filesystem accounting information: 0/1 done 00:12:27.560 00:12:27.560 01:25:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:12:27.560 01:25:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:27.560 01:25:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:27.560 01:25:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:27.560 01:25:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:12:27.560 01:25:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:27.560 01:25:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:27.822 01:25:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:27.822 01:25:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:27.822 01:25:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:27.822 01:25:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:27.822 01:25:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:27.822 01:25:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:27.822 01:25:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:27.822 01:25:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:27.822 01:25:23 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 69953 00:12:27.822 01:25:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 69953 ']' 00:12:27.822 01:25:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 69953 00:12:27.822 01:25:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:12:27.822 01:25:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:27.822 01:25:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69953 00:12:27.822 01:25:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:27.822 01:25:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:27.822 killing process with pid 69953 00:12:27.822 01:25:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69953' 00:12:27.822 01:25:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@969 -- # kill 69953 00:12:27.822 01:25:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@974 -- # wait 69953 00:12:28.768 01:25:24 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:12:28.768 00:12:28.768 real 0m10.506s 00:12:28.768 user 0m14.180s 00:12:28.768 sys 0m3.487s 00:12:28.768 ************************************ 00:12:28.768 END TEST bdev_nbd 00:12:28.768 ************************************ 00:12:28.768 01:25:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:28.768 01:25:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 
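Note: the bdev_nbd pass that completes above reduces to the shell sketch below. The rpc.py path, RPC socket, bdev/lvol names, and dd/cmp parameters are copied from this run's trace; the poll interval in the wait loop is an assumption, since waitfornbd's sleep is not visible in the xtrace output.

  #!/usr/bin/env bash
  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
  TMP=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest

  # Export one bdev over NBD and wait until the kernel publishes it.
  $RPC nbd_start_disk nvme0n1 /dev/nbd0
  for ((i = 1; i <= 20; i++)); do
      grep -q -w nbd0 /proc/partitions && break
      sleep 0.1   # assumed poll interval
  done

  # Write 1 MiB of random data through the NBD device, then byte-compare.
  dd if=/dev/urandom of="$TMP" bs=4096 count=256
  dd if="$TMP" of=/dev/nbd0 bs=4096 count=256 oflag=direct
  cmp -b -n 1M "$TMP" /dev/nbd0
  $RPC nbd_stop_disk /dev/nbd0

  # Lvol round trip behind the mkfs.ext4 step above: malloc bdev -> lvstore
  # -> 4 MiB lvol -> NBD export (the harness also checks that
  # /sys/block/nbd0/size has gone nonzero before running mkfs).
  $RPC bdev_malloc_create -b malloc_lvol_verify 16 512
  $RPC bdev_lvol_create_lvstore malloc_lvol_verify lvs
  $RPC bdev_lvol_create lvol 4 -l lvs
  $RPC nbd_start_disk lvs/lvol /dev/nbd0
  mkfs.ext4 /dev/nbd0
  $RPC nbd_stop_disk /dev/nbd0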
00:12:28.768 01:25:24 blockdev_xnvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:12:28.768 01:25:24 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = nvme ']' 00:12:28.768 01:25:24 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = gpt ']' 00:12:28.768 01:25:24 blockdev_xnvme -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:12:28.768 01:25:24 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:28.768 01:25:24 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:28.768 01:25:24 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:28.768 ************************************ 00:12:28.768 START TEST bdev_fio 00:12:28.768 ************************************ 00:12:28.768 01:25:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1125 -- # fio_test_suite '' 00:12:28.768 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:12:28.768 01:25:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:12:28.768 01:25:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:12:28.768 01:25:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:12:28.768 01:25:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:12:28.768 01:25:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:12:28.768 01:25:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:12:28.768 01:25:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:12:28.768 01:25:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:28.768 01:25:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:12:28.768 01:25:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:12:28.768 01:25:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:12:28.768 01:25:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:12:28.768 01:25:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:12:28.768 01:25:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:12:28.768 01:25:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:12:28.768 01:25:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:28.768 01:25:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:12:28.768 01:25:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:12:28.768 01:25:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:12:28.768 01:25:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:12:28.768 01:25:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:12:28.768 01:25:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:12:28.768 01:25:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:12:28.768 01:25:24 blockdev_xnvme.bdev_fio -- 
bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:12:28.768 01:25:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:12:28.768 01:25:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:12:28.768 01:25:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:12:28.768 01:25:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:12:28.768 01:25:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:12:28.768 01:25:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:12:28.768 01:25:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:12:28.768 01:25:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:12:28.768 01:25:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:12:28.768 01:25:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n2]' 00:12:28.768 01:25:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n2 00:12:28.768 01:25:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:12:28.768 01:25:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n3]' 00:12:28.768 01:25:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n3 00:12:28.768 01:25:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:12:28.768 01:25:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:12:28.768 01:25:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:12:28.768 01:25:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:12:28.768 01:25:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:28.768 01:25:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:12:28.768 01:25:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:28.768 01:25:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:12:28.768 ************************************ 00:12:28.768 START TEST bdev_fio_rw_verify 00:12:28.768 ************************************ 00:12:28.768 01:25:24 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:28.768 01:25:24 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 
--aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:28.768 01:25:24 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:12:28.768 01:25:24 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:28.768 01:25:24 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:12:28.768 01:25:24 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:28.768 01:25:24 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:12:28.768 01:25:24 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:12:28.768 01:25:24 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:12:28.768 01:25:24 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:12:28.769 01:25:24 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:28.769 01:25:24 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:12:28.769 01:25:24 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:28.769 01:25:24 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:28.769 01:25:24 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # break 00:12:28.769 01:25:24 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:28.769 01:25:24 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:29.030 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:29.030 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:29.030 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:29.030 job_nvme2n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:29.030 job_nvme2n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:29.030 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:29.030 fio-3.35 00:12:29.030 Starting 6 threads 00:12:41.267 00:12:41.267 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=70355: Sat Sep 28 01:25:35 2024 00:12:41.267 read: IOPS=15.2k, BW=59.3MiB/s (62.2MB/s)(593MiB/10002msec) 00:12:41.267 slat (usec): min=2, max=2344, avg= 5.66, stdev=16.48 00:12:41.267 clat (usec): min=62, max=8205, avg=1282.33, stdev=767.64 00:12:41.267 lat (usec): min=66, max=8225, avg=1287.99, stdev=768.12 
00:12:41.267 clat percentiles (usec): 00:12:41.267 | 50.000th=[ 1172], 99.000th=[ 3654], 99.900th=[ 5145], 99.990th=[ 6587], 00:12:41.267 | 99.999th=[ 8225] 00:12:41.267 write: IOPS=15.5k, BW=60.6MiB/s (63.6MB/s)(607MiB/10002msec); 0 zone resets 00:12:41.267 slat (usec): min=12, max=4331, avg=38.75, stdev=135.53 00:12:41.267 clat (usec): min=85, max=15092, avg=1523.67, stdev=836.31 00:12:41.267 lat (usec): min=106, max=15122, avg=1562.42, stdev=848.01 00:12:41.267 clat percentiles (usec): 00:12:41.267 | 50.000th=[ 1401], 99.000th=[ 4047], 99.900th=[ 5735], 99.990th=[ 9110], 00:12:41.267 | 99.999th=[15008] 00:12:41.267 bw ( KiB/s): min=48847, max=97770, per=100.00%, avg=62284.32, stdev=2233.30, samples=114 00:12:41.267 iops : min=12208, max=24440, avg=15569.89, stdev=558.28, samples=114 00:12:41.267 lat (usec) : 100=0.02%, 250=3.24%, 500=8.32%, 750=10.03%, 1000=12.09% 00:12:41.267 lat (msec) : 2=46.18%, 4=19.31%, 10=0.81%, 20=0.01% 00:12:41.267 cpu : usr=43.81%, sys=29.66%, ctx=5607, majf=0, minf=15226 00:12:41.267 IO depths : 1=11.6%, 2=24.0%, 4=51.0%, 8=13.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:41.267 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:41.267 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:41.267 issued rwts: total=151926,155291,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:41.267 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:41.267 00:12:41.267 Run status group 0 (all jobs): 00:12:41.267 READ: bw=59.3MiB/s (62.2MB/s), 59.3MiB/s-59.3MiB/s (62.2MB/s-62.2MB/s), io=593MiB (622MB), run=10002-10002msec 00:12:41.267 WRITE: bw=60.6MiB/s (63.6MB/s), 60.6MiB/s-60.6MiB/s (63.6MB/s-63.6MB/s), io=607MiB (636MB), run=10002-10002msec 00:12:41.267 ----------------------------------------------------- 00:12:41.267 Suppressions used: 00:12:41.267 count bytes template 00:12:41.267 6 48 /usr/src/fio/parse.c 00:12:41.267 3260 312960 /usr/src/fio/iolog.c 00:12:41.267 1 8 libtcmalloc_minimal.so 00:12:41.267 1 904 libcrypto.so 00:12:41.267 ----------------------------------------------------- 00:12:41.267 00:12:41.267 00:12:41.267 real 0m11.856s 00:12:41.267 user 0m27.718s 00:12:41.267 sys 0m18.085s 00:12:41.267 01:25:36 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:41.267 ************************************ 00:12:41.267 END TEST bdev_fio_rw_verify 00:12:41.267 01:25:36 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:12:41.267 ************************************ 00:12:41.267 01:25:36 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:12:41.267 01:25:36 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:41.267 01:25:36 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:12:41.267 01:25:36 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:41.267 01:25:36 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:12:41.267 01:25:36 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:12:41.267 01:25:36 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:12:41.267 01:25:36 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:12:41.267 01:25:36 blockdev_xnvme.bdev_fio -- 
common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:12:41.267 01:25:36 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:12:41.267 01:25:36 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:12:41.267 01:25:36 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:41.267 01:25:36 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:12:41.267 01:25:36 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:12:41.267 01:25:36 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:12:41.267 01:25:36 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:12:41.267 01:25:36 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:12:41.267 01:25:36 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "300530e4-0f2d-4c42-bbe2-7683d22b8cc3"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "300530e4-0f2d-4c42-bbe2-7683d22b8cc3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "7a0a53da-2109-4e60-b1d3-ba73aeb8f63f"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "7a0a53da-2109-4e60-b1d3-ba73aeb8f63f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "31b9b6ee-4115-42e1-bb85-65a9f06a8e59"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "31b9b6ee-4115-42e1-bb85-65a9f06a8e59",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' 
"zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "d9ea951a-ce2a-4c90-be50-ac1b448db316"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "d9ea951a-ce2a-4c90-be50-ac1b448db316",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "a598e2db-c1fd-4ce7-8523-70c020dc9cf3"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "a598e2db-c1fd-4ce7-8523-70c020dc9cf3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "7fdfa71b-e6da-48c1-8f94-bdb83cd8ba8e"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "7fdfa71b-e6da-48c1-8f94-bdb83cd8ba8e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:12:41.267 01:25:36 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:12:41.267 01:25:36 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:41.267 /home/vagrant/spdk_repo/spdk 00:12:41.268 01:25:36 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:12:41.268 01:25:36 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:12:41.268 01:25:36 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:12:41.268 00:12:41.268 real 0m12.018s 00:12:41.268 user 
0m27.787s 00:12:41.268 sys 0m18.159s 00:12:41.268 01:25:36 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:41.268 ************************************ 00:12:41.268 END TEST bdev_fio 00:12:41.268 ************************************ 00:12:41.268 01:25:36 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:12:41.268 01:25:36 blockdev_xnvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:12:41.268 01:25:36 blockdev_xnvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:12:41.268 01:25:36 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:12:41.268 01:25:36 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:41.268 01:25:36 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:41.268 ************************************ 00:12:41.268 START TEST bdev_verify 00:12:41.268 ************************************ 00:12:41.268 01:25:36 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:12:41.268 [2024-09-28 01:25:36.708213] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:12:41.268 [2024-09-28 01:25:36.708329] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70526 ] 00:12:41.268 [2024-09-28 01:25:36.857914] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:41.268 [2024-09-28 01:25:37.073805] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:12:41.268 [2024-09-28 01:25:37.073904] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:41.839 Running I/O for 5 seconds... 
00:12:46.669 23488.00 IOPS, 91.75 MiB/s 23088.00 IOPS, 90.19 MiB/s 23669.33 IOPS, 92.46 MiB/s 23527.75 IOPS, 91.91 MiB/s 23801.60 IOPS, 92.98 MiB/s 00:12:46.669 Latency(us) 00:12:46.669 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:46.669 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:46.669 Verification LBA range: start 0x0 length 0xa0000 00:12:46.669 nvme0n1 : 5.07 1918.29 7.49 0.00 0.00 66603.91 6704.84 69770.63 00:12:46.669 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:46.669 Verification LBA range: start 0xa0000 length 0xa0000 00:12:46.669 nvme0n1 : 5.05 1850.23 7.23 0.00 0.00 69055.03 9527.93 72593.72 00:12:46.669 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:46.669 Verification LBA range: start 0x0 length 0xbd0bd 00:12:46.669 nvme1n1 : 5.06 2312.77 9.03 0.00 0.00 55031.22 6704.84 60091.47 00:12:46.669 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:46.669 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:12:46.669 nvme1n1 : 5.06 2338.39 9.13 0.00 0.00 54457.30 3780.92 56865.08 00:12:46.669 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:46.669 Verification LBA range: start 0x0 length 0x80000 00:12:46.669 nvme2n1 : 5.07 1968.09 7.69 0.00 0.00 64713.23 9427.10 65334.35 00:12:46.669 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:46.669 Verification LBA range: start 0x80000 length 0x80000 00:12:46.669 nvme2n1 : 5.05 1899.42 7.42 0.00 0.00 66855.22 6604.01 72997.02 00:12:46.669 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:46.669 Verification LBA range: start 0x0 length 0x80000 00:12:46.669 nvme2n2 : 5.08 1941.45 7.58 0.00 0.00 65426.79 8670.92 77433.30 00:12:46.669 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:46.669 Verification LBA range: start 0x80000 length 0x80000 00:12:46.669 nvme2n2 : 5.07 1868.75 7.30 0.00 0.00 67801.10 12804.73 75416.81 00:12:46.669 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:46.669 Verification LBA range: start 0x0 length 0x80000 00:12:46.669 nvme2n3 : 5.08 1914.57 7.48 0.00 0.00 66217.88 10737.82 67754.14 00:12:46.669 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:46.669 Verification LBA range: start 0x80000 length 0x80000 00:12:46.669 nvme2n3 : 5.07 1868.18 7.30 0.00 0.00 67706.00 8771.74 72190.42 00:12:46.669 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:46.669 Verification LBA range: start 0x0 length 0x20000 00:12:46.669 nvme3n1 : 5.08 1913.42 7.47 0.00 0.00 66151.23 4915.20 64527.75 00:12:46.669 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:46.669 Verification LBA range: start 0x20000 length 0x20000 00:12:46.669 nvme3n1 : 5.08 1865.24 7.29 0.00 0.00 67754.53 3881.75 72593.72 00:12:46.669 =================================================================================================================== 00:12:46.669 Total : 23658.80 92.42 0.00 0.00 64436.18 3780.92 77433.30 00:12:47.611 00:12:47.611 real 0m6.880s 00:12:47.611 user 0m11.110s 00:12:47.611 sys 0m1.342s 00:12:47.611 01:25:43 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:47.611 01:25:43 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:12:47.611 ************************************ 00:12:47.611 END 
TEST bdev_verify 00:12:47.611 ************************************ 00:12:47.872 01:25:43 blockdev_xnvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:12:47.872 01:25:43 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:12:47.872 01:25:43 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:47.872 01:25:43 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:47.872 ************************************ 00:12:47.872 START TEST bdev_verify_big_io 00:12:47.872 ************************************ 00:12:47.872 01:25:43 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:12:47.872 [2024-09-28 01:25:43.663860] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:12:47.872 [2024-09-28 01:25:43.664007] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70631 ] 00:12:48.134 [2024-09-28 01:25:43.818706] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:48.134 [2024-09-28 01:25:44.054389] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:12:48.134 [2024-09-28 01:25:44.054490] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:48.705 Running I/O for 5 seconds... 00:12:54.864 1651.00 IOPS, 103.19 MiB/s 2458.00 IOPS, 153.62 MiB/s 3065.33 IOPS, 191.58 MiB/s 00:12:54.864 Latency(us) 00:12:54.864 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:54.864 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:54.864 Verification LBA range: start 0x0 length 0xa000 00:12:54.864 nvme0n1 : 5.92 105.47 6.59 0.00 0.00 1166012.12 196003.05 1897115.96 00:12:54.864 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:54.864 Verification LBA range: start 0xa000 length 0xa000 00:12:54.864 nvme0n1 : 6.02 95.72 5.98 0.00 0.00 1262898.59 8116.38 1206669.00 00:12:54.864 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:54.864 Verification LBA range: start 0x0 length 0xbd0b 00:12:54.864 nvme1n1 : 5.92 162.18 10.14 0.00 0.00 740227.70 7511.43 890483.00 00:12:54.864 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:54.864 Verification LBA range: start 0xbd0b length 0xbd0b 00:12:54.864 nvme1n1 : 6.02 167.43 10.46 0.00 0.00 719727.51 28432.54 877577.45 00:12:54.864 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:54.864 Verification LBA range: start 0x0 length 0x8000 00:12:54.864 nvme2n1 : 5.92 151.30 9.46 0.00 0.00 770067.35 134701.69 703352.52 00:12:54.864 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:54.864 Verification LBA range: start 0x8000 length 0x8000 00:12:54.864 nvme2n1 : 6.01 125.04 7.82 0.00 0.00 930686.28 46782.62 1361535.61 00:12:54.864 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:54.864 Verification LBA range: start 0x0 length 0x8000 00:12:54.864 nvme2n2 : 6.02 103.62 6.48 0.00 0.00 1080141.21 
125022.52 2103604.78 00:12:54.864 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:54.864 Verification LBA range: start 0x8000 length 0x8000 00:12:54.864 nvme2n2 : 6.03 95.69 5.98 0.00 0.00 1180655.20 66947.54 1677721.60 00:12:54.864 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:54.864 Verification LBA range: start 0x0 length 0x8000 00:12:54.864 nvme2n3 : 6.01 122.47 7.65 0.00 0.00 893998.71 50210.66 1922927.06 00:12:54.864 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:54.864 Verification LBA range: start 0x8000 length 0x8000 00:12:54.864 nvme2n3 : 6.03 157.17 9.82 0.00 0.00 699239.04 8015.56 1013085.74 00:12:54.864 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:54.864 Verification LBA range: start 0x0 length 0x2000 00:12:54.864 nvme3n1 : 6.03 167.22 10.45 0.00 0.00 643272.77 4789.17 922746.88 00:12:54.864 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:54.864 Verification LBA range: start 0x2000 length 0x2000 00:12:54.864 nvme3n1 : 6.04 166.96 10.43 0.00 0.00 641430.40 9074.22 1471232.79 00:12:54.864 =================================================================================================================== 00:12:54.864 Total : 1620.27 101.27 0.00 0.00 849255.96 4789.17 2103604.78 00:12:55.807 00:12:55.807 real 0m8.134s 00:12:55.807 user 0m14.572s 00:12:55.807 sys 0m0.552s 00:12:55.807 ************************************ 00:12:55.807 END TEST bdev_verify_big_io 00:12:55.807 ************************************ 00:12:55.807 01:25:51 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:55.807 01:25:51 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:12:56.069 01:25:51 blockdev_xnvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:56.069 01:25:51 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:12:56.069 01:25:51 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:56.069 01:25:51 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:56.069 ************************************ 00:12:56.069 START TEST bdev_write_zeroes 00:12:56.069 ************************************ 00:12:56.069 01:25:51 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:56.069 [2024-09-28 01:25:51.872951] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:12:56.069 [2024-09-28 01:25:51.873099] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70741 ] 00:12:56.329 [2024-09-28 01:25:52.027254] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:56.329 [2024-09-28 01:25:52.237585] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:56.901 Running I/O for 1 seconds... 
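Note: the one-second run here reuses the same bdevperf harness with the write_zeroes workload; no -m mask is passed, so the app starts a single reactor ("Total cores available: 1" above) and every job line below carries Core Mask 0x1.

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      -q 128 -o 4096 -w write_zeroes -t 1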
00:12:57.845 87744.00 IOPS, 342.75 MiB/s 00:12:57.845 Latency(us) 00:12:57.845 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:57.845 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:57.845 nvme0n1 : 1.01 14278.41 55.78 0.00 0.00 8954.05 5923.45 25609.45 00:12:57.846 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:57.846 nvme1n1 : 1.02 15991.89 62.47 0.00 0.00 7987.69 5671.38 18854.20 00:12:57.846 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:57.846 nvme2n1 : 1.02 14282.23 55.79 0.00 0.00 8873.07 4486.70 20064.10 00:12:57.846 Job: nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:57.846 nvme2n2 : 1.02 14266.06 55.73 0.00 0.00 8877.02 4486.70 21475.64 00:12:57.846 Job: nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:57.846 nvme2n3 : 1.02 14249.78 55.66 0.00 0.00 8879.71 4587.52 22988.01 00:12:57.846 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:57.846 nvme3n1 : 1.03 14233.67 55.60 0.00 0.00 8882.78 4688.34 23693.78 00:12:57.846 =================================================================================================================== 00:12:57.846 Total : 87302.04 341.02 0.00 0.00 8727.45 4486.70 25609.45 00:12:58.790 00:12:58.790 real 0m2.791s 00:12:58.790 user 0m2.118s 00:12:58.790 sys 0m0.485s 00:12:58.790 ************************************ 00:12:58.790 END TEST bdev_write_zeroes 00:12:58.790 ************************************ 00:12:58.790 01:25:54 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:58.790 01:25:54 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:12:58.790 01:25:54 blockdev_xnvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:58.790 01:25:54 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:12:58.790 01:25:54 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:58.790 01:25:54 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:58.790 ************************************ 00:12:58.790 START TEST bdev_json_nonenclosed 00:12:58.790 ************************************ 00:12:58.790 01:25:54 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:59.051 [2024-09-28 01:25:54.732676] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:12:59.051 [2024-09-28 01:25:54.732837] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70794 ] 00:12:59.051 [2024-09-28 01:25:54.886430] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:59.311 [2024-09-28 01:25:55.111039] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:59.311 [2024-09-28 01:25:55.111171] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:12:59.311 [2024-09-28 01:25:55.111213] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:12:59.311 [2024-09-28 01:25:55.111229] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:59.571 00:12:59.571 real 0m0.768s 00:12:59.571 user 0m0.529s 00:12:59.572 sys 0m0.131s 00:12:59.572 ************************************ 00:12:59.572 END TEST bdev_json_nonenclosed 00:12:59.572 ************************************ 00:12:59.572 01:25:55 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:59.572 01:25:55 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:12:59.572 01:25:55 blockdev_xnvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:59.572 01:25:55 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:12:59.572 01:25:55 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:59.572 01:25:55 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:59.572 ************************************ 00:12:59.572 START TEST bdev_json_nonarray 00:12:59.572 ************************************ 00:12:59.572 01:25:55 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:59.833 [2024-09-28 01:25:55.552689] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:12:59.833 [2024-09-28 01:25:55.552809] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70824 ] 00:12:59.833 [2024-09-28 01:25:55.704873] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:00.095 [2024-09-28 01:25:55.879334] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:00.095 [2024-09-28 01:25:55.879420] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:13:00.095 [2024-09-28 01:25:55.879437] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:13:00.095 [2024-09-28 01:25:55.879447] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:00.358 00:13:00.358 real 0m0.673s 00:13:00.358 user 0m0.475s 00:13:00.358 sys 0m0.093s 00:13:00.358 01:25:56 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:00.358 ************************************ 00:13:00.358 01:25:56 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:13:00.358 END TEST bdev_json_nonarray 00:13:00.358 ************************************ 00:13:00.358 01:25:56 blockdev_xnvme -- bdev/blockdev.sh@786 -- # [[ xnvme == bdev ]] 00:13:00.358 01:25:56 blockdev_xnvme -- bdev/blockdev.sh@793 -- # [[ xnvme == gpt ]] 00:13:00.358 01:25:56 blockdev_xnvme -- bdev/blockdev.sh@797 -- # [[ xnvme == crypto_sw ]] 00:13:00.358 01:25:56 blockdev_xnvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:13:00.358 01:25:56 blockdev_xnvme -- bdev/blockdev.sh@810 -- # cleanup 00:13:00.358 01:25:56 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:13:00.358 01:25:56 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:00.358 01:25:56 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:13:00.358 01:25:56 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:13:00.358 01:25:56 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:13:00.358 01:25:56 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:13:00.358 01:25:56 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:00.931 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:03.481 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:13:03.481 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:13:04.052 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:13:04.052 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:13:04.052 00:13:04.052 real 0m57.453s 00:13:04.052 user 1m25.540s 00:13:04.052 sys 0m32.095s 00:13:04.052 01:25:59 blockdev_xnvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:04.052 ************************************ 00:13:04.052 END TEST blockdev_xnvme 00:13:04.052 ************************************ 00:13:04.052 01:25:59 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:04.052 01:25:59 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:13:04.052 01:25:59 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:04.052 01:25:59 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:04.052 01:25:59 -- common/autotest_common.sh@10 -- # set +x 00:13:04.052 ************************************ 00:13:04.052 START TEST ublk 00:13:04.052 ************************************ 00:13:04.052 01:25:59 ublk -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:13:04.052 * Looking for test storage... 
00:13:04.052 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:13:04.052 01:25:59 ublk -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:13:04.052 01:25:59 ublk -- common/autotest_common.sh@1681 -- # lcov --version 00:13:04.052 01:25:59 ublk -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:13:04.312 01:26:00 ublk -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:13:04.312 01:26:00 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:04.312 01:26:00 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:04.312 01:26:00 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:04.312 01:26:00 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:13:04.312 01:26:00 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:13:04.313 01:26:00 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:13:04.313 01:26:00 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:13:04.313 01:26:00 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:13:04.313 01:26:00 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:13:04.313 01:26:00 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:13:04.313 01:26:00 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:04.313 01:26:00 ublk -- scripts/common.sh@344 -- # case "$op" in 00:13:04.313 01:26:00 ublk -- scripts/common.sh@345 -- # : 1 00:13:04.313 01:26:00 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:04.313 01:26:00 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:04.313 01:26:00 ublk -- scripts/common.sh@365 -- # decimal 1 00:13:04.313 01:26:00 ublk -- scripts/common.sh@353 -- # local d=1 00:13:04.313 01:26:00 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:04.313 01:26:00 ublk -- scripts/common.sh@355 -- # echo 1 00:13:04.313 01:26:00 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:13:04.313 01:26:00 ublk -- scripts/common.sh@366 -- # decimal 2 00:13:04.313 01:26:00 ublk -- scripts/common.sh@353 -- # local d=2 00:13:04.313 01:26:00 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:04.313 01:26:00 ublk -- scripts/common.sh@355 -- # echo 2 00:13:04.313 01:26:00 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:13:04.313 01:26:00 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:04.313 01:26:00 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:04.313 01:26:00 ublk -- scripts/common.sh@368 -- # return 0 00:13:04.313 01:26:00 ublk -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:04.313 01:26:00 ublk -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:13:04.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:04.313 --rc genhtml_branch_coverage=1 00:13:04.313 --rc genhtml_function_coverage=1 00:13:04.313 --rc genhtml_legend=1 00:13:04.313 --rc geninfo_all_blocks=1 00:13:04.313 --rc geninfo_unexecuted_blocks=1 00:13:04.313 00:13:04.313 ' 00:13:04.313 01:26:00 ublk -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:13:04.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:04.313 --rc genhtml_branch_coverage=1 00:13:04.313 --rc genhtml_function_coverage=1 00:13:04.313 --rc genhtml_legend=1 00:13:04.313 --rc geninfo_all_blocks=1 00:13:04.313 --rc geninfo_unexecuted_blocks=1 00:13:04.313 00:13:04.313 ' 00:13:04.313 01:26:00 ublk -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:13:04.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:04.313 --rc genhtml_branch_coverage=1 00:13:04.313 --rc 
genhtml_function_coverage=1 00:13:04.313 --rc genhtml_legend=1 00:13:04.313 --rc geninfo_all_blocks=1 00:13:04.313 --rc geninfo_unexecuted_blocks=1 00:13:04.313 00:13:04.313 ' 00:13:04.313 01:26:00 ublk -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:13:04.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:04.313 --rc genhtml_branch_coverage=1 00:13:04.313 --rc genhtml_function_coverage=1 00:13:04.313 --rc genhtml_legend=1 00:13:04.313 --rc geninfo_all_blocks=1 00:13:04.313 --rc geninfo_unexecuted_blocks=1 00:13:04.313 00:13:04.313 ' 00:13:04.313 01:26:00 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:13:04.313 01:26:00 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:13:04.313 01:26:00 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:13:04.313 01:26:00 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:13:04.313 01:26:00 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:13:04.313 01:26:00 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:13:04.313 01:26:00 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:13:04.313 01:26:00 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:13:04.313 01:26:00 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:13:04.313 01:26:00 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:13:04.313 01:26:00 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:13:04.313 01:26:00 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:13:04.313 01:26:00 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:13:04.313 01:26:00 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:13:04.313 01:26:00 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:13:04.313 01:26:00 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:13:04.313 01:26:00 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:13:04.313 01:26:00 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:13:04.313 01:26:00 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:13:04.313 01:26:00 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:13:04.313 01:26:00 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:04.313 01:26:00 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:04.313 01:26:00 ublk -- common/autotest_common.sh@10 -- # set +x 00:13:04.313 ************************************ 00:13:04.313 START TEST test_save_ublk_config 00:13:04.313 ************************************ 00:13:04.313 01:26:00 ublk.test_save_ublk_config -- common/autotest_common.sh@1125 -- # test_save_config 00:13:04.313 01:26:00 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:13:04.313 01:26:00 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=71116 00:13:04.313 01:26:00 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:13:04.313 01:26:00 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:13:04.313 01:26:00 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 71116 00:13:04.313 01:26:00 ublk.test_save_ublk_config -- common/autotest_common.sh@831 -- # '[' -z 71116 ']' 00:13:04.313 01:26:00 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:04.313 01:26:00 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:04.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:04.313 01:26:00 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:04.313 01:26:00 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:04.313 01:26:00 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:04.313 [2024-09-28 01:26:00.142731] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:13:04.313 [2024-09-28 01:26:00.142853] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71116 ] 00:13:04.574 [2024-09-28 01:26:00.293906] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:04.574 [2024-09-28 01:26:00.484867] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:05.145 01:26:01 ublk.test_save_ublk_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:05.145 01:26:01 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # return 0 00:13:05.145 01:26:01 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:13:05.145 01:26:01 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:13:05.145 01:26:01 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.145 01:26:01 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:05.406 [2024-09-28 01:26:01.078220] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:13:05.406 [2024-09-28 01:26:01.078998] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:05.406 malloc0 00:13:05.406 [2024-09-28 01:26:01.142319] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:13:05.406 [2024-09-28 01:26:01.142394] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:13:05.406 [2024-09-28 01:26:01.142403] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:05.406 [2024-09-28 01:26:01.142412] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:05.406 [2024-09-28 01:26:01.150407] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:05.406 [2024-09-28 01:26:01.150426] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:05.406 [2024-09-28 01:26:01.158220] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:05.406 [2024-09-28 01:26:01.158314] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:05.406 [2024-09-28 01:26:01.175217] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:13:05.406 0 00:13:05.406 01:26:01 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.406 01:26:01 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:13:05.406 01:26:01 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.406 01:26:01 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:05.667 01:26:01 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.667 01:26:01 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:13:05.667 
"subsystems": [ 00:13:05.667 { 00:13:05.667 "subsystem": "fsdev", 00:13:05.667 "config": [ 00:13:05.667 { 00:13:05.667 "method": "fsdev_set_opts", 00:13:05.667 "params": { 00:13:05.667 "fsdev_io_pool_size": 65535, 00:13:05.667 "fsdev_io_cache_size": 256 00:13:05.667 } 00:13:05.667 } 00:13:05.667 ] 00:13:05.667 }, 00:13:05.667 { 00:13:05.667 "subsystem": "keyring", 00:13:05.667 "config": [] 00:13:05.667 }, 00:13:05.667 { 00:13:05.667 "subsystem": "iobuf", 00:13:05.667 "config": [ 00:13:05.667 { 00:13:05.667 "method": "iobuf_set_options", 00:13:05.667 "params": { 00:13:05.667 "small_pool_count": 8192, 00:13:05.667 "large_pool_count": 1024, 00:13:05.667 "small_bufsize": 8192, 00:13:05.667 "large_bufsize": 135168 00:13:05.667 } 00:13:05.667 } 00:13:05.667 ] 00:13:05.667 }, 00:13:05.667 { 00:13:05.667 "subsystem": "sock", 00:13:05.667 "config": [ 00:13:05.667 { 00:13:05.667 "method": "sock_set_default_impl", 00:13:05.667 "params": { 00:13:05.667 "impl_name": "posix" 00:13:05.667 } 00:13:05.667 }, 00:13:05.667 { 00:13:05.667 "method": "sock_impl_set_options", 00:13:05.667 "params": { 00:13:05.667 "impl_name": "ssl", 00:13:05.667 "recv_buf_size": 4096, 00:13:05.667 "send_buf_size": 4096, 00:13:05.667 "enable_recv_pipe": true, 00:13:05.667 "enable_quickack": false, 00:13:05.667 "enable_placement_id": 0, 00:13:05.667 "enable_zerocopy_send_server": true, 00:13:05.667 "enable_zerocopy_send_client": false, 00:13:05.667 "zerocopy_threshold": 0, 00:13:05.667 "tls_version": 0, 00:13:05.667 "enable_ktls": false 00:13:05.667 } 00:13:05.667 }, 00:13:05.667 { 00:13:05.667 "method": "sock_impl_set_options", 00:13:05.667 "params": { 00:13:05.667 "impl_name": "posix", 00:13:05.667 "recv_buf_size": 2097152, 00:13:05.667 "send_buf_size": 2097152, 00:13:05.667 "enable_recv_pipe": true, 00:13:05.667 "enable_quickack": false, 00:13:05.667 "enable_placement_id": 0, 00:13:05.667 "enable_zerocopy_send_server": true, 00:13:05.667 "enable_zerocopy_send_client": false, 00:13:05.667 "zerocopy_threshold": 0, 00:13:05.667 "tls_version": 0, 00:13:05.667 "enable_ktls": false 00:13:05.667 } 00:13:05.667 } 00:13:05.667 ] 00:13:05.667 }, 00:13:05.667 { 00:13:05.667 "subsystem": "vmd", 00:13:05.667 "config": [] 00:13:05.667 }, 00:13:05.667 { 00:13:05.667 "subsystem": "accel", 00:13:05.667 "config": [ 00:13:05.667 { 00:13:05.667 "method": "accel_set_options", 00:13:05.667 "params": { 00:13:05.667 "small_cache_size": 128, 00:13:05.667 "large_cache_size": 16, 00:13:05.667 "task_count": 2048, 00:13:05.667 "sequence_count": 2048, 00:13:05.667 "buf_count": 2048 00:13:05.667 } 00:13:05.667 } 00:13:05.667 ] 00:13:05.667 }, 00:13:05.667 { 00:13:05.667 "subsystem": "bdev", 00:13:05.667 "config": [ 00:13:05.667 { 00:13:05.667 "method": "bdev_set_options", 00:13:05.667 "params": { 00:13:05.667 "bdev_io_pool_size": 65535, 00:13:05.667 "bdev_io_cache_size": 256, 00:13:05.667 "bdev_auto_examine": true, 00:13:05.667 "iobuf_small_cache_size": 128, 00:13:05.667 "iobuf_large_cache_size": 16 00:13:05.667 } 00:13:05.667 }, 00:13:05.667 { 00:13:05.667 "method": "bdev_raid_set_options", 00:13:05.667 "params": { 00:13:05.667 "process_window_size_kb": 1024, 00:13:05.667 "process_max_bandwidth_mb_sec": 0 00:13:05.667 } 00:13:05.667 }, 00:13:05.667 { 00:13:05.667 "method": "bdev_iscsi_set_options", 00:13:05.667 "params": { 00:13:05.667 "timeout_sec": 30 00:13:05.667 } 00:13:05.667 }, 00:13:05.667 { 00:13:05.667 "method": "bdev_nvme_set_options", 00:13:05.667 "params": { 00:13:05.667 "action_on_timeout": "none", 00:13:05.667 "timeout_us": 0, 00:13:05.667 
"timeout_admin_us": 0, 00:13:05.667 "keep_alive_timeout_ms": 10000, 00:13:05.667 "arbitration_burst": 0, 00:13:05.667 "low_priority_weight": 0, 00:13:05.667 "medium_priority_weight": 0, 00:13:05.667 "high_priority_weight": 0, 00:13:05.667 "nvme_adminq_poll_period_us": 10000, 00:13:05.667 "nvme_ioq_poll_period_us": 0, 00:13:05.667 "io_queue_requests": 0, 00:13:05.667 "delay_cmd_submit": true, 00:13:05.667 "transport_retry_count": 4, 00:13:05.667 "bdev_retry_count": 3, 00:13:05.667 "transport_ack_timeout": 0, 00:13:05.667 "ctrlr_loss_timeout_sec": 0, 00:13:05.667 "reconnect_delay_sec": 0, 00:13:05.667 "fast_io_fail_timeout_sec": 0, 00:13:05.667 "disable_auto_failback": false, 00:13:05.667 "generate_uuids": false, 00:13:05.667 "transport_tos": 0, 00:13:05.667 "nvme_error_stat": false, 00:13:05.667 "rdma_srq_size": 0, 00:13:05.667 "io_path_stat": false, 00:13:05.667 "allow_accel_sequence": false, 00:13:05.667 "rdma_max_cq_size": 0, 00:13:05.667 "rdma_cm_event_timeout_ms": 0, 00:13:05.667 "dhchap_digests": [ 00:13:05.667 "sha256", 00:13:05.667 "sha384", 00:13:05.667 "sha512" 00:13:05.668 ], 00:13:05.668 "dhchap_dhgroups": [ 00:13:05.668 "null", 00:13:05.668 "ffdhe2048", 00:13:05.668 "ffdhe3072", 00:13:05.668 "ffdhe4096", 00:13:05.668 "ffdhe6144", 00:13:05.668 "ffdhe8192" 00:13:05.668 ] 00:13:05.668 } 00:13:05.668 }, 00:13:05.668 { 00:13:05.668 "method": "bdev_nvme_set_hotplug", 00:13:05.668 "params": { 00:13:05.668 "period_us": 100000, 00:13:05.668 "enable": false 00:13:05.668 } 00:13:05.668 }, 00:13:05.668 { 00:13:05.668 "method": "bdev_malloc_create", 00:13:05.668 "params": { 00:13:05.668 "name": "malloc0", 00:13:05.668 "num_blocks": 8192, 00:13:05.668 "block_size": 4096, 00:13:05.668 "physical_block_size": 4096, 00:13:05.668 "uuid": "0bbc6340-e56c-4b5f-89c5-0abd5f0c0cf7", 00:13:05.668 "optimal_io_boundary": 0, 00:13:05.668 "md_size": 0, 00:13:05.668 "dif_type": 0, 00:13:05.668 "dif_is_head_of_md": false, 00:13:05.668 "dif_pi_format": 0 00:13:05.668 } 00:13:05.668 }, 00:13:05.668 { 00:13:05.668 "method": "bdev_wait_for_examine" 00:13:05.668 } 00:13:05.668 ] 00:13:05.668 }, 00:13:05.668 { 00:13:05.668 "subsystem": "scsi", 00:13:05.668 "config": null 00:13:05.668 }, 00:13:05.668 { 00:13:05.668 "subsystem": "scheduler", 00:13:05.668 "config": [ 00:13:05.668 { 00:13:05.668 "method": "framework_set_scheduler", 00:13:05.668 "params": { 00:13:05.668 "name": "static" 00:13:05.668 } 00:13:05.668 } 00:13:05.668 ] 00:13:05.668 }, 00:13:05.668 { 00:13:05.668 "subsystem": "vhost_scsi", 00:13:05.668 "config": [] 00:13:05.668 }, 00:13:05.668 { 00:13:05.668 "subsystem": "vhost_blk", 00:13:05.668 "config": [] 00:13:05.668 }, 00:13:05.668 { 00:13:05.668 "subsystem": "ublk", 00:13:05.668 "config": [ 00:13:05.668 { 00:13:05.668 "method": "ublk_create_target", 00:13:05.668 "params": { 00:13:05.668 "cpumask": "1" 00:13:05.668 } 00:13:05.668 }, 00:13:05.668 { 00:13:05.668 "method": "ublk_start_disk", 00:13:05.668 "params": { 00:13:05.668 "bdev_name": "malloc0", 00:13:05.668 "ublk_id": 0, 00:13:05.668 "num_queues": 1, 00:13:05.668 "queue_depth": 128 00:13:05.668 } 00:13:05.668 } 00:13:05.668 ] 00:13:05.668 }, 00:13:05.668 { 00:13:05.668 "subsystem": "nbd", 00:13:05.668 "config": [] 00:13:05.668 }, 00:13:05.668 { 00:13:05.668 "subsystem": "nvmf", 00:13:05.668 "config": [ 00:13:05.668 { 00:13:05.668 "method": "nvmf_set_config", 00:13:05.668 "params": { 00:13:05.668 "discovery_filter": "match_any", 00:13:05.668 "admin_cmd_passthru": { 00:13:05.668 "identify_ctrlr": false 00:13:05.668 }, 00:13:05.668 "dhchap_digests": [ 
00:13:05.668 "sha256", 00:13:05.668 "sha384", 00:13:05.668 "sha512" 00:13:05.668 ], 00:13:05.668 "dhchap_dhgroups": [ 00:13:05.668 "null", 00:13:05.668 "ffdhe2048", 00:13:05.668 "ffdhe3072", 00:13:05.668 "ffdhe4096", 00:13:05.668 "ffdhe6144", 00:13:05.668 "ffdhe8192" 00:13:05.668 ] 00:13:05.668 } 00:13:05.668 }, 00:13:05.668 { 00:13:05.668 "method": "nvmf_set_max_subsystems", 00:13:05.668 "params": { 00:13:05.668 "max_subsystems": 1024 00:13:05.668 } 00:13:05.668 }, 00:13:05.668 { 00:13:05.668 "method": "nvmf_set_crdt", 00:13:05.668 "params": { 00:13:05.668 "crdt1": 0, 00:13:05.668 "crdt2": 0, 00:13:05.668 "crdt3": 0 00:13:05.668 } 00:13:05.668 } 00:13:05.668 ] 00:13:05.668 }, 00:13:05.668 { 00:13:05.668 "subsystem": "iscsi", 00:13:05.668 "config": [ 00:13:05.668 { 00:13:05.668 "method": "iscsi_set_options", 00:13:05.668 "params": { 00:13:05.668 "node_base": "iqn.2016-06.io.spdk", 00:13:05.668 "max_sessions": 128, 00:13:05.668 "max_connections_per_session": 2, 00:13:05.668 "max_queue_depth": 64, 00:13:05.668 "default_time2wait": 2, 00:13:05.668 "default_time2retain": 20, 00:13:05.668 "first_burst_length": 8192, 00:13:05.668 "immediate_data": true, 00:13:05.668 "allow_duplicated_isid": false, 00:13:05.668 "error_recovery_level": 0, 00:13:05.668 "nop_timeout": 60, 00:13:05.668 "nop_in_interval": 30, 00:13:05.668 "disable_chap": false, 00:13:05.668 "require_chap": false, 00:13:05.668 "mutual_chap": false, 00:13:05.668 "chap_group": 0, 00:13:05.668 "max_large_datain_per_connection": 64, 00:13:05.668 "max_r2t_per_connection": 4, 00:13:05.668 "pdu_pool_size": 36864, 00:13:05.668 "immediate_data_pool_size": 16384, 00:13:05.668 "data_out_pool_size": 2048 00:13:05.668 } 00:13:05.668 } 00:13:05.668 ] 00:13:05.668 } 00:13:05.668 ] 00:13:05.668 }' 00:13:05.668 01:26:01 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 71116 00:13:05.668 01:26:01 ublk.test_save_ublk_config -- common/autotest_common.sh@950 -- # '[' -z 71116 ']' 00:13:05.668 01:26:01 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # kill -0 71116 00:13:05.668 01:26:01 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # uname 00:13:05.668 01:26:01 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:05.668 01:26:01 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71116 00:13:05.668 01:26:01 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:05.668 01:26:01 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:05.668 killing process with pid 71116 00:13:05.668 01:26:01 ublk.test_save_ublk_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71116' 00:13:05.668 01:26:01 ublk.test_save_ublk_config -- common/autotest_common.sh@969 -- # kill 71116 00:13:05.668 01:26:01 ublk.test_save_ublk_config -- common/autotest_common.sh@974 -- # wait 71116 00:13:06.611 [2024-09-28 01:26:02.526337] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:13:06.872 [2024-09-28 01:26:02.566234] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:06.872 [2024-09-28 01:26:02.566370] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:13:06.872 [2024-09-28 01:26:02.574289] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:06.872 [2024-09-28 01:26:02.574334] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: 
ublk0: remove from tailq 00:13:06.872 [2024-09-28 01:26:02.574343] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:13:06.872 [2024-09-28 01:26:02.574367] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:13:06.872 [2024-09-28 01:26:02.574504] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:13:08.249 01:26:03 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=71170 00:13:08.249 01:26:03 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 71170 00:13:08.249 01:26:03 ublk.test_save_ublk_config -- common/autotest_common.sh@831 -- # '[' -z 71170 ']' 00:13:08.249 01:26:03 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:08.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:08.249 01:26:03 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:08.249 01:26:03 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:08.249 01:26:03 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:08.249 01:26:03 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:08.249 01:26:03 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:13:08.249 01:26:03 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:13:08.249 "subsystems": [ 00:13:08.249 { 00:13:08.249 "subsystem": "fsdev", 00:13:08.249 "config": [ 00:13:08.249 { 00:13:08.249 "method": "fsdev_set_opts", 00:13:08.249 "params": { 00:13:08.249 "fsdev_io_pool_size": 65535, 00:13:08.249 "fsdev_io_cache_size": 256 00:13:08.249 } 00:13:08.249 } 00:13:08.249 ] 00:13:08.249 }, 00:13:08.249 { 00:13:08.249 "subsystem": "keyring", 00:13:08.249 "config": [] 00:13:08.249 }, 00:13:08.249 { 00:13:08.249 "subsystem": "iobuf", 00:13:08.249 "config": [ 00:13:08.249 { 00:13:08.249 "method": "iobuf_set_options", 00:13:08.249 "params": { 00:13:08.249 "small_pool_count": 8192, 00:13:08.249 "large_pool_count": 1024, 00:13:08.249 "small_bufsize": 8192, 00:13:08.249 "large_bufsize": 135168 00:13:08.249 } 00:13:08.249 } 00:13:08.249 ] 00:13:08.249 }, 00:13:08.249 { 00:13:08.249 "subsystem": "sock", 00:13:08.249 "config": [ 00:13:08.249 { 00:13:08.249 "method": "sock_set_default_impl", 00:13:08.249 "params": { 00:13:08.249 "impl_name": "posix" 00:13:08.249 } 00:13:08.249 }, 00:13:08.249 { 00:13:08.249 "method": "sock_impl_set_options", 00:13:08.249 "params": { 00:13:08.249 "impl_name": "ssl", 00:13:08.249 "recv_buf_size": 4096, 00:13:08.249 "send_buf_size": 4096, 00:13:08.249 "enable_recv_pipe": true, 00:13:08.249 "enable_quickack": false, 00:13:08.249 "enable_placement_id": 0, 00:13:08.249 "enable_zerocopy_send_server": true, 00:13:08.249 "enable_zerocopy_send_client": false, 00:13:08.249 "zerocopy_threshold": 0, 00:13:08.249 "tls_version": 0, 00:13:08.249 "enable_ktls": false 00:13:08.249 } 00:13:08.249 }, 00:13:08.249 { 00:13:08.249 "method": "sock_impl_set_options", 00:13:08.249 "params": { 00:13:08.249 "impl_name": "posix", 00:13:08.249 "recv_buf_size": 2097152, 00:13:08.249 "send_buf_size": 2097152, 00:13:08.249 "enable_recv_pipe": true, 00:13:08.249 "enable_quickack": false, 00:13:08.249 "enable_placement_id": 0, 00:13:08.249 "enable_zerocopy_send_server": true, 00:13:08.249 "enable_zerocopy_send_client": false, 00:13:08.249 "zerocopy_threshold": 0, 00:13:08.249 
"tls_version": 0, 00:13:08.249 "enable_ktls": false 00:13:08.249 } 00:13:08.249 } 00:13:08.249 ] 00:13:08.249 }, 00:13:08.249 { 00:13:08.249 "subsystem": "vmd", 00:13:08.249 "config": [] 00:13:08.249 }, 00:13:08.249 { 00:13:08.249 "subsystem": "accel", 00:13:08.249 "config": [ 00:13:08.249 { 00:13:08.249 "method": "accel_set_options", 00:13:08.249 "params": { 00:13:08.249 "small_cache_size": 128, 00:13:08.249 "large_cache_size": 16, 00:13:08.249 "task_count": 2048, 00:13:08.249 "sequence_count": 2048, 00:13:08.249 "buf_count": 2048 00:13:08.249 } 00:13:08.249 } 00:13:08.249 ] 00:13:08.249 }, 00:13:08.249 { 00:13:08.249 "subsystem": "bdev", 00:13:08.249 "config": [ 00:13:08.249 { 00:13:08.249 "method": "bdev_set_options", 00:13:08.249 "params": { 00:13:08.249 "bdev_io_pool_size": 65535, 00:13:08.249 "bdev_io_cache_size": 256, 00:13:08.249 "bdev_auto_examine": true, 00:13:08.249 "iobuf_small_cache_size": 128, 00:13:08.250 "iobuf_large_cache_size": 16 00:13:08.250 } 00:13:08.250 }, 00:13:08.250 { 00:13:08.250 "method": "bdev_raid_set_options", 00:13:08.250 "params": { 00:13:08.250 "process_window_size_kb": 1024, 00:13:08.250 "process_max_bandwidth_mb_sec": 0 00:13:08.250 } 00:13:08.250 }, 00:13:08.250 { 00:13:08.250 "method": "bdev_iscsi_set_options", 00:13:08.250 "params": { 00:13:08.250 "timeout_sec": 30 00:13:08.250 } 00:13:08.250 }, 00:13:08.250 { 00:13:08.250 "method": "bdev_nvme_set_options", 00:13:08.250 "params": { 00:13:08.250 "action_on_timeout": "none", 00:13:08.250 "timeout_us": 0, 00:13:08.250 "timeout_admin_us": 0, 00:13:08.250 "keep_alive_timeout_ms": 10000, 00:13:08.250 "arbitration_burst": 0, 00:13:08.250 "low_priority_weight": 0, 00:13:08.250 "medium_priority_weight": 0, 00:13:08.250 "high_priority_weight": 0, 00:13:08.250 "nvme_adminq_poll_period_us": 10000, 00:13:08.250 "nvme_ioq_poll_period_us": 0, 00:13:08.250 "io_queue_requests": 0, 00:13:08.250 "delay_cmd_submit": true, 00:13:08.250 "transport_retry_count": 4, 00:13:08.250 "bdev_retry_count": 3, 00:13:08.250 "transport_ack_timeout": 0, 00:13:08.250 "ctrlr_loss_timeout_sec": 0, 00:13:08.250 "reconnect_delay_sec": 0, 00:13:08.250 "fast_io_fail_timeout_sec": 0, 00:13:08.250 "disable_auto_failback": false, 00:13:08.250 "generate_uuids": false, 00:13:08.250 "transport_tos": 0, 00:13:08.250 "nvme_error_stat": false, 00:13:08.250 "rdma_srq_size": 0, 00:13:08.250 "io_path_stat": false, 00:13:08.250 "allow_accel_sequence": false, 00:13:08.250 "rdma_max_cq_size": 0, 00:13:08.250 "rdma_cm_event_timeout_ms": 0, 00:13:08.250 "dhchap_digests": [ 00:13:08.250 "sha256", 00:13:08.250 "sha384", 00:13:08.250 "sha512" 00:13:08.250 ], 00:13:08.250 "dhchap_dhgroups": [ 00:13:08.250 "null", 00:13:08.250 "ffdhe2048", 00:13:08.250 "ffdhe3072", 00:13:08.250 "ffdhe4096", 00:13:08.250 "ffdhe6144", 00:13:08.250 "ffdhe8192" 00:13:08.250 ] 00:13:08.250 } 00:13:08.250 }, 00:13:08.250 { 00:13:08.250 "method": "bdev_nvme_set_hotplug", 00:13:08.250 "params": { 00:13:08.250 "period_us": 100000, 00:13:08.250 "enable": false 00:13:08.250 } 00:13:08.250 }, 00:13:08.250 { 00:13:08.250 "method": "bdev_malloc_create", 00:13:08.250 "params": { 00:13:08.250 "name": "malloc0", 00:13:08.250 "num_blocks": 8192, 00:13:08.250 "block_size": 4096, 00:13:08.250 "physical_block_size": 4096, 00:13:08.250 "uuid": "0bbc6340-e56c-4b5f-89c5-0abd5f0c0cf7", 00:13:08.250 "optimal_io_boundary": 0, 00:13:08.250 "md_size": 0, 00:13:08.250 "dif_type": 0, 00:13:08.250 "dif_is_head_of_md": false, 00:13:08.250 "dif_pi_format": 0 00:13:08.250 } 00:13:08.250 }, 00:13:08.250 { 
00:13:08.250 "method": "bdev_wait_for_examine" 00:13:08.250 } 00:13:08.250 ] 00:13:08.250 }, 00:13:08.250 { 00:13:08.250 "subsystem": "scsi", 00:13:08.250 "config": null 00:13:08.250 }, 00:13:08.250 { 00:13:08.250 "subsystem": "scheduler", 00:13:08.250 "config": [ 00:13:08.250 { 00:13:08.250 "method": "framework_set_scheduler", 00:13:08.250 "params": { 00:13:08.250 "name": "static" 00:13:08.250 } 00:13:08.250 } 00:13:08.250 ] 00:13:08.250 }, 00:13:08.250 { 00:13:08.250 "subsystem": "vhost_scsi", 00:13:08.250 "config": [] 00:13:08.250 }, 00:13:08.250 { 00:13:08.250 "subsystem": "vhost_blk", 00:13:08.250 "config": [] 00:13:08.250 }, 00:13:08.250 { 00:13:08.250 "subsystem": "ublk", 00:13:08.250 "config": [ 00:13:08.250 { 00:13:08.250 "method": "ublk_create_target", 00:13:08.250 "params": { 00:13:08.250 "cpumask": "1" 00:13:08.250 } 00:13:08.250 }, 00:13:08.250 { 00:13:08.250 "method": "ublk_start_disk", 00:13:08.250 "params": { 00:13:08.250 "bdev_name": "malloc0", 00:13:08.250 "ublk_id": 0, 00:13:08.250 "num_queues": 1, 00:13:08.250 "queue_depth": 128 00:13:08.250 } 00:13:08.250 } 00:13:08.250 ] 00:13:08.250 }, 00:13:08.250 { 00:13:08.250 "subsystem": "nbd", 00:13:08.250 "config": [] 00:13:08.250 }, 00:13:08.250 { 00:13:08.250 "subsystem": "nvmf", 00:13:08.250 "config": [ 00:13:08.250 { 00:13:08.250 "method": "nvmf_set_config", 00:13:08.250 "params": { 00:13:08.250 "discovery_filter": "match_any", 00:13:08.250 "admin_cmd_passthru": { 00:13:08.250 "identify_ctrlr": false 00:13:08.250 }, 00:13:08.250 "dhchap_digests": [ 00:13:08.250 "sha256", 00:13:08.250 "sha384", 00:13:08.250 "sha512" 00:13:08.250 ], 00:13:08.250 "dhchap_dhgroups": [ 00:13:08.250 "null", 00:13:08.250 "ffdhe2048", 00:13:08.250 "ffdhe3072", 00:13:08.250 "ffdhe4096", 00:13:08.250 "ffdhe6144", 00:13:08.250 "ffdhe8192" 00:13:08.250 ] 00:13:08.250 } 00:13:08.250 }, 00:13:08.250 { 00:13:08.250 "method": "nvmf_set_max_subsystems", 00:13:08.250 "params": { 00:13:08.250 "max_subsystems": 1024 00:13:08.250 } 00:13:08.250 }, 00:13:08.250 { 00:13:08.250 "method": "nvmf_set_crdt", 00:13:08.250 "params": { 00:13:08.250 "crdt1": 0, 00:13:08.250 "crdt2": 0, 00:13:08.250 "crdt3": 0 00:13:08.250 } 00:13:08.250 } 00:13:08.250 ] 00:13:08.250 }, 00:13:08.250 { 00:13:08.250 "subsystem": "iscsi", 00:13:08.250 "config": [ 00:13:08.250 { 00:13:08.250 "method": "iscsi_set_options", 00:13:08.250 "params": { 00:13:08.250 "node_base": "iqn.2016-06.io.spdk", 00:13:08.250 "max_sessions": 128, 00:13:08.250 "max_connections_per_session": 2, 00:13:08.250 "max_queue_depth": 64, 00:13:08.250 "default_time2wait": 2, 00:13:08.250 "default_time2retain": 20, 00:13:08.250 "first_burst_length": 8192, 00:13:08.250 "immediate_data": true, 00:13:08.250 "allow_duplicated_isid": false, 00:13:08.250 "error_recovery_level": 0, 00:13:08.250 "nop_timeout": 60, 00:13:08.250 "nop_in_interval": 30, 00:13:08.250 "disable_chap": false, 00:13:08.250 "require_chap": false, 00:13:08.250 "mutual_chap": false, 00:13:08.250 "chap_group": 0, 00:13:08.250 "max_large_datain_per_connection": 64, 00:13:08.250 "max_r2t_per_connection": 4, 00:13:08.250 "pdu_pool_size": 36864, 00:13:08.250 "immediate_data_pool_size": 16384, 00:13:08.250 "data_out_pool_size": 2048 00:13:08.250 } 00:13:08.250 } 00:13:08.250 ] 00:13:08.250 } 00:13:08.250 ] 00:13:08.250 }' 00:13:08.250 [2024-09-28 01:26:03.993598] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:13:08.250 [2024-09-28 01:26:03.993723] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71170 ] 00:13:08.250 [2024-09-28 01:26:04.139971] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:08.508 [2024-09-28 01:26:04.295381] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:09.074 [2024-09-28 01:26:04.918209] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:13:09.074 [2024-09-28 01:26:04.918848] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:09.074 [2024-09-28 01:26:04.926298] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:13:09.074 [2024-09-28 01:26:04.926358] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:13:09.074 [2024-09-28 01:26:04.926364] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:09.074 [2024-09-28 01:26:04.926369] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:09.074 [2024-09-28 01:26:04.934301] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:09.074 [2024-09-28 01:26:04.934318] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:09.074 [2024-09-28 01:26:04.942217] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:09.074 [2024-09-28 01:26:04.942291] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:09.074 [2024-09-28 01:26:04.959219] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:13:09.074 01:26:05 ublk.test_save_ublk_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:09.074 01:26:05 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # return 0 00:13:09.074 01:26:05 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:13:09.074 01:26:05 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:13:09.075 01:26:05 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.075 01:26:05 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:09.332 01:26:05 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.332 01:26:05 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:13:09.332 01:26:05 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:13:09.332 01:26:05 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 71170 00:13:09.332 01:26:05 ublk.test_save_ublk_config -- common/autotest_common.sh@950 -- # '[' -z 71170 ']' 00:13:09.332 01:26:05 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # kill -0 71170 00:13:09.332 01:26:05 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # uname 00:13:09.332 01:26:05 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:09.332 01:26:05 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71170 00:13:09.332 01:26:05 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:09.332 01:26:05 ublk.test_save_ublk_config -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:09.332 killing process with pid 71170 00:13:09.332 01:26:05 ublk.test_save_ublk_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71170' 00:13:09.332 01:26:05 ublk.test_save_ublk_config -- common/autotest_common.sh@969 -- # kill 71170 00:13:09.332 01:26:05 ublk.test_save_ublk_config -- common/autotest_common.sh@974 -- # wait 71170 00:13:10.266 [2024-09-28 01:26:06.130130] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:13:10.266 [2024-09-28 01:26:06.166277] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:10.266 [2024-09-28 01:26:06.166374] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:13:10.266 [2024-09-28 01:26:06.174265] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:10.266 [2024-09-28 01:26:06.174306] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:13:10.266 [2024-09-28 01:26:06.174312] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:13:10.266 [2024-09-28 01:26:06.174332] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:13:10.266 [2024-09-28 01:26:06.174437] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:13:11.641 01:26:07 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:13:11.641 00:13:11.641 real 0m7.357s 00:13:11.641 user 0m5.130s 00:13:11.641 sys 0m2.844s 00:13:11.641 01:26:07 ublk.test_save_ublk_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:11.641 01:26:07 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:11.641 ************************************ 00:13:11.641 END TEST test_save_ublk_config 00:13:11.641 ************************************ 00:13:11.641 01:26:07 ublk -- ublk/ublk.sh@139 -- # spdk_pid=71248 00:13:11.641 01:26:07 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:11.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:11.641 01:26:07 ublk -- ublk/ublk.sh@141 -- # waitforlisten 71248 00:13:11.641 01:26:07 ublk -- common/autotest_common.sh@831 -- # '[' -z 71248 ']' 00:13:11.641 01:26:07 ublk -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:11.641 01:26:07 ublk -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:11.641 01:26:07 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:13:11.641 01:26:07 ublk -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:11.641 01:26:07 ublk -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:11.641 01:26:07 ublk -- common/autotest_common.sh@10 -- # set +x 00:13:11.641 [2024-09-28 01:26:07.531173] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
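(Note the mask: unlike the single-core target used for the config test, the main suite starts spdk_tgt with -m 0x3, which is why the startup that follows brings up two reactors, on cores 0 and 1. In sketch form, assuming the same defaults as this run:

    # -m 0x3 == 0b11 -> one reactor on core 0 and one on core 1
    build/bin/spdk_tgt -m 0x3 -L ublk &
    spdk_pid=$!
    waitforlisten "$spdk_pid"   # returns once /var/tmp/spdk.sock accepts RPCs
)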
00:13:11.641 [2024-09-28 01:26:07.531284] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71248 ] 00:13:11.900 [2024-09-28 01:26:07.666901] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:11.900 [2024-09-28 01:26:07.812615] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:13:11.900 [2024-09-28 01:26:07.812762] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:12.467 01:26:08 ublk -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:12.467 01:26:08 ublk -- common/autotest_common.sh@864 -- # return 0 00:13:12.467 01:26:08 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:13:12.467 01:26:08 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:12.467 01:26:08 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:12.467 01:26:08 ublk -- common/autotest_common.sh@10 -- # set +x 00:13:12.467 ************************************ 00:13:12.467 START TEST test_create_ublk 00:13:12.467 ************************************ 00:13:12.467 01:26:08 ublk.test_create_ublk -- common/autotest_common.sh@1125 -- # test_create_ublk 00:13:12.467 01:26:08 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:13:12.467 01:26:08 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.467 01:26:08 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:12.467 [2024-09-28 01:26:08.390209] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:13:12.467 [2024-09-28 01:26:08.391409] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:12.467 01:26:08 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.467 01:26:08 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:13:12.467 01:26:08 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:13:12.467 01:26:08 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.467 01:26:08 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:12.726 01:26:08 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.726 01:26:08 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:13:12.726 01:26:08 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:13:12.726 01:26:08 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.726 01:26:08 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:12.726 [2024-09-28 01:26:08.549375] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:13:12.726 [2024-09-28 01:26:08.549671] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:13:12.726 [2024-09-28 01:26:08.549685] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:12.726 [2024-09-28 01:26:08.549692] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:12.726 [2024-09-28 01:26:08.557435] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:12.726 [2024-09-28 01:26:08.557458] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:12.726 
[2024-09-28 01:26:08.565224] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:12.726 [2024-09-28 01:26:08.565703] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:12.726 [2024-09-28 01:26:08.575217] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:13:12.726 01:26:08 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.726 01:26:08 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:13:12.726 01:26:08 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:13:12.726 01:26:08 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:13:12.726 01:26:08 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.726 01:26:08 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:12.726 01:26:08 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.726 01:26:08 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:13:12.726 { 00:13:12.726 "ublk_device": "/dev/ublkb0", 00:13:12.726 "id": 0, 00:13:12.726 "queue_depth": 512, 00:13:12.726 "num_queues": 4, 00:13:12.726 "bdev_name": "Malloc0" 00:13:12.726 } 00:13:12.726 ]' 00:13:12.726 01:26:08 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:13:12.726 01:26:08 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:13:12.726 01:26:08 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:13:12.985 01:26:08 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:13:12.985 01:26:08 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:13:12.985 01:26:08 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:13:12.985 01:26:08 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:13:12.985 01:26:08 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:13:12.985 01:26:08 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:13:12.985 01:26:08 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:13:12.985 01:26:08 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:13:12.985 01:26:08 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:13:12.985 01:26:08 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:13:12.985 01:26:08 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:13:12.985 01:26:08 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:13:12.985 01:26:08 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:13:12.985 01:26:08 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:13:12.985 01:26:08 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:13:12.985 01:26:08 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:13:12.985 01:26:08 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:13:12.985 01:26:08 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
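(run_fio_test, sourced from test/lvol/common.sh, maps its positional arguments straight onto the fio command line assembled above: file, offset, size, rw mode, verify pattern, then free-form extra flags. In effect, the run below comes from:

    #            file         offset size      rw    pattern extra flags
    run_fio_test /dev/ublkb0  0      134217728 write 0xcc    '--time_based --runtime=10'
    # 0xcc expands to --do_verify=1 --verify=pattern --verify_pattern=0xcc; note fio's
    # own warning below that the time-based write phase uses all of the runtime, so
    # this run is effectively a 10-second pattern write with no read-back
)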
00:13:12.985 01:26:08 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:13:12.985 fio: verification read phase will never start because write phase uses all of runtime 00:13:12.985 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:13:12.985 fio-3.35 00:13:12.985 Starting 1 process 00:13:25.182 00:13:25.182 fio_test: (groupid=0, jobs=1): err= 0: pid=71287: Sat Sep 28 01:26:18 2024 00:13:25.182 write: IOPS=17.7k, BW=69.1MiB/s (72.5MB/s)(691MiB/10001msec); 0 zone resets 00:13:25.182 clat (usec): min=36, max=8954, avg=55.66, stdev=126.81 00:13:25.182 lat (usec): min=36, max=8955, avg=56.14, stdev=126.83 00:13:25.182 clat percentiles (usec): 00:13:25.182 | 1.00th=[ 41], 5.00th=[ 43], 10.00th=[ 44], 20.00th=[ 46], 00:13:25.182 | 30.00th=[ 47], 40.00th=[ 49], 50.00th=[ 50], 60.00th=[ 51], 00:13:25.182 | 70.00th=[ 52], 80.00th=[ 54], 90.00th=[ 58], 95.00th=[ 63], 00:13:25.182 | 99.00th=[ 73], 99.50th=[ 81], 99.90th=[ 2999], 99.95th=[ 3458], 00:13:25.182 | 99.99th=[ 3720] 00:13:25.182 bw ( KiB/s): min=32808, max=78528, per=99.70%, avg=70582.74, stdev=11929.70, samples=19 00:13:25.182 iops : min= 8202, max=19632, avg=17645.68, stdev=2982.43, samples=19 00:13:25.182 lat (usec) : 50=54.04%, 100=45.59%, 250=0.13%, 500=0.02%, 750=0.01% 00:13:25.182 lat (usec) : 1000=0.01% 00:13:25.182 lat (msec) : 2=0.05%, 4=0.14%, 10=0.01% 00:13:25.182 cpu : usr=3.34%, sys=13.90%, ctx=177025, majf=0, minf=798 00:13:25.182 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:25.182 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:25.182 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:25.182 issued rwts: total=0,177007,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:25.182 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:25.182 00:13:25.182 Run status group 0 (all jobs): 00:13:25.182 WRITE: bw=69.1MiB/s (72.5MB/s), 69.1MiB/s-69.1MiB/s (72.5MB/s-72.5MB/s), io=691MiB (725MB), run=10001-10001msec 00:13:25.182 00:13:25.182 Disk stats (read/write): 00:13:25.182 ublkb0: ios=0/175040, merge=0/0, ticks=0/8273, in_queue=8273, util=99.10% 00:13:25.182 01:26:18 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:13:25.182 01:26:18 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.182 01:26:18 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:25.182 [2024-09-28 01:26:18.985855] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:13:25.182 [2024-09-28 01:26:19.023665] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:25.182 [2024-09-28 01:26:19.024581] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:13:25.182 [2024-09-28 01:26:19.031224] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:25.182 [2024-09-28 01:26:19.031453] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:13:25.182 [2024-09-28 01:26:19.031467] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:13:25.182 01:26:19 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.182 01:26:19 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd 
ublk_stop_disk 0 00:13:25.182 01:26:19 ublk.test_create_ublk -- common/autotest_common.sh@650 -- # local es=0 00:13:25.182 01:26:19 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:13:25.182 01:26:19 ublk.test_create_ublk -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:13:25.182 01:26:19 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:25.182 01:26:19 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:13:25.182 01:26:19 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:25.182 01:26:19 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # rpc_cmd ublk_stop_disk 0 00:13:25.182 01:26:19 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.182 01:26:19 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:25.182 [2024-09-28 01:26:19.047273] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:13:25.182 request: 00:13:25.182 { 00:13:25.182 "ublk_id": 0, 00:13:25.182 "method": "ublk_stop_disk", 00:13:25.182 "req_id": 1 00:13:25.182 } 00:13:25.182 Got JSON-RPC error response 00:13:25.182 response: 00:13:25.182 { 00:13:25.182 "code": -19, 00:13:25.182 "message": "No such device" 00:13:25.182 } 00:13:25.182 01:26:19 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:13:25.182 01:26:19 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # es=1 00:13:25.182 01:26:19 ublk.test_create_ublk -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:25.182 01:26:19 ublk.test_create_ublk -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:25.182 01:26:19 ublk.test_create_ublk -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:25.182 01:26:19 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:13:25.182 01:26:19 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.182 01:26:19 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:25.182 [2024-09-28 01:26:19.063270] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:13:25.182 [2024-09-28 01:26:19.065071] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:13:25.182 [2024-09-28 01:26:19.065102] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:13:25.182 01:26:19 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.182 01:26:19 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:13:25.182 01:26:19 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.182 01:26:19 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:25.182 01:26:19 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.182 01:26:19 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:13:25.182 01:26:19 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:13:25.182 01:26:19 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.182 01:26:19 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:25.182 01:26:19 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.182 01:26:19 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:13:25.182 01:26:19 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:13:25.182 01:26:19 
ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:13:25.182 01:26:19 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:13:25.182 01:26:19 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.182 01:26:19 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:25.182 01:26:19 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.182 01:26:19 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:13:25.182 01:26:19 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:13:25.182 01:26:19 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:13:25.182 00:13:25.182 real 0m11.137s 00:13:25.182 user 0m0.632s 00:13:25.182 sys 0m1.457s 00:13:25.182 01:26:19 ublk.test_create_ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:25.182 01:26:19 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:25.182 ************************************ 00:13:25.182 END TEST test_create_ublk 00:13:25.182 ************************************ 00:13:25.182 01:26:19 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:13:25.182 01:26:19 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:25.182 01:26:19 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:25.182 01:26:19 ublk -- common/autotest_common.sh@10 -- # set +x 00:13:25.182 ************************************ 00:13:25.182 START TEST test_create_multi_ublk 00:13:25.182 ************************************ 00:13:25.182 01:26:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@1125 -- # test_create_multi_ublk 00:13:25.182 01:26:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:13:25.182 01:26:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.182 01:26:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:25.182 [2024-09-28 01:26:19.562206] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:13:25.182 [2024-09-28 01:26:19.563391] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:25.182 01:26:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.182 01:26:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:13:25.182 01:26:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:13:25.182 01:26:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:25.182 01:26:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:13:25.182 01:26:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.182 01:26:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:25.182 01:26:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.182 01:26:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:13:25.182 01:26:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:13:25.182 01:26:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.182 01:26:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:25.183 [2024-09-28 01:26:19.778314] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 
num_queues 4 queue_depth 512 00:13:25.183 [2024-09-28 01:26:19.778617] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:13:25.183 [2024-09-28 01:26:19.778630] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:25.183 [2024-09-28 01:26:19.778638] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:25.183 [2024-09-28 01:26:19.790419] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:25.183 [2024-09-28 01:26:19.790439] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:25.183 [2024-09-28 01:26:19.802217] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:25.183 [2024-09-28 01:26:19.802705] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:25.183 [2024-09-28 01:26:19.833214] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:13:25.183 01:26:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.183 01:26:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:13:25.183 01:26:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:25.183 01:26:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:13:25.183 01:26:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.183 01:26:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:25.183 01:26:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.183 01:26:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:13:25.183 01:26:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:13:25.183 01:26:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.183 01:26:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:25.183 [2024-09-28 01:26:20.072321] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:13:25.183 [2024-09-28 01:26:20.072622] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:13:25.183 [2024-09-28 01:26:20.072636] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:13:25.183 [2024-09-28 01:26:20.072648] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:13:25.183 [2024-09-28 01:26:20.084230] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:25.183 [2024-09-28 01:26:20.084248] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:25.183 [2024-09-28 01:26:20.096220] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:25.183 [2024-09-28 01:26:20.096721] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:13:25.183 [2024-09-28 01:26:20.121223] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:13:25.183 01:26:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.183 01:26:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:13:25.183 01:26:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:25.183 
01:26:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:13:25.183 01:26:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.183 01:26:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:25.183 01:26:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.183 01:26:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:13:25.183 01:26:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:13:25.183 01:26:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.183 01:26:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:25.183 [2024-09-28 01:26:20.360314] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:13:25.183 [2024-09-28 01:26:20.360611] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:13:25.183 [2024-09-28 01:26:20.360623] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:13:25.183 [2024-09-28 01:26:20.360629] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:13:25.183 [2024-09-28 01:26:20.372236] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:25.183 [2024-09-28 01:26:20.372258] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:25.183 [2024-09-28 01:26:20.384215] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:25.183 [2024-09-28 01:26:20.384708] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:13:25.183 [2024-09-28 01:26:20.388156] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:13:25.183 01:26:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.183 01:26:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:13:25.183 01:26:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:25.183 01:26:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:13:25.183 01:26:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.183 01:26:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:25.183 01:26:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.183 01:26:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:13:25.183 01:26:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:13:25.183 01:26:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.183 01:26:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:25.183 [2024-09-28 01:26:20.625317] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:13:25.183 [2024-09-28 01:26:20.625607] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:13:25.183 [2024-09-28 01:26:20.625620] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:13:25.183 [2024-09-28 01:26:20.625626] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:13:25.183 
[2024-09-28 01:26:20.637228] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:25.183 [2024-09-28 01:26:20.637246] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:25.183 [2024-09-28 01:26:20.649222] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:25.183 [2024-09-28 01:26:20.649710] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:13:25.183 [2024-09-28 01:26:20.685219] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:13:25.183 01:26:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.183 01:26:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:13:25.183 01:26:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:13:25.183 01:26:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.183 01:26:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:25.183 01:26:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.183 01:26:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:13:25.183 { 00:13:25.183 "ublk_device": "/dev/ublkb0", 00:13:25.183 "id": 0, 00:13:25.183 "queue_depth": 512, 00:13:25.183 "num_queues": 4, 00:13:25.183 "bdev_name": "Malloc0" 00:13:25.183 }, 00:13:25.183 { 00:13:25.183 "ublk_device": "/dev/ublkb1", 00:13:25.183 "id": 1, 00:13:25.183 "queue_depth": 512, 00:13:25.183 "num_queues": 4, 00:13:25.183 "bdev_name": "Malloc1" 00:13:25.183 }, 00:13:25.183 { 00:13:25.183 "ublk_device": "/dev/ublkb2", 00:13:25.183 "id": 2, 00:13:25.183 "queue_depth": 512, 00:13:25.183 "num_queues": 4, 00:13:25.183 "bdev_name": "Malloc2" 00:13:25.183 }, 00:13:25.183 { 00:13:25.183 "ublk_device": "/dev/ublkb3", 00:13:25.183 "id": 3, 00:13:25.183 "queue_depth": 512, 00:13:25.183 "num_queues": 4, 00:13:25.183 "bdev_name": "Malloc3" 00:13:25.183 } 00:13:25.183 ]' 00:13:25.183 01:26:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:13:25.183 01:26:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:25.183 01:26:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:13:25.183 01:26:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:13:25.183 01:26:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:13:25.183 01:26:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:13:25.183 01:26:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:13:25.183 01:26:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:13:25.183 01:26:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:13:25.183 01:26:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:13:25.183 01:26:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:13:25.183 01:26:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:13:25.183 01:26:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:25.183 01:26:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:13:25.183 01:26:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = 
\/\d\e\v\/\u\b\l\k\b\1 ]] 00:13:25.183 01:26:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:13:25.183 01:26:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:13:25.183 01:26:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:13:25.183 01:26:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:13:25.183 01:26:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:13:25.183 01:26:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:13:25.183 01:26:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:13:25.183 01:26:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:13:25.183 01:26:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:25.183 01:26:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:13:25.183 01:26:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:13:25.183 01:26:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:13:25.442 01:26:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:13:25.442 01:26:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:13:25.442 01:26:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:13:25.442 01:26:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:13:25.442 01:26:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:13:25.442 01:26:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:13:25.442 01:26:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:13:25.442 01:26:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:25.442 01:26:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:13:25.442 01:26:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:13:25.442 01:26:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:13:25.442 01:26:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:13:25.442 01:26:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:13:25.442 01:26:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:13:25.442 01:26:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:13:25.442 01:26:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:13:25.442 01:26:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:13:25.442 01:26:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:13:25.442 01:26:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:13:25.442 01:26:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:13:25.442 01:26:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:25.442 01:26:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:13:25.442 01:26:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.442 01:26:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:25.442 [2024-09-28 01:26:21.369285] ublk.c: 
469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:13:25.701 [2024-09-28 01:26:21.396663] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:25.701 [2024-09-28 01:26:21.405658] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:13:25.701 [2024-09-28 01:26:21.417214] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:25.701 [2024-09-28 01:26:21.417449] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:13:25.701 [2024-09-28 01:26:21.417463] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:13:25.701 01:26:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.701 01:26:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:25.701 01:26:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:13:25.701 01:26:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.701 01:26:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:25.701 [2024-09-28 01:26:21.441265] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:13:25.701 [2024-09-28 01:26:21.477253] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:25.701 [2024-09-28 01:26:21.477906] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:13:25.701 [2024-09-28 01:26:21.489282] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:25.701 [2024-09-28 01:26:21.489509] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:13:25.701 [2024-09-28 01:26:21.489518] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:13:25.701 01:26:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.701 01:26:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:25.701 01:26:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:13:25.701 01:26:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.701 01:26:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:25.701 [2024-09-28 01:26:21.513276] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:13:25.701 [2024-09-28 01:26:21.561251] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:25.701 [2024-09-28 01:26:21.561851] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:13:25.701 [2024-09-28 01:26:21.571240] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:25.701 [2024-09-28 01:26:21.571461] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:13:25.701 [2024-09-28 01:26:21.571474] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:13:25.701 01:26:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.701 01:26:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:25.701 01:26:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:13:25.701 01:26:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.701 01:26:21 ublk.test_create_multi_ublk -- 
common/autotest_common.sh@10 -- # set +x 00:13:25.701 [2024-09-28 01:26:21.597276] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:13:25.959 [2024-09-28 01:26:21.633247] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:25.959 [2024-09-28 01:26:21.633834] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:13:25.959 [2024-09-28 01:26:21.645214] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:25.959 [2024-09-28 01:26:21.645442] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:13:25.959 [2024-09-28 01:26:21.645456] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:13:25.959 01:26:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.959 01:26:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:13:25.959 [2024-09-28 01:26:21.849280] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:13:25.959 [2024-09-28 01:26:21.851036] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:13:25.959 [2024-09-28 01:26:21.851065] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:13:25.959 01:26:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:13:25.959 01:26:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:25.959 01:26:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:13:25.959 01:26:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.959 01:26:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:26.525 01:26:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.525 01:26:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:26.525 01:26:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:13:26.525 01:26:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.525 01:26:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:27.091 01:26:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.091 01:26:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:27.091 01:26:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:13:27.091 01:26:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.091 01:26:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:27.091 01:26:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.091 01:26:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:27.091 01:26:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:13:27.091 01:26:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.091 01:26:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:27.348 01:26:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.348 01:26:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:13:27.348 01:26:23 
ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:13:27.348 01:26:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.348 01:26:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:27.348 01:26:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.348 01:26:23 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:13:27.348 01:26:23 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:13:27.348 01:26:23 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:13:27.348 01:26:23 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:13:27.348 01:26:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.348 01:26:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:27.348 01:26:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.348 01:26:23 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:13:27.348 01:26:23 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:13:27.607 01:26:23 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:13:27.607 00:13:27.607 real 0m3.738s 00:13:27.607 user 0m0.812s 00:13:27.607 sys 0m0.156s 00:13:27.607 01:26:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:27.607 ************************************ 00:13:27.607 01:26:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:27.607 END TEST test_create_multi_ublk 00:13:27.607 ************************************ 00:13:27.607 01:26:23 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:13:27.607 01:26:23 ublk -- ublk/ublk.sh@147 -- # cleanup 00:13:27.607 01:26:23 ublk -- ublk/ublk.sh@130 -- # killprocess 71248 00:13:27.607 01:26:23 ublk -- common/autotest_common.sh@950 -- # '[' -z 71248 ']' 00:13:27.607 01:26:23 ublk -- common/autotest_common.sh@954 -- # kill -0 71248 00:13:27.607 01:26:23 ublk -- common/autotest_common.sh@955 -- # uname 00:13:27.607 01:26:23 ublk -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:27.607 01:26:23 ublk -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71248 00:13:27.607 01:26:23 ublk -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:27.607 01:26:23 ublk -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:27.607 killing process with pid 71248 00:13:27.607 01:26:23 ublk -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71248' 00:13:27.607 01:26:23 ublk -- common/autotest_common.sh@969 -- # kill 71248 00:13:27.607 01:26:23 ublk -- common/autotest_common.sh@974 -- # wait 71248 00:13:28.173 [2024-09-28 01:26:23.878504] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:13:28.173 [2024-09-28 01:26:23.878556] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:13:28.740 00:13:28.740 real 0m24.691s 00:13:28.740 user 0m35.125s 00:13:28.740 sys 0m9.980s 00:13:28.740 01:26:24 ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:28.740 01:26:24 ublk -- common/autotest_common.sh@10 -- # set +x 00:13:28.740 ************************************ 00:13:28.740 END TEST ublk 00:13:28.740 ************************************ 00:13:28.740 01:26:24 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:13:28.740 
01:26:24 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:28.740 01:26:24 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:28.740 01:26:24 -- common/autotest_common.sh@10 -- # set +x 00:13:28.740 ************************************ 00:13:28.740 START TEST ublk_recovery 00:13:28.740 ************************************ 00:13:28.740 01:26:24 ublk_recovery -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:13:29.001 * Looking for test storage... 00:13:29.001 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:13:29.001 01:26:24 ublk_recovery -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:13:29.001 01:26:24 ublk_recovery -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:13:29.001 01:26:24 ublk_recovery -- common/autotest_common.sh@1681 -- # lcov --version 00:13:29.001 01:26:24 ublk_recovery -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:13:29.001 01:26:24 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:29.001 01:26:24 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:29.001 01:26:24 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:29.001 01:26:24 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:13:29.001 01:26:24 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:13:29.001 01:26:24 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:13:29.001 01:26:24 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:13:29.001 01:26:24 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:13:29.001 01:26:24 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:13:29.001 01:26:24 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:13:29.001 01:26:24 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:29.001 01:26:24 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:13:29.001 01:26:24 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:13:29.001 01:26:24 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:29.001 01:26:24 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:29.001 01:26:24 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:13:29.001 01:26:24 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:13:29.001 01:26:24 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:29.001 01:26:24 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:13:29.001 01:26:24 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:13:29.001 01:26:24 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:13:29.001 01:26:24 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:13:29.001 01:26:24 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:29.001 01:26:24 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:13:29.001 01:26:24 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:13:29.001 01:26:24 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:29.001 01:26:24 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:29.001 01:26:24 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:13:29.001 01:26:24 ublk_recovery -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:29.001 01:26:24 ublk_recovery -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:13:29.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:29.001 --rc genhtml_branch_coverage=1 00:13:29.001 --rc genhtml_function_coverage=1 00:13:29.001 --rc genhtml_legend=1 00:13:29.001 --rc geninfo_all_blocks=1 00:13:29.001 --rc geninfo_unexecuted_blocks=1 00:13:29.001 00:13:29.001 ' 00:13:29.001 01:26:24 ublk_recovery -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:13:29.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:29.001 --rc genhtml_branch_coverage=1 00:13:29.001 --rc genhtml_function_coverage=1 00:13:29.001 --rc genhtml_legend=1 00:13:29.001 --rc geninfo_all_blocks=1 00:13:29.001 --rc geninfo_unexecuted_blocks=1 00:13:29.001 00:13:29.001 ' 00:13:29.001 01:26:24 ublk_recovery -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:13:29.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:29.001 --rc genhtml_branch_coverage=1 00:13:29.001 --rc genhtml_function_coverage=1 00:13:29.001 --rc genhtml_legend=1 00:13:29.001 --rc geninfo_all_blocks=1 00:13:29.001 --rc geninfo_unexecuted_blocks=1 00:13:29.001 00:13:29.001 ' 00:13:29.001 01:26:24 ublk_recovery -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:13:29.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:29.001 --rc genhtml_branch_coverage=1 00:13:29.001 --rc genhtml_function_coverage=1 00:13:29.001 --rc genhtml_legend=1 00:13:29.001 --rc geninfo_all_blocks=1 00:13:29.001 --rc geninfo_unexecuted_blocks=1 00:13:29.001 00:13:29.001 ' 00:13:29.001 01:26:24 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:13:29.001 01:26:24 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:13:29.001 01:26:24 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:13:29.001 01:26:24 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:13:29.001 01:26:24 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:13:29.001 01:26:24 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:13:29.002 01:26:24 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:13:29.002 01:26:24 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:13:29.002 01:26:24 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:13:29.002 01:26:24 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:13:29.002 01:26:24 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=71644 00:13:29.002 01:26:24 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:29.002 01:26:24 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 71644 00:13:29.002 01:26:24 ublk_recovery -- common/autotest_common.sh@831 -- # '[' -z 71644 ']' 00:13:29.002 01:26:24 ublk_recovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:29.002 01:26:24 ublk_recovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:29.002 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:29.002 01:26:24 ublk_recovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:29.002 01:26:24 ublk_recovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:29.002 01:26:24 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:13:29.002 01:26:24 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:13:29.002 [2024-09-28 01:26:24.858153] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:13:29.002 [2024-09-28 01:26:24.858263] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71644 ] 00:13:29.261 [2024-09-28 01:26:25.000292] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:29.261 [2024-09-28 01:26:25.145746] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:13:29.261 [2024-09-28 01:26:25.145837] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:29.825 01:26:25 ublk_recovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:29.825 01:26:25 ublk_recovery -- common/autotest_common.sh@864 -- # return 0 00:13:29.826 01:26:25 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:13:29.826 01:26:25 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.826 01:26:25 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:13:29.826 [2024-09-28 01:26:25.699211] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:13:29.826 [2024-09-28 01:26:25.700412] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:29.826 01:26:25 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.826 01:26:25 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:13:29.826 01:26:25 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.826 01:26:25 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:13:30.082 malloc0 00:13:30.082 01:26:25 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.082 01:26:25 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:13:30.082 01:26:25 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.082 01:26:25 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:13:30.082 [2024-09-28 01:26:25.786316] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:13:30.082 [2024-09-28 01:26:25.786394] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:13:30.082 [2024-09-28 01:26:25.786402] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:13:30.082 [2024-09-28 01:26:25.786408] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:13:30.082 [2024-09-28 01:26:25.794225] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:30.082 [2024-09-28 01:26:25.794243] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:30.082 [2024-09-28 01:26:25.802218] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:30.082 [2024-09-28 01:26:25.802326] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:13:30.082 [2024-09-28 01:26:25.826224] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:13:30.082 1 00:13:30.082 01:26:25 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.082 01:26:25 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:13:31.015 01:26:26 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=71673 00:13:31.015 01:26:26 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:13:31.015 01:26:26 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:13:31.015 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:31.015 fio-3.35 00:13:31.015 Starting 1 process 00:13:36.279 01:26:31 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 71644 00:13:36.279 01:26:31 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:13:41.589 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 71644 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:13:41.589 01:26:36 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=71789 00:13:41.589 01:26:36 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:41.589 01:26:36 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 71789 00:13:41.589 01:26:36 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:13:41.589 01:26:36 ublk_recovery -- common/autotest_common.sh@831 -- # '[' -z 71789 ']' 00:13:41.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:41.589 01:26:36 ublk_recovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:41.589 01:26:36 ublk_recovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:41.589 01:26:36 ublk_recovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:41.589 01:26:36 ublk_recovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:41.589 01:26:36 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:13:41.589 [2024-09-28 01:26:36.928342] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
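(Annotation: the recovery flow exercised by ublk_recovery.sh above can be reproduced by hand against a running target. Below is a minimal sketch in shell using only the RPCs that appear in this log; paths are relative to the SPDK repo root, and SPDK_PID is an illustrative variable standing for the first target's pid:

    # first target: ublk device backed by a 64 MiB malloc bdev (2 queues, queue depth 128)
    build/bin/spdk_tgt -m 0x3 -L ublk &
    SPDK_PID=$!
    scripts/rpc.py ublk_create_target
    scripts/rpc.py bdev_malloc_create -b malloc0 64 4096
    scripts/rpc.py ublk_start_disk malloc0 1 -q 2 -d 128    # exposes /dev/ublkb1

    # crash the target while fio I/O is in flight against /dev/ublkb1
    kill -9 "$SPDK_PID"

    # second target: recreate the same bdev, then recover ublk id 1 instead of starting it
    build/bin/spdk_tgt -m 0x3 -L ublk &
    scripts/rpc.py ublk_create_target
    scripts/rpc.py bdev_malloc_create -b malloc0 64 4096
    scripts/rpc.py ublk_recover_disk malloc0 1

The kernel keeps /dev/ublkb1 alive across the kill, which is why the 60-second fio job started above can stall through the crash and still run to completion once recovery finishes.)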
00:13:41.589 [2024-09-28 01:26:36.928469] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71789 ] 00:13:41.589 [2024-09-28 01:26:37.079011] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:41.589 [2024-09-28 01:26:37.310686] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:13:41.589 [2024-09-28 01:26:37.310767] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:42.161 01:26:38 ublk_recovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:42.161 01:26:38 ublk_recovery -- common/autotest_common.sh@864 -- # return 0 00:13:42.161 01:26:38 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:13:42.161 01:26:38 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.161 01:26:38 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:13:42.161 [2024-09-28 01:26:38.079227] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:13:42.161 [2024-09-28 01:26:38.080981] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:42.161 01:26:38 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.161 01:26:38 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:13:42.161 01:26:38 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.161 01:26:38 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:13:42.422 malloc0 00:13:42.422 01:26:38 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.422 01:26:38 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:13:42.422 01:26:38 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.422 01:26:38 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:13:42.422 [2024-09-28 01:26:38.214452] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:13:42.422 [2024-09-28 01:26:38.214532] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:13:42.422 [2024-09-28 01:26:38.214544] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:13:42.422 1 00:13:42.422 01:26:38 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.422 01:26:38 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 71673 00:13:42.422 [2024-09-28 01:26:38.223229] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:13:42.423 [2024-09-28 01:26:38.223268] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:13:43.364 [2024-09-28 01:26:39.223329] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:13:43.364 [2024-09-28 01:26:39.227224] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:13:43.364 [2024-09-28 01:26:39.227245] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:13:44.298 [2024-09-28 01:26:40.227276] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:13:44.557 [2024-09-28 01:26:40.231209] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:13:44.557 [2024-09-28 01:26:40.231230] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: 
Ublk 1 device state 1 00:13:45.492 [2024-09-28 01:26:41.231259] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:13:45.492 [2024-09-28 01:26:41.235216] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:13:45.492 [2024-09-28 01:26:41.235225] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:13:45.492 [2024-09-28 01:26:41.235235] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:13:45.492 [2024-09-28 01:26:41.235323] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:14:07.414 [2024-09-28 01:27:01.904215] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:14:07.414 [2024-09-28 01:27:01.910709] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:14:07.414 [2024-09-28 01:27:01.918395] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:14:07.414 [2024-09-28 01:27:01.918413] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:14:33.946 00:14:33.946 fio_test: (groupid=0, jobs=1): err= 0: pid=71680: Sat Sep 28 01:27:27 2024 00:14:33.946 read: IOPS=14.8k, BW=57.9MiB/s (60.7MB/s)(3475MiB/60002msec) 00:14:33.946 slat (nsec): min=1049, max=2882.2k, avg=4979.20, stdev=5004.33 00:14:33.946 clat (usec): min=478, max=30088k, avg=4409.58, stdev=262994.01 00:14:33.946 lat (usec): min=481, max=30088k, avg=4414.56, stdev=262994.01 00:14:33.946 clat percentiles (usec): 00:14:33.946 | 1.00th=[ 1663], 5.00th=[ 1762], 10.00th=[ 1795], 20.00th=[ 1811], 00:14:33.946 | 30.00th=[ 1827], 40.00th=[ 1844], 50.00th=[ 1876], 60.00th=[ 1893], 00:14:33.946 | 70.00th=[ 1975], 80.00th=[ 2343], 90.00th=[ 2638], 95.00th=[ 3097], 00:14:33.946 | 99.00th=[ 5080], 99.50th=[ 5407], 99.90th=[ 7767], 99.95th=[12256], 00:14:33.946 | 99.99th=[13173] 00:14:33.946 bw ( KiB/s): min=16968, max=131904, per=100.00%, avg=117052.27, stdev=20774.20, samples=60 00:14:33.946 iops : min= 4242, max=32976, avg=29263.07, stdev=5193.55, samples=60 00:14:33.946 write: IOPS=14.8k, BW=57.8MiB/s (60.7MB/s)(3471MiB/60002msec); 0 zone resets 00:14:33.946 slat (nsec): min=1071, max=711808, avg=5012.05, stdev=1687.01 00:14:33.946 clat (usec): min=508, max=30088k, avg=4217.50, stdev=247216.01 00:14:33.946 lat (usec): min=512, max=30088k, avg=4222.51, stdev=247216.01 00:14:33.946 clat percentiles (usec): 00:14:33.946 | 1.00th=[ 1696], 5.00th=[ 1844], 10.00th=[ 1876], 20.00th=[ 1893], 00:14:33.946 | 30.00th=[ 1926], 40.00th=[ 1942], 50.00th=[ 1958], 60.00th=[ 1975], 00:14:33.946 | 70.00th=[ 2040], 80.00th=[ 2442], 90.00th=[ 2704], 95.00th=[ 3032], 00:14:33.946 | 99.00th=[ 5080], 99.50th=[ 5473], 99.90th=[ 7832], 99.95th=[12256], 00:14:33.946 | 99.99th=[13173] 00:14:33.946 bw ( KiB/s): min=16752, max=131376, per=100.00%, avg=116862.67, stdev=20934.23, samples=60 00:14:33.946 iops : min= 4188, max=32844, avg=29215.67, stdev=5233.56, samples=60 00:14:33.946 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.02% 00:14:33.946 lat (msec) : 2=68.01%, 4=29.15%, 10=2.74%, 20=0.06%, >=2000=0.01% 00:14:33.946 cpu : usr=3.36%, sys=15.16%, ctx=59694, majf=0, minf=15 00:14:33.946 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:14:33.946 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:33.946 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.1% 00:14:33.946 issued rwts: total=889724,888474,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:33.946 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:33.946 00:14:33.946 Run status group 0 (all jobs): 00:14:33.946 READ: bw=57.9MiB/s (60.7MB/s), 57.9MiB/s-57.9MiB/s (60.7MB/s-60.7MB/s), io=3475MiB (3644MB), run=60002-60002msec 00:14:33.946 WRITE: bw=57.8MiB/s (60.7MB/s), 57.8MiB/s-57.8MiB/s (60.7MB/s-60.7MB/s), io=3471MiB (3639MB), run=60002-60002msec 00:14:33.946 00:14:33.946 Disk stats (read/write): 00:14:33.946 ublkb1: ios=886902/885612, merge=0/0, ticks=3867866/3619193, in_queue=7487060, util=99.88% 00:14:33.946 01:27:27 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:14:33.946 01:27:27 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.946 01:27:27 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:33.946 [2024-09-28 01:27:27.095967] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:14:33.946 [2024-09-28 01:27:27.133326] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:33.946 [2024-09-28 01:27:27.133480] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:14:33.946 [2024-09-28 01:27:27.140284] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:33.946 [2024-09-28 01:27:27.140383] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:14:33.946 [2024-09-28 01:27:27.140391] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:14:33.946 01:27:27 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.946 01:27:27 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:14:33.946 01:27:27 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.946 01:27:27 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:33.946 [2024-09-28 01:27:27.156298] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:33.946 [2024-09-28 01:27:27.162207] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:33.946 [2024-09-28 01:27:27.162246] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:14:33.946 01:27:27 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.947 01:27:27 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:14:33.947 01:27:27 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:14:33.947 01:27:27 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 71789 00:14:33.947 01:27:27 ublk_recovery -- common/autotest_common.sh@950 -- # '[' -z 71789 ']' 00:14:33.947 01:27:27 ublk_recovery -- common/autotest_common.sh@954 -- # kill -0 71789 00:14:33.947 01:27:27 ublk_recovery -- common/autotest_common.sh@955 -- # uname 00:14:33.947 01:27:27 ublk_recovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:33.947 01:27:27 ublk_recovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71789 00:14:33.947 01:27:27 ublk_recovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:33.947 01:27:27 ublk_recovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:33.947 killing process with pid 71789 00:14:33.947 01:27:27 ublk_recovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71789' 00:14:33.947 01:27:27 ublk_recovery -- common/autotest_common.sh@969 -- # kill 71789 00:14:33.947 01:27:27 ublk_recovery -- 
common/autotest_common.sh@974 -- # wait 71789 00:14:33.947 [2024-09-28 01:27:28.378962] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:33.947 [2024-09-28 01:27:28.379022] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:33.947 00:14:33.947 real 1m4.758s 00:14:33.947 user 1m46.635s 00:14:33.947 sys 0m23.118s 00:14:33.947 01:27:29 ublk_recovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:33.947 01:27:29 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:33.947 ************************************ 00:14:33.947 END TEST ublk_recovery 00:14:33.947 ************************************ 00:14:33.947 01:27:29 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:14:33.947 01:27:29 -- spdk/autotest.sh@256 -- # timing_exit lib 00:14:33.947 01:27:29 -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:33.947 01:27:29 -- common/autotest_common.sh@10 -- # set +x 00:14:33.947 01:27:29 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:14:33.947 01:27:29 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:14:33.947 01:27:29 -- spdk/autotest.sh@272 -- # '[' 0 -eq 1 ']' 00:14:33.947 01:27:29 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:14:33.947 01:27:29 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:14:33.947 01:27:29 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:14:33.947 01:27:29 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:14:33.947 01:27:29 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:14:33.947 01:27:29 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:14:33.947 01:27:29 -- spdk/autotest.sh@338 -- # '[' 1 -eq 1 ']' 00:14:33.947 01:27:29 -- spdk/autotest.sh@339 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:14:33.947 01:27:29 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:14:33.947 01:27:29 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:33.947 01:27:29 -- common/autotest_common.sh@10 -- # set +x 00:14:33.947 ************************************ 00:14:33.947 START TEST ftl 00:14:33.947 ************************************ 00:14:33.947 01:27:29 ftl -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:14:33.947 * Looking for test storage... 
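(Annotation: the lcov version check traced above for ublk_recovery, and again below for the ftl suite, boils down to `lt 1.15 2` in scripts/common.sh: split both versions on `.`, `-` and `:`, then compare the fields numerically. A simplified standalone sketch of that logic in plain Bash — assuming purely numeric fields, and not the exact scripts/common.sh implementation:

    cmp_versions() {    # usage: cmp_versions 1.15 '<' 2
        local IFS=.-:
        local -a ver1=($1) ver2=($3)
        local op=$2 v n=${#ver1[@]}
        (( ${#ver2[@]} > n )) && n=${#ver2[@]}
        for (( v = 0; v < n; v++ )); do
            local a=${ver1[v]:-0} b=${ver2[v]:-0}
            (( a < b )) && { [[ $op == '<' || $op == '<=' ]]; return; }
            (( a > b )) && { [[ $op == '>' || $op == '>=' ]]; return; }
        done
        [[ $op == '==' || $op == '<=' || $op == '>=' ]]
    }
    lt() { cmp_versions "$1" '<' "$2"; }    # lt 1.15 2 succeeds: 1 < 2 in the first field

Here the installed lcov is 1.15, which is less than 2, so the branch- and function-coverage LCOV_OPTS shown in the trace are exported.)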
00:14:33.947 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:14:33.947 01:27:29 ftl -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:14:33.947 01:27:29 ftl -- common/autotest_common.sh@1681 -- # lcov --version 00:14:33.947 01:27:29 ftl -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:14:33.947 01:27:29 ftl -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:14:33.947 01:27:29 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:33.947 01:27:29 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:33.947 01:27:29 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:33.947 01:27:29 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:14:33.947 01:27:29 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:14:33.947 01:27:29 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:14:33.947 01:27:29 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:14:33.947 01:27:29 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:14:33.947 01:27:29 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:14:33.947 01:27:29 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:14:33.947 01:27:29 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:33.947 01:27:29 ftl -- scripts/common.sh@344 -- # case "$op" in 00:14:33.947 01:27:29 ftl -- scripts/common.sh@345 -- # : 1 00:14:33.947 01:27:29 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:33.947 01:27:29 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:33.947 01:27:29 ftl -- scripts/common.sh@365 -- # decimal 1 00:14:33.947 01:27:29 ftl -- scripts/common.sh@353 -- # local d=1 00:14:33.947 01:27:29 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:33.947 01:27:29 ftl -- scripts/common.sh@355 -- # echo 1 00:14:33.947 01:27:29 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:14:33.947 01:27:29 ftl -- scripts/common.sh@366 -- # decimal 2 00:14:33.947 01:27:29 ftl -- scripts/common.sh@353 -- # local d=2 00:14:33.947 01:27:29 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:33.947 01:27:29 ftl -- scripts/common.sh@355 -- # echo 2 00:14:33.947 01:27:29 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:14:33.947 01:27:29 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:33.947 01:27:29 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:33.947 01:27:29 ftl -- scripts/common.sh@368 -- # return 0 00:14:33.947 01:27:29 ftl -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:33.947 01:27:29 ftl -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:14:33.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:33.947 --rc genhtml_branch_coverage=1 00:14:33.947 --rc genhtml_function_coverage=1 00:14:33.947 --rc genhtml_legend=1 00:14:33.947 --rc geninfo_all_blocks=1 00:14:33.947 --rc geninfo_unexecuted_blocks=1 00:14:33.947 00:14:33.947 ' 00:14:33.947 01:27:29 ftl -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:14:33.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:33.947 --rc genhtml_branch_coverage=1 00:14:33.947 --rc genhtml_function_coverage=1 00:14:33.947 --rc genhtml_legend=1 00:14:33.947 --rc geninfo_all_blocks=1 00:14:33.947 --rc geninfo_unexecuted_blocks=1 00:14:33.947 00:14:33.947 ' 00:14:33.947 01:27:29 ftl -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:14:33.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:33.947 --rc genhtml_branch_coverage=1 00:14:33.947 --rc genhtml_function_coverage=1 00:14:33.947 --rc 
00:14:33.947 01:27:29 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh
00:14:33.947 01:27:29 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh
00:14:33.947 01:27:29 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl
00:14:33.947 01:27:29 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl
00:14:33.947 01:27:29 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../..
00:14:33.947 01:27:29 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk
00:14:33.947 01:27:29 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:14:33.947 01:27:29 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]'
00:14:33.947 01:27:29 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]'
00:14:33.947 01:27:29 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:14:33.947 01:27:29 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:14:33.947 01:27:29 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]'
00:14:33.947 01:27:29 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]'
00:14:33.947 01:27:29 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:14:33.947 01:27:29 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:14:33.947 01:27:29 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid=
00:14:33.947 01:27:29 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid=
00:14:33.947 01:27:29 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:14:33.947 01:27:29 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:14:33.947 01:27:29 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]'
00:14:33.947 01:27:29 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]'
00:14:33.947 01:27:29 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:14:33.947 01:27:29 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:14:33.947 01:27:29 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:14:33.947 01:27:29 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:14:33.947 01:27:29 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid=
00:14:33.947 01:27:29 ftl -- ftl/common.sh@23 -- # spdk_ini_pid=
00:14:33.947 01:27:29 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:14:33.947 01:27:29 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:14:33.947 01:27:29 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:14:33.947 01:27:29 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT
00:14:33.947 01:27:29 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED=
00:14:33.947 01:27:29 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED=
00:14:33.947 01:27:29 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE=
00:14:33.947 01:27:29 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:14:34.206 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:14:34.206 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:14:34.206 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:14:34.206 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver
00:14:34.206 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver
00:14:34.206 01:27:30 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=72595
00:14:34.206 01:27:30 ftl -- ftl/ftl.sh@38 -- # waitforlisten 72595
00:14:34.206 01:27:30 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc
00:14:34.206 01:27:30 ftl -- common/autotest_common.sh@831 -- # '[' -z 72595 ']'
00:14:34.206 01:27:30 ftl -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:14:34.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:14:34.206 01:27:30 ftl -- common/autotest_common.sh@836 -- # local max_retries=100
00:14:34.206 01:27:30 ftl -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:14:34.206 01:27:30 ftl -- common/autotest_common.sh@840 -- # xtrace_disable
00:14:34.206 01:27:30 ftl -- common/autotest_common.sh@10 -- # set +x
00:14:34.467 [2024-09-28 01:27:30.163496] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization...
00:14:34.467 [2024-09-28 01:27:30.163590] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72595 ]
00:14:34.467 [2024-09-28 01:27:30.309504] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:34.728 [2024-09-28 01:27:30.487884] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:14:35.298 01:27:30 ftl -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:14:35.298 01:27:30 ftl -- common/autotest_common.sh@864 -- # return 0
00:14:35.298 01:27:30 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d
00:14:35.298 01:27:31 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init
00:14:36.237 01:27:32 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62
00:14:36.237 01:27:32 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:14:36.807 01:27:32 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720
00:14:36.807 01:27:32 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address'
00:14:36.807 01:27:32 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs
00:14:36.807 01:27:32 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0
00:14:36.807 01:27:32 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks
00:14:36.807 01:27:32 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0
00:14:36.807 01:27:32 ftl -- ftl/ftl.sh@50 -- # break
00:14:36.807 01:27:32 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']'
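Note: ftl.sh@47 picks the write-buffer (cache) disk by asking the running target for all bdevs and filtering for non-zoned NVMe namespaces that carry 64-byte metadata and at least 1310720 blocks; 0000:00:10.0 is the only match here. The same query can be run by hand from the repo root against the live target (default RPC socket and jq on PATH are assumptions):

    ./scripts/rpc.py bdev_get_bdevs \
        | jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address'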
00:14:36.807 01:27:32 ftl -- ftl/ftl.sh@59 -- # base_size=1310720
00:14:36.807 01:27:32 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address'
00:14:36.807 01:27:32 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs
00:14:37.068 01:27:32 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0
00:14:37.068 01:27:32 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks
00:14:37.068 01:27:32 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0
00:14:37.068 01:27:32 ftl -- ftl/ftl.sh@63 -- # break
00:14:37.068 01:27:32 ftl -- ftl/ftl.sh@66 -- # killprocess 72595
00:14:37.068 01:27:32 ftl -- common/autotest_common.sh@950 -- # '[' -z 72595 ']'
00:14:37.068 01:27:32 ftl -- common/autotest_common.sh@954 -- # kill -0 72595
00:14:37.068 01:27:32 ftl -- common/autotest_common.sh@955 -- # uname
00:14:37.068 01:27:32 ftl -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:14:37.068 01:27:32 ftl -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72595
00:14:37.068 killing process with pid 72595
00:14:37.068 01:27:32 ftl -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:14:37.068 01:27:32 ftl -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:14:37.068 01:27:32 ftl -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72595'
00:14:37.068 01:27:32 ftl -- common/autotest_common.sh@969 -- # kill 72595
00:14:37.068 01:27:32 ftl -- common/autotest_common.sh@974 -- # wait 72595
00:14:39.000 01:27:34 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']'
00:14:39.000 01:27:34 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic
00:14:39.000 01:27:34 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:14:39.000 01:27:34 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable
00:14:39.000 01:27:34 ftl -- common/autotest_common.sh@10 -- # set +x
00:14:39.000 ************************************
00:14:39.001 START TEST ftl_fio_basic
00:14:39.001 ************************************
00:14:39.001 01:27:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic
00:14:39.001 * Looking for test storage...
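Note: fio.sh takes the base device, the cache device, and the fio suite name as positional arguments; the suite name selects the job list declared in the suite[] table traced further down. A sketch of the equivalent direct invocation with the devices just selected (running from the repo root as root is an assumption):

    sudo ./test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic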
00:14:39.001 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:14:39.001 01:27:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:14:39.001 01:27:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1681 -- # lcov --version 00:14:39.001 01:27:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:14:39.001 01:27:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:14:39.001 01:27:34 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:39.001 01:27:34 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:39.001 01:27:34 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:39.001 01:27:34 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:14:39.001 01:27:34 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:14:39.001 01:27:34 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:14:39.001 01:27:34 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:14:39.001 01:27:34 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:14:39.001 01:27:34 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:14:39.001 01:27:34 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:14:39.001 01:27:34 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:39.001 01:27:34 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:14:39.001 01:27:34 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:14:39.001 01:27:34 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:39.001 01:27:34 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:39.001 01:27:34 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:14:39.001 01:27:34 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:14:39.001 01:27:34 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:39.001 01:27:34 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:14:39.001 01:27:34 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:14:39.001 01:27:34 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:14:39.001 01:27:34 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:14:39.001 01:27:34 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:39.001 01:27:34 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:14:39.001 01:27:34 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:14:39.001 01:27:34 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:39.001 01:27:34 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:39.001 01:27:34 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:14:39.001 01:27:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:39.001 01:27:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:14:39.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:39.001 --rc genhtml_branch_coverage=1 00:14:39.001 --rc genhtml_function_coverage=1 00:14:39.001 --rc genhtml_legend=1 00:14:39.001 --rc geninfo_all_blocks=1 00:14:39.001 --rc geninfo_unexecuted_blocks=1 00:14:39.001 00:14:39.001 ' 00:14:39.001 01:27:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:14:39.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:39.001 --rc 
genhtml_branch_coverage=1 00:14:39.001 --rc genhtml_function_coverage=1 00:14:39.001 --rc genhtml_legend=1 00:14:39.001 --rc geninfo_all_blocks=1 00:14:39.001 --rc geninfo_unexecuted_blocks=1 00:14:39.001 00:14:39.001 ' 00:14:39.001 01:27:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:14:39.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:39.001 --rc genhtml_branch_coverage=1 00:14:39.001 --rc genhtml_function_coverage=1 00:14:39.001 --rc genhtml_legend=1 00:14:39.001 --rc geninfo_all_blocks=1 00:14:39.001 --rc geninfo_unexecuted_blocks=1 00:14:39.001 00:14:39.001 ' 00:14:39.001 01:27:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:14:39.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:39.001 --rc genhtml_branch_coverage=1 00:14:39.001 --rc genhtml_function_coverage=1 00:14:39.001 --rc genhtml_legend=1 00:14:39.001 --rc geninfo_all_blocks=1 00:14:39.001 --rc geninfo_unexecuted_blocks=1 00:14:39.001 00:14:39.001 ' 00:14:39.001 01:27:34 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:14:39.001 01:27:34 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:14:39.001 01:27:34 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:14:39.001 01:27:34 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:14:39.001 01:27:34 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:14:39.001 01:27:34 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:14:39.001 01:27:34 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:39.001 01:27:34 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:14:39.001 01:27:34 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:14:39.001 01:27:34 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:39.001 01:27:34 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:39.001 01:27:34 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:14:39.001 01:27:34 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:14:39.001 01:27:34 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:14:39.001 01:27:34 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:14:39.001 01:27:34 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:14:39.001 01:27:34 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:14:39.001 01:27:34 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:39.001 01:27:34 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:39.001 01:27:34 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:14:39.001 01:27:34 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:14:39.001 01:27:34 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:14:39.001 01:27:34 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:14:39.001 
01:27:34 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:14:39.001 01:27:34 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:14:39.001 01:27:34 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:14:39.001 01:27:34 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:14:39.001 01:27:34 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:39.001 01:27:34 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:39.001 01:27:34 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:14:39.001 01:27:34 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:14:39.001 01:27:34 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:14:39.001 01:27:34 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:14:39.001 01:27:34 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:39.001 01:27:34 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:14:39.001 01:27:34 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:14:39.001 01:27:34 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 randw-verify-depth128' 00:14:39.001 01:27:34 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:14:39.001 01:27:34 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:14:39.001 01:27:34 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:14:39.001 01:27:34 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:14:39.001 01:27:34 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:14:39.001 01:27:34 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:14:39.001 01:27:34 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:14:39.001 01:27:34 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:14:39.001 01:27:34 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:14:39.001 01:27:34 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=72732 00:14:39.001 01:27:34 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 72732 00:14:39.001 01:27:34 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:14:39.001 01:27:34 ftl.ftl_fio_basic -- common/autotest_common.sh@831 -- # '[' -z 72732 ']' 00:14:39.001 01:27:34 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:39.001 01:27:34 ftl.ftl_fio_basic -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:39.001 01:27:34 ftl.ftl_fio_basic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:39.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
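Note: waitforlisten gates the test on the freshly started spdk_tgt (pid 72732, core mask 7) answering on /var/tmp/spdk.sock, giving up after max_retries attempts. A minimal sketch of that wait pattern (the one-second interval and rpc_get_methods as the liveness probe are assumptions, not the literal loop in common/autotest_common.sh):

    pid=72732
    for _ in $(seq 1 100); do
        kill -0 "$pid" 2>/dev/null || { echo "spdk_tgt died" >&2; exit 1; }
        # an answered RPC means the socket is up and app init completed
        ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
        sleep 1
    done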
00:14:39.001 01:27:34 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:39.001 01:27:34 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:14:39.001 [2024-09-28 01:27:34.769278] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:14:39.001 [2024-09-28 01:27:34.769624] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72732 ] 00:14:39.359 [2024-09-28 01:27:34.922931] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:39.359 [2024-09-28 01:27:35.105117] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:14:39.359 [2024-09-28 01:27:35.105457] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:39.359 [2024-09-28 01:27:35.105389] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:14:39.939 01:27:35 ftl.ftl_fio_basic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:39.939 01:27:35 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # return 0 00:14:39.939 01:27:35 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:14:39.939 01:27:35 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:14:39.939 01:27:35 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:14:39.939 01:27:35 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:14:39.939 01:27:35 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:14:39.939 01:27:35 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:14:40.200 01:27:35 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:14:40.200 01:27:35 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:14:40.200 01:27:35 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:14:40.200 01:27:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:14:40.200 01:27:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:14:40.200 01:27:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:14:40.200 01:27:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:14:40.200 01:27:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:14:40.461 01:27:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:14:40.461 { 00:14:40.461 "name": "nvme0n1", 00:14:40.461 "aliases": [ 00:14:40.461 "ea16799c-7684-4e0d-a763-fafe864a96fa" 00:14:40.461 ], 00:14:40.461 "product_name": "NVMe disk", 00:14:40.461 "block_size": 4096, 00:14:40.461 "num_blocks": 1310720, 00:14:40.461 "uuid": "ea16799c-7684-4e0d-a763-fafe864a96fa", 00:14:40.461 "numa_id": -1, 00:14:40.461 "assigned_rate_limits": { 00:14:40.461 "rw_ios_per_sec": 0, 00:14:40.461 "rw_mbytes_per_sec": 0, 00:14:40.461 "r_mbytes_per_sec": 0, 00:14:40.461 "w_mbytes_per_sec": 0 00:14:40.461 }, 00:14:40.461 "claimed": false, 00:14:40.461 "zoned": false, 00:14:40.461 "supported_io_types": { 00:14:40.461 "read": true, 00:14:40.461 "write": true, 00:14:40.461 "unmap": true, 00:14:40.461 "flush": true, 00:14:40.461 "reset": true, 00:14:40.461 "nvme_admin": true, 00:14:40.461 "nvme_io": true, 00:14:40.461 "nvme_io_md": 
false, 00:14:40.461 "write_zeroes": true, 00:14:40.461 "zcopy": false, 00:14:40.461 "get_zone_info": false, 00:14:40.461 "zone_management": false, 00:14:40.461 "zone_append": false, 00:14:40.461 "compare": true, 00:14:40.461 "compare_and_write": false, 00:14:40.461 "abort": true, 00:14:40.461 "seek_hole": false, 00:14:40.461 "seek_data": false, 00:14:40.461 "copy": true, 00:14:40.461 "nvme_iov_md": false 00:14:40.461 }, 00:14:40.461 "driver_specific": { 00:14:40.461 "nvme": [ 00:14:40.461 { 00:14:40.461 "pci_address": "0000:00:11.0", 00:14:40.461 "trid": { 00:14:40.461 "trtype": "PCIe", 00:14:40.461 "traddr": "0000:00:11.0" 00:14:40.461 }, 00:14:40.461 "ctrlr_data": { 00:14:40.461 "cntlid": 0, 00:14:40.461 "vendor_id": "0x1b36", 00:14:40.461 "model_number": "QEMU NVMe Ctrl", 00:14:40.461 "serial_number": "12341", 00:14:40.461 "firmware_revision": "8.0.0", 00:14:40.461 "subnqn": "nqn.2019-08.org.qemu:12341", 00:14:40.461 "oacs": { 00:14:40.461 "security": 0, 00:14:40.461 "format": 1, 00:14:40.461 "firmware": 0, 00:14:40.461 "ns_manage": 1 00:14:40.461 }, 00:14:40.461 "multi_ctrlr": false, 00:14:40.461 "ana_reporting": false 00:14:40.461 }, 00:14:40.461 "vs": { 00:14:40.461 "nvme_version": "1.4" 00:14:40.461 }, 00:14:40.461 "ns_data": { 00:14:40.461 "id": 1, 00:14:40.461 "can_share": false 00:14:40.461 } 00:14:40.461 } 00:14:40.461 ], 00:14:40.461 "mp_policy": "active_passive" 00:14:40.461 } 00:14:40.461 } 00:14:40.461 ]' 00:14:40.461 01:27:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:14:40.461 01:27:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:14:40.461 01:27:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:14:40.461 01:27:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=1310720 00:14:40.461 01:27:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:14:40.461 01:27:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 5120 00:14:40.461 01:27:36 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:14:40.461 01:27:36 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:14:40.461 01:27:36 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:14:40.461 01:27:36 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:14:40.461 01:27:36 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:14:40.723 01:27:36 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:14:40.723 01:27:36 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:14:40.723 01:27:36 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=1cd41022-1bdb-4932-8573-b1c2004b2978 00:14:40.723 01:27:36 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 1cd41022-1bdb-4932-8573-b1c2004b2978 00:14:40.984 01:27:36 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=e06ee12d-23ba-4c58-9cb8-5b663d5c5873 00:14:40.984 01:27:36 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 e06ee12d-23ba-4c58-9cb8-5b663d5c5873 00:14:40.984 01:27:36 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:14:40.984 01:27:36 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:14:40.984 01:27:36 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=e06ee12d-23ba-4c58-9cb8-5b663d5c5873 00:14:40.984 01:27:36 
ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:14:40.984 01:27:36 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size e06ee12d-23ba-4c58-9cb8-5b663d5c5873 00:14:40.984 01:27:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=e06ee12d-23ba-4c58-9cb8-5b663d5c5873 00:14:40.984 01:27:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:14:40.984 01:27:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:14:40.984 01:27:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:14:40.984 01:27:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e06ee12d-23ba-4c58-9cb8-5b663d5c5873 00:14:41.245 01:27:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:14:41.245 { 00:14:41.245 "name": "e06ee12d-23ba-4c58-9cb8-5b663d5c5873", 00:14:41.245 "aliases": [ 00:14:41.245 "lvs/nvme0n1p0" 00:14:41.245 ], 00:14:41.245 "product_name": "Logical Volume", 00:14:41.245 "block_size": 4096, 00:14:41.245 "num_blocks": 26476544, 00:14:41.245 "uuid": "e06ee12d-23ba-4c58-9cb8-5b663d5c5873", 00:14:41.245 "assigned_rate_limits": { 00:14:41.245 "rw_ios_per_sec": 0, 00:14:41.245 "rw_mbytes_per_sec": 0, 00:14:41.245 "r_mbytes_per_sec": 0, 00:14:41.245 "w_mbytes_per_sec": 0 00:14:41.245 }, 00:14:41.245 "claimed": false, 00:14:41.245 "zoned": false, 00:14:41.245 "supported_io_types": { 00:14:41.245 "read": true, 00:14:41.245 "write": true, 00:14:41.245 "unmap": true, 00:14:41.245 "flush": false, 00:14:41.245 "reset": true, 00:14:41.245 "nvme_admin": false, 00:14:41.245 "nvme_io": false, 00:14:41.245 "nvme_io_md": false, 00:14:41.245 "write_zeroes": true, 00:14:41.245 "zcopy": false, 00:14:41.245 "get_zone_info": false, 00:14:41.245 "zone_management": false, 00:14:41.245 "zone_append": false, 00:14:41.245 "compare": false, 00:14:41.245 "compare_and_write": false, 00:14:41.246 "abort": false, 00:14:41.246 "seek_hole": true, 00:14:41.246 "seek_data": true, 00:14:41.246 "copy": false, 00:14:41.246 "nvme_iov_md": false 00:14:41.246 }, 00:14:41.246 "driver_specific": { 00:14:41.246 "lvol": { 00:14:41.246 "lvol_store_uuid": "1cd41022-1bdb-4932-8573-b1c2004b2978", 00:14:41.246 "base_bdev": "nvme0n1", 00:14:41.246 "thin_provision": true, 00:14:41.246 "num_allocated_clusters": 0, 00:14:41.246 "snapshot": false, 00:14:41.246 "clone": false, 00:14:41.246 "esnap_clone": false 00:14:41.246 } 00:14:41.246 } 00:14:41.246 } 00:14:41.246 ]' 00:14:41.246 01:27:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:14:41.246 01:27:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:14:41.246 01:27:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:14:41.246 01:27:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:14:41.246 01:27:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:14:41.246 01:27:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:14:41.246 01:27:37 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:14:41.246 01:27:37 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:14:41.246 01:27:37 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:14:41.505 01:27:37 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:14:41.505 01:27:37 ftl.ftl_fio_basic -- 
ftl/common.sh@47 -- # [[ -z '' ]] 00:14:41.505 01:27:37 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size e06ee12d-23ba-4c58-9cb8-5b663d5c5873 00:14:41.505 01:27:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=e06ee12d-23ba-4c58-9cb8-5b663d5c5873 00:14:41.505 01:27:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:14:41.506 01:27:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:14:41.506 01:27:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:14:41.506 01:27:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e06ee12d-23ba-4c58-9cb8-5b663d5c5873 00:14:41.764 01:27:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:14:41.764 { 00:14:41.764 "name": "e06ee12d-23ba-4c58-9cb8-5b663d5c5873", 00:14:41.764 "aliases": [ 00:14:41.764 "lvs/nvme0n1p0" 00:14:41.764 ], 00:14:41.764 "product_name": "Logical Volume", 00:14:41.764 "block_size": 4096, 00:14:41.764 "num_blocks": 26476544, 00:14:41.764 "uuid": "e06ee12d-23ba-4c58-9cb8-5b663d5c5873", 00:14:41.764 "assigned_rate_limits": { 00:14:41.764 "rw_ios_per_sec": 0, 00:14:41.764 "rw_mbytes_per_sec": 0, 00:14:41.764 "r_mbytes_per_sec": 0, 00:14:41.764 "w_mbytes_per_sec": 0 00:14:41.764 }, 00:14:41.764 "claimed": false, 00:14:41.764 "zoned": false, 00:14:41.764 "supported_io_types": { 00:14:41.764 "read": true, 00:14:41.764 "write": true, 00:14:41.764 "unmap": true, 00:14:41.764 "flush": false, 00:14:41.764 "reset": true, 00:14:41.764 "nvme_admin": false, 00:14:41.764 "nvme_io": false, 00:14:41.764 "nvme_io_md": false, 00:14:41.764 "write_zeroes": true, 00:14:41.764 "zcopy": false, 00:14:41.764 "get_zone_info": false, 00:14:41.764 "zone_management": false, 00:14:41.764 "zone_append": false, 00:14:41.764 "compare": false, 00:14:41.764 "compare_and_write": false, 00:14:41.764 "abort": false, 00:14:41.764 "seek_hole": true, 00:14:41.764 "seek_data": true, 00:14:41.764 "copy": false, 00:14:41.764 "nvme_iov_md": false 00:14:41.764 }, 00:14:41.764 "driver_specific": { 00:14:41.764 "lvol": { 00:14:41.764 "lvol_store_uuid": "1cd41022-1bdb-4932-8573-b1c2004b2978", 00:14:41.764 "base_bdev": "nvme0n1", 00:14:41.764 "thin_provision": true, 00:14:41.764 "num_allocated_clusters": 0, 00:14:41.764 "snapshot": false, 00:14:41.764 "clone": false, 00:14:41.764 "esnap_clone": false 00:14:41.764 } 00:14:41.764 } 00:14:41.764 } 00:14:41.764 ]' 00:14:41.764 01:27:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:14:41.764 01:27:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:14:41.764 01:27:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:14:41.764 01:27:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:14:41.764 01:27:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:14:41.764 01:27:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:14:41.764 01:27:37 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:14:41.764 01:27:37 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:14:42.022 01:27:37 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:14:42.022 01:27:37 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:14:42.022 01:27:37 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:14:42.022 
/home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:14:42.022 01:27:37 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size e06ee12d-23ba-4c58-9cb8-5b663d5c5873 00:14:42.022 01:27:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=e06ee12d-23ba-4c58-9cb8-5b663d5c5873 00:14:42.022 01:27:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:14:42.022 01:27:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:14:42.022 01:27:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:14:42.022 01:27:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e06ee12d-23ba-4c58-9cb8-5b663d5c5873 00:14:42.281 01:27:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:14:42.281 { 00:14:42.281 "name": "e06ee12d-23ba-4c58-9cb8-5b663d5c5873", 00:14:42.281 "aliases": [ 00:14:42.281 "lvs/nvme0n1p0" 00:14:42.281 ], 00:14:42.281 "product_name": "Logical Volume", 00:14:42.281 "block_size": 4096, 00:14:42.281 "num_blocks": 26476544, 00:14:42.281 "uuid": "e06ee12d-23ba-4c58-9cb8-5b663d5c5873", 00:14:42.281 "assigned_rate_limits": { 00:14:42.281 "rw_ios_per_sec": 0, 00:14:42.281 "rw_mbytes_per_sec": 0, 00:14:42.281 "r_mbytes_per_sec": 0, 00:14:42.281 "w_mbytes_per_sec": 0 00:14:42.281 }, 00:14:42.281 "claimed": false, 00:14:42.281 "zoned": false, 00:14:42.281 "supported_io_types": { 00:14:42.281 "read": true, 00:14:42.281 "write": true, 00:14:42.281 "unmap": true, 00:14:42.281 "flush": false, 00:14:42.281 "reset": true, 00:14:42.281 "nvme_admin": false, 00:14:42.281 "nvme_io": false, 00:14:42.281 "nvme_io_md": false, 00:14:42.281 "write_zeroes": true, 00:14:42.281 "zcopy": false, 00:14:42.281 "get_zone_info": false, 00:14:42.281 "zone_management": false, 00:14:42.281 "zone_append": false, 00:14:42.281 "compare": false, 00:14:42.281 "compare_and_write": false, 00:14:42.281 "abort": false, 00:14:42.281 "seek_hole": true, 00:14:42.281 "seek_data": true, 00:14:42.281 "copy": false, 00:14:42.281 "nvme_iov_md": false 00:14:42.281 }, 00:14:42.281 "driver_specific": { 00:14:42.281 "lvol": { 00:14:42.281 "lvol_store_uuid": "1cd41022-1bdb-4932-8573-b1c2004b2978", 00:14:42.281 "base_bdev": "nvme0n1", 00:14:42.281 "thin_provision": true, 00:14:42.281 "num_allocated_clusters": 0, 00:14:42.281 "snapshot": false, 00:14:42.281 "clone": false, 00:14:42.281 "esnap_clone": false 00:14:42.281 } 00:14:42.281 } 00:14:42.281 } 00:14:42.281 ]' 00:14:42.281 01:27:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:14:42.281 01:27:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:14:42.281 01:27:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:14:42.281 01:27:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:14:42.281 01:27:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:14:42.281 01:27:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:14:42.281 01:27:38 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:14:42.281 01:27:38 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:14:42.281 01:27:38 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d e06ee12d-23ba-4c58-9cb8-5b663d5c5873 -c nvc0n1p0 --l2p_dram_limit 60 00:14:42.540 [2024-09-28 01:27:38.291680] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:42.540 [2024-09-28 01:27:38.291717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:14:42.540 [2024-09-28 01:27:38.291731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:14:42.540 [2024-09-28 01:27:38.291738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:42.540 [2024-09-28 01:27:38.291785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:42.540 [2024-09-28 01:27:38.291793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:14:42.540 [2024-09-28 01:27:38.291801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:14:42.540 [2024-09-28 01:27:38.291807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:42.540 [2024-09-28 01:27:38.291833] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:14:42.540 [2024-09-28 01:27:38.292409] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:14:42.540 [2024-09-28 01:27:38.292430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:42.540 [2024-09-28 01:27:38.292437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:14:42.540 [2024-09-28 01:27:38.292445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.606 ms 00:14:42.540 [2024-09-28 01:27:38.292451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:42.540 [2024-09-28 01:27:38.292481] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 953f3a8b-1331-4313-91a0-41b5ca33b659 00:14:42.540 [2024-09-28 01:27:38.293458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:42.540 [2024-09-28 01:27:38.293484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:14:42.540 [2024-09-28 01:27:38.293492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:14:42.540 [2024-09-28 01:27:38.293500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:42.540 [2024-09-28 01:27:38.298307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:42.540 [2024-09-28 01:27:38.298396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:14:42.540 [2024-09-28 01:27:38.298443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.737 ms 00:14:42.540 [2024-09-28 01:27:38.298464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:42.540 [2024-09-28 01:27:38.298557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:42.540 [2024-09-28 01:27:38.298876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:14:42.540 [2024-09-28 01:27:38.298946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:14:42.540 [2024-09-28 01:27:38.299005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:42.540 [2024-09-28 01:27:38.299074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:42.540 [2024-09-28 01:27:38.299126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:14:42.540 [2024-09-28 01:27:38.299201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:14:42.540 [2024-09-28 01:27:38.299226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:14:42.540 [2024-09-28 01:27:38.299285] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:14:42.540 [2024-09-28 01:27:38.302224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:42.540 [2024-09-28 01:27:38.302305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:14:42.540 [2024-09-28 01:27:38.302366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.944 ms 00:14:42.540 [2024-09-28 01:27:38.302384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:42.540 [2024-09-28 01:27:38.302425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:42.540 [2024-09-28 01:27:38.302521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:14:42.540 [2024-09-28 01:27:38.302541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:14:42.540 [2024-09-28 01:27:38.302557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:42.540 [2024-09-28 01:27:38.302593] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:14:42.540 [2024-09-28 01:27:38.302723] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:14:42.540 [2024-09-28 01:27:38.302759] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:14:42.540 [2024-09-28 01:27:38.302788] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:14:42.540 [2024-09-28 01:27:38.302864] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:14:42.540 [2024-09-28 01:27:38.302927] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:14:42.540 [2024-09-28 01:27:38.302956] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:14:42.540 [2024-09-28 01:27:38.302972] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:14:42.540 [2024-09-28 01:27:38.302988] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:14:42.540 [2024-09-28 01:27:38.303038] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:14:42.540 [2024-09-28 01:27:38.303058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:42.540 [2024-09-28 01:27:38.303074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:14:42.540 [2024-09-28 01:27:38.303092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.466 ms 00:14:42.540 [2024-09-28 01:27:38.303111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:42.540 [2024-09-28 01:27:38.303212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:42.540 [2024-09-28 01:27:38.303251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:14:42.540 [2024-09-28 01:27:38.303270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:14:42.541 [2024-09-28 01:27:38.303285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:42.541 [2024-09-28 01:27:38.303377] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:14:42.541 [2024-09-28 01:27:38.303397] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:14:42.541 
[2024-09-28 01:27:38.303452] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:14:42.541 [2024-09-28 01:27:38.303468] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:14:42.541 [2024-09-28 01:27:38.303485] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:14:42.541 [2024-09-28 01:27:38.303499] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:14:42.541 [2024-09-28 01:27:38.303554] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:14:42.541 [2024-09-28 01:27:38.303572] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:14:42.541 [2024-09-28 01:27:38.303588] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:14:42.541 [2024-09-28 01:27:38.303602] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:14:42.541 [2024-09-28 01:27:38.303650] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:14:42.541 [2024-09-28 01:27:38.303667] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:14:42.541 [2024-09-28 01:27:38.303683] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:14:42.541 [2024-09-28 01:27:38.303697] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:14:42.541 [2024-09-28 01:27:38.303713] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:14:42.541 [2024-09-28 01:27:38.303756] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:14:42.541 [2024-09-28 01:27:38.303777] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:14:42.541 [2024-09-28 01:27:38.303794] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:14:42.541 [2024-09-28 01:27:38.303812] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:14:42.541 [2024-09-28 01:27:38.303891] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:14:42.541 [2024-09-28 01:27:38.303913] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:14:42.541 [2024-09-28 01:27:38.303928] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:14:42.541 [2024-09-28 01:27:38.303943] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:14:42.541 [2024-09-28 01:27:38.303957] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:14:42.541 [2024-09-28 01:27:38.303973] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:14:42.541 [2024-09-28 01:27:38.303987] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:14:42.541 [2024-09-28 01:27:38.304002] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:14:42.541 [2024-09-28 01:27:38.304016] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:14:42.541 [2024-09-28 01:27:38.304063] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:14:42.541 [2024-09-28 01:27:38.304083] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:14:42.541 [2024-09-28 01:27:38.304099] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:14:42.541 [2024-09-28 01:27:38.304113] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:14:42.541 [2024-09-28 01:27:38.304130] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:14:42.541 [2024-09-28 01:27:38.304172] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.25 MiB 00:14:42.541 [2024-09-28 01:27:38.304188] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:14:42.541 [2024-09-28 01:27:38.304297] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:14:42.541 [2024-09-28 01:27:38.304342] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:14:42.541 [2024-09-28 01:27:38.304358] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:14:42.541 [2024-09-28 01:27:38.304375] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:14:42.541 [2024-09-28 01:27:38.304418] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:14:42.541 [2024-09-28 01:27:38.304473] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:14:42.541 [2024-09-28 01:27:38.304494] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:14:42.541 [2024-09-28 01:27:38.304532] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:14:42.541 [2024-09-28 01:27:38.304552] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:14:42.541 [2024-09-28 01:27:38.304600] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:14:42.541 [2024-09-28 01:27:38.304618] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:14:42.541 [2024-09-28 01:27:38.304649] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:14:42.541 [2024-09-28 01:27:38.304667] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:14:42.541 [2024-09-28 01:27:38.304684] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:14:42.541 [2024-09-28 01:27:38.304698] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:14:42.541 [2024-09-28 01:27:38.304744] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:14:42.541 [2024-09-28 01:27:38.304761] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:14:42.541 [2024-09-28 01:27:38.304807] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:14:42.541 [2024-09-28 01:27:38.304825] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:14:42.541 [2024-09-28 01:27:38.304855] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:14:42.541 [2024-09-28 01:27:38.304907] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:14:42.541 [2024-09-28 01:27:38.304942] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:14:42.541 [2024-09-28 01:27:38.304968] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:14:42.541 [2024-09-28 01:27:38.304991] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:14:42.541 [2024-09-28 01:27:38.305016] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:14:42.541 [2024-09-28 01:27:38.305069] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:14:42.541 [2024-09-28 
01:27:38.305302] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:14:42.541 [2024-09-28 01:27:38.305333] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:14:42.541 [2024-09-28 01:27:38.305383] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:14:42.541 [2024-09-28 01:27:38.305414] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:14:42.541 [2024-09-28 01:27:38.305459] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:14:42.541 [2024-09-28 01:27:38.305508] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:14:42.541 [2024-09-28 01:27:38.305535] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:14:42.541 [2024-09-28 01:27:38.305581] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:14:42.541 [2024-09-28 01:27:38.305628] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:14:42.541 [2024-09-28 01:27:38.305658] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:14:42.541 [2024-09-28 01:27:38.305703] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:14:42.541 [2024-09-28 01:27:38.305728] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:14:42.541 [2024-09-28 01:27:38.305775] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:14:42.541 [2024-09-28 01:27:38.305822] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:14:42.541 [2024-09-28 01:27:38.305849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:42.541 [2024-09-28 01:27:38.305866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:14:42.541 [2024-09-28 01:27:38.305881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.527 ms 00:14:42.542 [2024-09-28 01:27:38.305944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:42.542 [2024-09-28 01:27:38.306009] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
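Note: the layout dump above is internally consistent and worth a quick sanity check: the base volume advertises 26476544 blocks of 4096 bytes, i.e. 26476544 * 4096 / 2^20 = 103424.00 MiB, exactly the reported base device capacity; the L2P table holds 20971520 entries of 4 bytes each, i.e. 20971520 * 4 / 2^20 = 80.00 MiB, matching the 80.00 MiB l2p region; and because the bdev was created with --l2p_dram_limit 60, ftl_l2p_cache later caps the resident portion at 59 (of 60) MiB, so only a cached slice of the full 80 MiB mapping is pinned in DRAM at any time.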
00:14:42.542 [2024-09-28 01:27:38.306072] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:14:45.110 [2024-09-28 01:27:40.393737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:45.110 [2024-09-28 01:27:40.393947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:14:45.110 [2024-09-28 01:27:40.394018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2087.718 ms 00:14:45.110 [2024-09-28 01:27:40.394046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:45.110 [2024-09-28 01:27:40.431374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:45.110 [2024-09-28 01:27:40.431606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:14:45.110 [2024-09-28 01:27:40.431914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.099 ms 00:14:45.110 [2024-09-28 01:27:40.432218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:45.110 [2024-09-28 01:27:40.432593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:45.110 [2024-09-28 01:27:40.432651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:14:45.110 [2024-09-28 01:27:40.432686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.174 ms 00:14:45.110 [2024-09-28 01:27:40.432723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:45.110 [2024-09-28 01:27:40.472812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:45.110 [2024-09-28 01:27:40.472855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:14:45.110 [2024-09-28 01:27:40.472869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.918 ms 00:14:45.110 [2024-09-28 01:27:40.472883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:45.110 [2024-09-28 01:27:40.472926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:45.110 [2024-09-28 01:27:40.472939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:14:45.110 [2024-09-28 01:27:40.472950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:14:45.110 [2024-09-28 01:27:40.472959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:45.110 [2024-09-28 01:27:40.473334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:45.110 [2024-09-28 01:27:40.473359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:14:45.110 [2024-09-28 01:27:40.473369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.306 ms 00:14:45.110 [2024-09-28 01:27:40.473378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:45.110 [2024-09-28 01:27:40.473502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:45.110 [2024-09-28 01:27:40.473520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:14:45.110 [2024-09-28 01:27:40.473529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:14:45.110 [2024-09-28 01:27:40.473540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:45.110 [2024-09-28 01:27:40.488728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:45.110 [2024-09-28 01:27:40.488885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:14:45.110 [2024-09-28 
01:27:40.488969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.164 ms 00:14:45.110 [2024-09-28 01:27:40.489007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:45.110 [2024-09-28 01:27:40.501300] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:14:45.110 [2024-09-28 01:27:40.517297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:45.110 [2024-09-28 01:27:40.517477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:14:45.110 [2024-09-28 01:27:40.517581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.172 ms 00:14:45.110 [2024-09-28 01:27:40.517623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:45.110 [2024-09-28 01:27:40.570556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:45.110 [2024-09-28 01:27:40.570759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:14:45.110 [2024-09-28 01:27:40.570869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 52.851 ms 00:14:45.110 [2024-09-28 01:27:40.570915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:45.110 [2024-09-28 01:27:40.571286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:45.110 [2024-09-28 01:27:40.571393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:14:45.110 [2024-09-28 01:27:40.571467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.216 ms 00:14:45.110 [2024-09-28 01:27:40.571506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:45.110 [2024-09-28 01:27:40.595437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:45.110 [2024-09-28 01:27:40.595559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:14:45.110 [2024-09-28 01:27:40.595620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.833 ms 00:14:45.110 [2024-09-28 01:27:40.595648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:45.110 [2024-09-28 01:27:40.619043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:45.110 [2024-09-28 01:27:40.619174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:14:45.110 [2024-09-28 01:27:40.619265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.094 ms 00:14:45.110 [2024-09-28 01:27:40.619292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:45.110 [2024-09-28 01:27:40.619873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:45.110 [2024-09-28 01:27:40.619970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:14:45.110 [2024-09-28 01:27:40.620038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.525 ms 00:14:45.110 [2024-09-28 01:27:40.620065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:45.110 [2024-09-28 01:27:40.688663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:45.110 [2024-09-28 01:27:40.688823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:14:45.110 [2024-09-28 01:27:40.688894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 68.533 ms 00:14:45.110 [2024-09-28 01:27:40.688920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:45.110 [2024-09-28 
01:27:40.713266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:45.110 [2024-09-28 01:27:40.713299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:14:45.110 [2024-09-28 01:27:40.713315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.248 ms 00:14:45.110 [2024-09-28 01:27:40.713325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:45.110 [2024-09-28 01:27:40.735952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:45.110 [2024-09-28 01:27:40.735985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:14:45.110 [2024-09-28 01:27:40.735998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.582 ms 00:14:45.110 [2024-09-28 01:27:40.736006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:45.111 [2024-09-28 01:27:40.760038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:45.111 [2024-09-28 01:27:40.760216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:14:45.111 [2024-09-28 01:27:40.760237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.988 ms 00:14:45.111 [2024-09-28 01:27:40.760245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:45.111 [2024-09-28 01:27:40.760288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:45.111 [2024-09-28 01:27:40.760298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:14:45.111 [2024-09-28 01:27:40.760310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:14:45.111 [2024-09-28 01:27:40.760318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:45.111 [2024-09-28 01:27:40.760401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:45.111 [2024-09-28 01:27:40.760414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:14:45.111 [2024-09-28 01:27:40.760425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:14:45.111 [2024-09-28 01:27:40.760434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:45.111 [2024-09-28 01:27:40.761330] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2469.204 ms, result 0 00:14:45.111 { 00:14:45.111 "name": "ftl0", 00:14:45.111 "uuid": "953f3a8b-1331-4313-91a0-41b5ca33b659" 00:14:45.111 } 00:14:45.111 01:27:40 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:14:45.111 01:27:40 ftl.ftl_fio_basic -- common/autotest_common.sh@899 -- # local bdev_name=ftl0 00:14:45.111 01:27:40 ftl.ftl_fio_basic -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:45.111 01:27:40 ftl.ftl_fio_basic -- common/autotest_common.sh@901 -- # local i 00:14:45.111 01:27:40 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:45.111 01:27:40 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:45.111 01:27:40 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:45.111 01:27:40 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:14:45.368 [ 00:14:45.368 { 00:14:45.368 "name": "ftl0", 00:14:45.368 "aliases": [ 00:14:45.369 "953f3a8b-1331-4313-91a0-41b5ca33b659" 00:14:45.369 ], 00:14:45.369 "product_name": "FTL 
disk", 00:14:45.369 "block_size": 4096, 00:14:45.369 "num_blocks": 20971520, 00:14:45.369 "uuid": "953f3a8b-1331-4313-91a0-41b5ca33b659", 00:14:45.369 "assigned_rate_limits": { 00:14:45.369 "rw_ios_per_sec": 0, 00:14:45.369 "rw_mbytes_per_sec": 0, 00:14:45.369 "r_mbytes_per_sec": 0, 00:14:45.369 "w_mbytes_per_sec": 0 00:14:45.369 }, 00:14:45.369 "claimed": false, 00:14:45.369 "zoned": false, 00:14:45.369 "supported_io_types": { 00:14:45.369 "read": true, 00:14:45.369 "write": true, 00:14:45.369 "unmap": true, 00:14:45.369 "flush": true, 00:14:45.369 "reset": false, 00:14:45.369 "nvme_admin": false, 00:14:45.369 "nvme_io": false, 00:14:45.369 "nvme_io_md": false, 00:14:45.369 "write_zeroes": true, 00:14:45.369 "zcopy": false, 00:14:45.369 "get_zone_info": false, 00:14:45.369 "zone_management": false, 00:14:45.369 "zone_append": false, 00:14:45.369 "compare": false, 00:14:45.369 "compare_and_write": false, 00:14:45.369 "abort": false, 00:14:45.369 "seek_hole": false, 00:14:45.369 "seek_data": false, 00:14:45.369 "copy": false, 00:14:45.369 "nvme_iov_md": false 00:14:45.369 }, 00:14:45.369 "driver_specific": { 00:14:45.369 "ftl": { 00:14:45.369 "base_bdev": "e06ee12d-23ba-4c58-9cb8-5b663d5c5873", 00:14:45.369 "cache": "nvc0n1p0" 00:14:45.369 } 00:14:45.369 } 00:14:45.369 } 00:14:45.369 ] 00:14:45.369 01:27:41 ftl.ftl_fio_basic -- common/autotest_common.sh@907 -- # return 0 00:14:45.369 01:27:41 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:14:45.369 01:27:41 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:14:45.627 01:27:41 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:14:45.627 01:27:41 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:14:45.886 [2024-09-28 01:27:41.582089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:45.886 [2024-09-28 01:27:41.582130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:14:45.886 [2024-09-28 01:27:41.582143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:14:45.886 [2024-09-28 01:27:41.582152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:45.886 [2024-09-28 01:27:41.582184] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:14:45.886 [2024-09-28 01:27:41.584754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:45.886 [2024-09-28 01:27:41.584788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:14:45.886 [2024-09-28 01:27:41.584801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.543 ms 00:14:45.886 [2024-09-28 01:27:41.584809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:45.886 [2024-09-28 01:27:41.585216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:45.886 [2024-09-28 01:27:41.585229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:14:45.886 [2024-09-28 01:27:41.585240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.375 ms 00:14:45.886 [2024-09-28 01:27:41.585247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:45.886 [2024-09-28 01:27:41.588590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:45.886 [2024-09-28 01:27:41.588671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:14:45.886 
[2024-09-28 01:27:41.588728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.317 ms 00:14:45.886 [2024-09-28 01:27:41.588755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:45.886 [2024-09-28 01:27:41.594928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:45.886 [2024-09-28 01:27:41.595025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:14:45.886 [2024-09-28 01:27:41.595080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.119 ms 00:14:45.886 [2024-09-28 01:27:41.595106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:45.886 [2024-09-28 01:27:41.618110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:45.886 [2024-09-28 01:27:41.618233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:14:45.886 [2024-09-28 01:27:41.618292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.895 ms 00:14:45.886 [2024-09-28 01:27:41.618318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:45.886 [2024-09-28 01:27:41.632861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:45.886 [2024-09-28 01:27:41.632964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:14:45.886 [2024-09-28 01:27:41.633019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.490 ms 00:14:45.886 [2024-09-28 01:27:41.633046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:45.886 [2024-09-28 01:27:41.633239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:45.886 [2024-09-28 01:27:41.633325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:14:45.886 [2024-09-28 01:27:41.633351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.139 ms 00:14:45.886 [2024-09-28 01:27:41.633409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:45.886 [2024-09-28 01:27:41.656277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:45.886 [2024-09-28 01:27:41.656376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:14:45.886 [2024-09-28 01:27:41.656429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.825 ms 00:14:45.886 [2024-09-28 01:27:41.656455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:45.886 [2024-09-28 01:27:41.678845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:45.886 [2024-09-28 01:27:41.678942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:14:45.886 [2024-09-28 01:27:41.678995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.344 ms 00:14:45.886 [2024-09-28 01:27:41.679021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:45.886 [2024-09-28 01:27:41.701016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:45.886 [2024-09-28 01:27:41.701119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:14:45.886 [2024-09-28 01:27:41.701174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.945 ms 00:14:45.886 [2024-09-28 01:27:41.701218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:45.886 [2024-09-28 01:27:41.723683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:45.886 [2024-09-28 01:27:41.723781] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:14:45.886 [2024-09-28 01:27:41.723835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.370 ms 00:14:45.886 [2024-09-28 01:27:41.723861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:45.886 [2024-09-28 01:27:41.723911] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:14:45.886 [2024-09-28 01:27:41.723944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:14:45.886 [2024-09-28 01:27:41.723981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:14:45.886 [2024-09-28 01:27:41.724014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:14:45.886 [2024-09-28 01:27:41.724048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:14:45.886 [2024-09-28 01:27:41.724131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:14:45.886 [2024-09-28 01:27:41.724169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:14:45.886 [2024-09-28 01:27:41.724219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:14:45.886 [2024-09-28 01:27:41.724308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:14:45.886 [2024-09-28 01:27:41.724347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:14:45.886 [2024-09-28 01:27:41.724382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:14:45.886 [2024-09-28 01:27:41.724415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:14:45.886 [2024-09-28 01:27:41.724530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:14:45.886 [2024-09-28 01:27:41.724562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:14:45.886 [2024-09-28 01:27:41.724596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:14:45.886 [2024-09-28 01:27:41.724664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:14:45.886 [2024-09-28 01:27:41.724729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:14:45.886 [2024-09-28 01:27:41.724765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:14:45.886 [2024-09-28 01:27:41.724833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:14:45.886 [2024-09-28 01:27:41.724868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:14:45.886 [2024-09-28 01:27:41.724904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:14:45.886 [2024-09-28 01:27:41.725006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:14:45.886 [2024-09-28 01:27:41.725043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:14:45.886 
[2024-09-28 01:27:41.725076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:14:45.886 [2024-09-28 01:27:41.725112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:14:45.886 [2024-09-28 01:27:41.725180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:14:45.886 [2024-09-28 01:27:41.725229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:14:45.887 [2024-09-28 01:27:41.725263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:14:45.887 [2024-09-28 01:27:41.725297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:14:45.887 [2024-09-28 01:27:41.725366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:14:45.887 [2024-09-28 01:27:41.725402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:14:45.887 [2024-09-28 01:27:41.725434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:14:45.887 [2024-09-28 01:27:41.725498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:14:45.887 [2024-09-28 01:27:41.725533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:14:45.887 [2024-09-28 01:27:41.725568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:14:45.887 [2024-09-28 01:27:41.725600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:14:45.887 [2024-09-28 01:27:41.725693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:14:45.887 [2024-09-28 01:27:41.725753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:14:45.887 [2024-09-28 01:27:41.725789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:14:45.887 [2024-09-28 01:27:41.725821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:14:45.887 [2024-09-28 01:27:41.725930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:14:45.887 [2024-09-28 01:27:41.725964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:14:45.887 [2024-09-28 01:27:41.725998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:14:45.887 [2024-09-28 01:27:41.726031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:14:45.887 [2024-09-28 01:27:41.726133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:14:45.887 [2024-09-28 01:27:41.726167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:14:45.887 [2024-09-28 01:27:41.726212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:14:45.887 [2024-09-28 01:27:41.726247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:14:45.887 [2024-09-28 01:27:41.726315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:14:45.887 [2024-09-28 01:27:41.726349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:14:45.887 [2024-09-28 01:27:41.726383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:14:45.887 [2024-09-28 01:27:41.726415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:14:45.887 [2024-09-28 01:27:41.726495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:14:45.887 [2024-09-28 01:27:41.726530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:14:45.887 [2024-09-28 01:27:41.726565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:14:45.887 [2024-09-28 01:27:41.726628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:14:45.887 [2024-09-28 01:27:41.726665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:14:45.887 [2024-09-28 01:27:41.726698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:14:45.887 [2024-09-28 01:27:41.726778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:14:45.887 [2024-09-28 01:27:41.726811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:14:45.887 [2024-09-28 01:27:41.726845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:14:45.887 [2024-09-28 01:27:41.726907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:14:45.887 [2024-09-28 01:27:41.726974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:14:45.887 [2024-09-28 01:27:41.727033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:14:45.887 [2024-09-28 01:27:41.727069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:14:45.887 [2024-09-28 01:27:41.727129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:14:45.887 [2024-09-28 01:27:41.727166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:14:45.887 [2024-09-28 01:27:41.727209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:14:45.887 [2024-09-28 01:27:41.727269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:14:45.887 [2024-09-28 01:27:41.727359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:14:45.887 [2024-09-28 01:27:41.727395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:14:45.887 [2024-09-28 01:27:41.727405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:14:45.887 [2024-09-28 01:27:41.727418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:14:45.887 [2024-09-28 01:27:41.727425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:14:45.887 [2024-09-28 01:27:41.727435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:14:45.887 [2024-09-28 01:27:41.727442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:14:45.887 [2024-09-28 01:27:41.727451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:14:45.887 [2024-09-28 01:27:41.727459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:14:45.887 [2024-09-28 01:27:41.727467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:14:45.887 [2024-09-28 01:27:41.727475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:14:45.887 [2024-09-28 01:27:41.727484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:14:45.887 [2024-09-28 01:27:41.727491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:14:45.887 [2024-09-28 01:27:41.727500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:14:45.887 [2024-09-28 01:27:41.727507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:14:45.887 [2024-09-28 01:27:41.727516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:14:45.887 [2024-09-28 01:27:41.727524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:14:45.887 [2024-09-28 01:27:41.727533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:14:45.887 [2024-09-28 01:27:41.727541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:14:45.887 [2024-09-28 01:27:41.727561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:14:45.887 [2024-09-28 01:27:41.727568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:14:45.887 [2024-09-28 01:27:41.727577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:14:45.887 [2024-09-28 01:27:41.727585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:14:45.887 [2024-09-28 01:27:41.727594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:14:45.887 [2024-09-28 01:27:41.727601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:14:45.887 [2024-09-28 01:27:41.727610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:14:45.887 [2024-09-28 01:27:41.727618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:14:45.887 [2024-09-28 01:27:41.727627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:14:45.887 [2024-09-28 01:27:41.727634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:14:45.887 [2024-09-28 01:27:41.727645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:14:45.887 [2024-09-28 01:27:41.727652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:14:45.887 [2024-09-28 01:27:41.727662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:14:45.887 [2024-09-28 01:27:41.727678] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:14:45.887 [2024-09-28 01:27:41.727688] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 953f3a8b-1331-4313-91a0-41b5ca33b659 00:14:45.887 [2024-09-28 01:27:41.727696] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:14:45.887 [2024-09-28 01:27:41.727706] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:14:45.887 [2024-09-28 01:27:41.727713] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:14:45.887 [2024-09-28 01:27:41.727722] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:14:45.887 [2024-09-28 01:27:41.727730] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:14:45.887 [2024-09-28 01:27:41.727739] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:14:45.887 [2024-09-28 01:27:41.727746] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:14:45.887 [2024-09-28 01:27:41.727754] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:14:45.887 [2024-09-28 01:27:41.727760] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:14:45.887 [2024-09-28 01:27:41.727769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:45.887 [2024-09-28 01:27:41.727776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:14:45.887 [2024-09-28 01:27:41.727786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.859 ms 00:14:45.887 [2024-09-28 01:27:41.727794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:45.887 [2024-09-28 01:27:41.740104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:45.887 [2024-09-28 01:27:41.740131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:14:45.887 [2024-09-28 01:27:41.740144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.270 ms 00:14:45.887 [2024-09-28 01:27:41.740152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:45.887 [2024-09-28 01:27:41.740538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:45.887 [2024-09-28 01:27:41.740557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:14:45.888 [2024-09-28 01:27:41.740568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.347 ms 00:14:45.888 [2024-09-28 01:27:41.740575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:45.888 [2024-09-28 01:27:41.783949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:45.888 [2024-09-28 01:27:41.784059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:14:45.888 [2024-09-28 01:27:41.784112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:45.888 [2024-09-28 01:27:41.784139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
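For reference, the Action/name/duration/status quadruples that dominate this startup/shutdown trace are easy to summarize offline. A hedged sketch, assuming the raw log is saved one-entry-per-line as build.log and gawk is available; this is illustrative post-processing, not part of the test suite:

# Hypothetical helper -- pairs each trace_step "name:" entry with the
# "duration:" entry that follows it and prints a per-step timing table.
gawk '/trace_step: \*NOTICE\*: \[FTL\]\[ftl0\] name:/     { sub(/.*name: /, ""); name = $0 }
      /trace_step: \*NOTICE\*: \[FTL\]\[ftl0\] duration:/ {
          match($0, /duration: [0-9.]+ ms/)                # e.g. "duration: 2087.718 ms"
          printf "%-32s %s\n", name, substr($0, RSTART + 10, RLENGTH - 10)
      }' build.log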
00:14:45.888 [2024-09-28 01:27:41.784218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:45.888 [2024-09-28 01:27:41.784246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:14:45.888 [2024-09-28 01:27:41.784271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:45.888 [2024-09-28 01:27:41.784294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:45.888 [2024-09-28 01:27:41.784397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:45.888 [2024-09-28 01:27:41.784479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:14:45.888 [2024-09-28 01:27:41.784509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:45.888 [2024-09-28 01:27:41.784533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:45.888 [2024-09-28 01:27:41.784574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:45.888 [2024-09-28 01:27:41.784599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:14:45.888 [2024-09-28 01:27:41.784627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:45.888 [2024-09-28 01:27:41.784686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:46.146 [2024-09-28 01:27:41.864105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:46.146 [2024-09-28 01:27:41.864254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:14:46.146 [2024-09-28 01:27:41.864309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:46.146 [2024-09-28 01:27:41.864332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:46.146 [2024-09-28 01:27:41.925749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:46.146 [2024-09-28 01:27:41.925879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:14:46.146 [2024-09-28 01:27:41.925935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:46.146 [2024-09-28 01:27:41.925961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:46.146 [2024-09-28 01:27:41.926048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:46.146 [2024-09-28 01:27:41.926076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:14:46.146 [2024-09-28 01:27:41.926101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:46.146 [2024-09-28 01:27:41.926125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:46.146 [2024-09-28 01:27:41.926227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:46.146 [2024-09-28 01:27:41.926309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:14:46.146 [2024-09-28 01:27:41.926340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:46.146 [2024-09-28 01:27:41.926363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:46.146 [2024-09-28 01:27:41.926488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:46.146 [2024-09-28 01:27:41.926546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:14:46.146 [2024-09-28 01:27:41.926572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:46.146 [2024-09-28 
01:27:41.926595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:46.146 [2024-09-28 01:27:41.926691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:46.146 [2024-09-28 01:27:41.926720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:14:46.146 [2024-09-28 01:27:41.926746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:46.146 [2024-09-28 01:27:41.926821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:46.146 [2024-09-28 01:27:41.926906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:46.146 [2024-09-28 01:27:41.926932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:14:46.146 [2024-09-28 01:27:41.926958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:46.146 [2024-09-28 01:27:41.926977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:46.146 [2024-09-28 01:27:41.927041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:46.146 [2024-09-28 01:27:41.927162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:14:46.146 [2024-09-28 01:27:41.927202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:46.146 [2024-09-28 01:27:41.927226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:46.146 [2024-09-28 01:27:41.927398] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 345.278 ms, result 0 00:14:46.146 true 00:14:46.146 01:27:41 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 72732 00:14:46.146 01:27:41 ftl.ftl_fio_basic -- common/autotest_common.sh@950 -- # '[' -z 72732 ']' 00:14:46.146 01:27:41 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # kill -0 72732 00:14:46.146 01:27:41 ftl.ftl_fio_basic -- common/autotest_common.sh@955 -- # uname 00:14:46.146 01:27:41 ftl.ftl_fio_basic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:46.146 01:27:41 ftl.ftl_fio_basic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72732 00:14:46.146 01:27:41 ftl.ftl_fio_basic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:46.146 01:27:41 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:46.146 01:27:41 ftl.ftl_fio_basic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72732' 00:14:46.146 killing process with pid 72732 00:14:46.146 01:27:41 ftl.ftl_fio_basic -- common/autotest_common.sh@969 -- # kill 72732 00:14:46.146 01:27:41 ftl.ftl_fio_basic -- common/autotest_common.sh@974 -- # wait 72732 00:14:52.705 01:27:48 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:14:52.705 01:27:48 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:14:52.705 01:27:48 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:14:52.705 01:27:48 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:52.705 01:27:48 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:14:52.705 01:27:48 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:14:52.705 01:27:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:14:52.705 01:27:48 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:14:52.705 01:27:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:52.705 01:27:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:14:52.705 01:27:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:52.705 01:27:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:14:52.705 01:27:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:14:52.705 01:27:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:14:52.705 01:27:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:52.705 01:27:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:14:52.705 01:27:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:14:52.705 01:27:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:52.705 01:27:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:52.705 01:27:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:14:52.705 01:27:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:52.705 01:27:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:14:52.705 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:14:52.705 fio-3.35 00:14:52.705 Starting 1 thread 00:14:59.291 00:14:59.291 test: (groupid=0, jobs=1): err= 0: pid=72917: Sat Sep 28 01:27:54 2024 00:14:59.291 read: IOPS=902, BW=59.9MiB/s (62.8MB/s)(255MiB/4248msec) 00:14:59.291 slat (usec): min=2, max=116, avg= 6.14, stdev= 3.43 00:14:59.291 clat (usec): min=268, max=5467, avg=501.97, stdev=215.74 00:14:59.291 lat (usec): min=273, max=5472, avg=508.11, stdev=216.43 00:14:59.291 clat percentiles (usec): 00:14:59.291 | 1.00th=[ 302], 5.00th=[ 310], 10.00th=[ 310], 20.00th=[ 326], 00:14:59.291 | 30.00th=[ 351], 40.00th=[ 429], 50.00th=[ 482], 60.00th=[ 523], 00:14:59.291 | 70.00th=[ 553], 80.00th=[ 586], 90.00th=[ 824], 95.00th=[ 898], 00:14:59.291 | 99.00th=[ 1057], 99.50th=[ 1172], 99.90th=[ 2671], 99.95th=[ 3916], 00:14:59.291 | 99.99th=[ 5473] 00:14:59.291 write: IOPS=908, BW=60.3MiB/s (63.3MB/s)(256MiB/4244msec); 0 zone resets 00:14:59.291 slat (nsec): min=14172, max=97078, avg=23661.19, stdev=6479.25 00:14:59.291 clat (usec): min=286, max=1748, avg=559.37, stdev=210.34 00:14:59.291 lat (usec): min=309, max=1766, avg=583.03, stdev=210.92 00:14:59.291 clat percentiles (usec): 00:14:59.291 | 1.00th=[ 318], 5.00th=[ 322], 10.00th=[ 326], 20.00th=[ 347], 00:14:59.291 | 30.00th=[ 379], 40.00th=[ 498], 50.00th=[ 562], 60.00th=[ 586], 00:14:59.291 | 70.00th=[ 627], 80.00th=[ 660], 90.00th=[ 914], 95.00th=[ 988], 00:14:59.291 | 99.00th=[ 1139], 99.50th=[ 1188], 99.90th=[ 1467], 99.95th=[ 1729], 00:14:59.291 | 99.99th=[ 1745] 00:14:59.291 bw ( KiB/s): min=47056, max=89896, per=99.28%, avg=61339.25, stdev=13788.06, samples=8 00:14:59.291 iops : min= 692, max= 1322, avg=902.00, stdev=202.79, samples=8 00:14:59.291 lat (usec) : 500=48.20%, 750=38.50%, 1000=10.43% 
00:14:59.291 lat (msec) : 2=2.81%, 4=0.05%, 10=0.01% 00:14:59.291 cpu : usr=99.11%, sys=0.05%, ctx=13, majf=0, minf=1169 00:14:59.291 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:59.291 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:59.291 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:59.291 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:59.291 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:59.291 00:14:59.291 Run status group 0 (all jobs): 00:14:59.291 READ: bw=59.9MiB/s (62.8MB/s), 59.9MiB/s-59.9MiB/s (62.8MB/s-62.8MB/s), io=255MiB (267MB), run=4248-4248msec 00:14:59.291 WRITE: bw=60.3MiB/s (63.3MB/s), 60.3MiB/s-60.3MiB/s (63.3MB/s-63.3MB/s), io=256MiB (269MB), run=4244-4244msec 00:14:59.863 ----------------------------------------------------- 00:14:59.863 Suppressions used: 00:14:59.863 count bytes template 00:14:59.863 1 5 /usr/src/fio/parse.c 00:14:59.863 1 8 libtcmalloc_minimal.so 00:14:59.863 1 904 libcrypto.so 00:14:59.863 ----------------------------------------------------- 00:14:59.863 00:14:59.863 01:27:55 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:14:59.863 01:27:55 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:59.863 01:27:55 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:14:59.863 01:27:55 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:14:59.863 01:27:55 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:14:59.863 01:27:55 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:59.863 01:27:55 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:14:59.863 01:27:55 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:14:59.863 01:27:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:14:59.863 01:27:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:14:59.863 01:27:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:59.863 01:27:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:14:59.863 01:27:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:59.863 01:27:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:14:59.863 01:27:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:14:59.863 01:27:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:14:59.863 01:27:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:59.863 01:27:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:14:59.863 01:27:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:14:59.863 01:27:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:59.863 01:27:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:59.863 01:27:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:14:59.863 01:27:55 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:59.863 01:27:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:14:59.863 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:14:59.863 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:14:59.863 fio-3.35 00:14:59.863 Starting 2 threads 00:15:26.407 00:15:26.407 first_half: (groupid=0, jobs=1): err= 0: pid=73026: Sat Sep 28 01:28:19 2024 00:15:26.407 read: IOPS=2893, BW=11.3MiB/s (11.8MB/s)(255MiB/22553msec) 00:15:26.407 slat (usec): min=3, max=477, avg= 3.89, stdev= 2.04 00:15:26.407 clat (usec): min=612, max=290543, avg=33849.36, stdev=17031.47 00:15:26.407 lat (usec): min=617, max=290546, avg=33853.26, stdev=17031.59 00:15:26.407 clat percentiles (msec): 00:15:26.407 | 1.00th=[ 7], 5.00th=[ 28], 10.00th=[ 29], 20.00th=[ 31], 00:15:26.407 | 30.00th=[ 31], 40.00th=[ 32], 50.00th=[ 32], 60.00th=[ 32], 00:15:26.407 | 70.00th=[ 33], 80.00th=[ 35], 90.00th=[ 37], 95.00th=[ 42], 00:15:26.407 | 99.00th=[ 127], 99.50th=[ 146], 99.90th=[ 201], 99.95th=[ 251], 00:15:26.407 | 99.99th=[ 284] 00:15:26.407 write: IOPS=3445, BW=13.5MiB/s (14.1MB/s)(256MiB/19019msec); 0 zone resets 00:15:26.407 slat (usec): min=3, max=757, avg= 5.65, stdev= 4.82 00:15:26.407 clat (usec): min=383, max=81257, avg=10253.01, stdev=16339.44 00:15:26.407 lat (usec): min=392, max=81263, avg=10258.65, stdev=16339.53 00:15:26.407 clat percentiles (usec): 00:15:26.407 | 1.00th=[ 652], 5.00th=[ 734], 10.00th=[ 816], 20.00th=[ 1205], 00:15:26.407 | 30.00th=[ 3032], 40.00th=[ 4621], 50.00th=[ 5276], 60.00th=[ 5669], 00:15:26.407 | 70.00th=[ 6259], 80.00th=[10159], 90.00th=[27395], 95.00th=[60556], 00:15:26.407 | 99.00th=[66323], 99.50th=[67634], 99.90th=[74974], 99.95th=[76022], 00:15:26.407 | 99.99th=[80217] 00:15:26.407 bw ( KiB/s): min= 216, max=40520, per=82.69%, avg=22795.13, stdev=12598.05, samples=23 00:15:26.407 iops : min= 54, max=10130, avg=5698.78, stdev=3149.51, samples=23 00:15:26.407 lat (usec) : 500=0.02%, 750=2.99%, 1000=5.27% 00:15:26.407 lat (msec) : 2=4.15%, 4=6.08%, 10=22.97%, 20=4.64%, 50=47.91% 00:15:26.407 lat (msec) : 100=5.02%, 250=0.91%, 500=0.03% 00:15:26.408 cpu : usr=99.21%, sys=0.14%, ctx=72, majf=0, minf=5630 00:15:26.408 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:15:26.408 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:26.408 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:26.408 issued rwts: total=65246,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:26.408 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:26.408 second_half: (groupid=0, jobs=1): err= 0: pid=73027: Sat Sep 28 01:28:19 2024 00:15:26.408 read: IOPS=2906, BW=11.4MiB/s (11.9MB/s)(255MiB/22428msec) 00:15:26.408 slat (usec): min=2, max=106, avg= 4.98, stdev= 1.09 00:15:26.408 clat (usec): min=620, max=294580, avg=34455.11, stdev=15863.29 00:15:26.408 lat (usec): min=625, max=294583, avg=34460.10, stdev=15863.39 00:15:26.408 clat percentiles (msec): 00:15:26.408 | 1.00th=[ 6], 5.00th=[ 29], 10.00th=[ 29], 20.00th=[ 31], 00:15:26.408 | 30.00th=[ 31], 40.00th=[ 32], 50.00th=[ 32], 60.00th=[ 32], 00:15:26.408 | 70.00th=[ 33], 80.00th=[ 36], 90.00th=[ 38], 95.00th=[ 49], 
00:15:26.408 | 99.00th=[ 117], 99.50th=[ 142], 99.90th=[ 169], 99.95th=[ 199], 00:15:26.408 | 99.99th=[ 288] 00:15:26.408 write: IOPS=3806, BW=14.9MiB/s (15.6MB/s)(256MiB/17215msec); 0 zone resets 00:15:26.408 slat (usec): min=3, max=1286, avg= 6.67, stdev= 5.79 00:15:26.408 clat (usec): min=371, max=81220, avg=9515.04, stdev=16352.86 00:15:26.408 lat (usec): min=383, max=81225, avg=9521.71, stdev=16352.88 00:15:26.408 clat percentiles (usec): 00:15:26.408 | 1.00th=[ 660], 5.00th=[ 742], 10.00th=[ 807], 20.00th=[ 996], 00:15:26.408 | 30.00th=[ 1319], 40.00th=[ 2933], 50.00th=[ 3785], 60.00th=[ 4752], 00:15:26.408 | 70.00th=[ 5997], 80.00th=[10290], 90.00th=[23987], 95.00th=[60556], 00:15:26.408 | 99.00th=[66323], 99.50th=[68682], 99.90th=[73925], 99.95th=[76022], 00:15:26.408 | 99.99th=[80217] 00:15:26.408 bw ( KiB/s): min= 912, max=40640, per=95.09%, avg=26214.40, stdev=10687.83, samples=20 00:15:26.408 iops : min= 228, max=10160, avg=6553.60, stdev=2671.96, samples=20 00:15:26.408 lat (usec) : 500=0.03%, 750=2.85%, 1000=7.24% 00:15:26.408 lat (msec) : 2=6.98%, 4=9.22%, 10=13.74%, 20=5.62%, 50=47.95% 00:15:26.408 lat (msec) : 100=5.59%, 250=0.75%, 500=0.01% 00:15:26.408 cpu : usr=99.34%, sys=0.12%, ctx=35, majf=0, minf=5508 00:15:26.408 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:15:26.408 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:26.408 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:26.408 issued rwts: total=65182,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:26.408 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:26.408 00:15:26.408 Run status group 0 (all jobs): 00:15:26.408 READ: bw=22.6MiB/s (23.7MB/s), 11.3MiB/s-11.4MiB/s (11.8MB/s-11.9MB/s), io=509MiB (534MB), run=22428-22553msec 00:15:26.408 WRITE: bw=26.9MiB/s (28.2MB/s), 13.5MiB/s-14.9MiB/s (14.1MB/s-15.6MB/s), io=512MiB (537MB), run=17215-19019msec 00:15:26.408 ----------------------------------------------------- 00:15:26.408 Suppressions used: 00:15:26.408 count bytes template 00:15:26.408 2 10 /usr/src/fio/parse.c 00:15:26.408 3 288 /usr/src/fio/iolog.c 00:15:26.408 1 8 libtcmalloc_minimal.so 00:15:26.408 1 904 libcrypto.so 00:15:26.408 ----------------------------------------------------- 00:15:26.408 00:15:26.408 01:28:21 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:15:26.408 01:28:21 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:26.408 01:28:21 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:26.408 01:28:21 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:15:26.408 01:28:21 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:15:26.408 01:28:21 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:26.408 01:28:21 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:26.408 01:28:21 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:15:26.408 01:28:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:15:26.408 01:28:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:15:26.408 01:28:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:26.408 
01:28:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:15:26.408 01:28:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:26.408 01:28:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:15:26.408 01:28:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:15:26.408 01:28:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:15:26.408 01:28:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:15:26.408 01:28:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:15:26.408 01:28:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:26.408 01:28:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:26.408 01:28:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:26.408 01:28:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:15:26.408 01:28:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:26.408 01:28:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:15:26.408 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:15:26.408 fio-3.35 00:15:26.408 Starting 1 thread 00:15:41.307 00:15:41.307 test: (groupid=0, jobs=1): err= 0: pid=73317: Sat Sep 28 01:28:36 2024 00:15:41.307 read: IOPS=8167, BW=31.9MiB/s (33.5MB/s)(255MiB/7983msec) 00:15:41.307 slat (nsec): min=3023, max=20024, avg=3854.97, stdev=974.12 00:15:41.307 clat (usec): min=490, max=31965, avg=15664.15, stdev=1570.88 00:15:41.307 lat (usec): min=494, max=31969, avg=15668.01, stdev=1571.07 00:15:41.307 clat percentiles (usec): 00:15:41.307 | 1.00th=[14353], 5.00th=[14615], 10.00th=[14746], 20.00th=[14877], 00:15:41.307 | 30.00th=[14877], 40.00th=[15008], 50.00th=[15139], 60.00th=[15401], 00:15:41.307 | 70.00th=[15926], 80.00th=[16319], 90.00th=[16581], 95.00th=[17433], 00:15:41.307 | 99.00th=[23200], 99.50th=[24249], 99.90th=[25560], 99.95th=[27919], 00:15:41.307 | 99.99th=[31327] 00:15:41.307 write: IOPS=11.7k, BW=45.8MiB/s (48.0MB/s)(256MiB/5594msec); 0 zone resets 00:15:41.307 slat (usec): min=4, max=357, avg= 6.36, stdev= 3.46 00:15:41.307 clat (usec): min=521, max=62146, avg=10867.78, stdev=13432.39 00:15:41.307 lat (usec): min=527, max=62151, avg=10874.14, stdev=13432.42 00:15:41.307 clat percentiles (usec): 00:15:41.307 | 1.00th=[ 668], 5.00th=[ 848], 10.00th=[ 971], 20.00th=[ 1270], 00:15:41.307 | 30.00th=[ 1713], 40.00th=[ 2704], 50.00th=[ 4752], 60.00th=[ 7832], 00:15:41.307 | 70.00th=[12649], 80.00th=[16057], 90.00th=[33817], 95.00th=[43254], 00:15:41.307 | 99.00th=[53216], 99.50th=[55313], 99.90th=[59507], 99.95th=[60031], 00:15:41.307 | 99.99th=[61604] 00:15:41.307 bw ( KiB/s): min= 9852, max=67984, per=93.23%, avg=43689.00, stdev=15751.84, samples=12 00:15:41.307 iops : min= 2463, max=16996, avg=10922.25, stdev=3937.96, samples=12 00:15:41.307 lat (usec) : 500=0.01%, 750=1.29%, 1000=4.39% 00:15:41.307 lat (msec) : 2=12.04%, 4=4.36%, 10=9.96%, 20=58.29%, 50=8.62% 00:15:41.307 lat (msec) : 100=1.05% 00:15:41.307 cpu : usr=99.11%, sys=0.19%, ctx=28, 
majf=0, minf=5565 00:15:41.307 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:15:41.307 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:41.307 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:41.307 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:41.307 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:41.307 00:15:41.307 Run status group 0 (all jobs): 00:15:41.307 READ: bw=31.9MiB/s (33.5MB/s), 31.9MiB/s-31.9MiB/s (33.5MB/s-33.5MB/s), io=255MiB (267MB), run=7983-7983msec 00:15:41.307 WRITE: bw=45.8MiB/s (48.0MB/s), 45.8MiB/s-45.8MiB/s (48.0MB/s-48.0MB/s), io=256MiB (268MB), run=5594-5594msec 00:15:41.879 ----------------------------------------------------- 00:15:41.879 Suppressions used: 00:15:41.879 count bytes template 00:15:41.879 1 5 /usr/src/fio/parse.c 00:15:41.879 2 192 /usr/src/fio/iolog.c 00:15:41.879 1 8 libtcmalloc_minimal.so 00:15:41.879 1 904 libcrypto.so 00:15:41.879 ----------------------------------------------------- 00:15:41.879 00:15:42.140 01:28:37 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:15:42.140 01:28:37 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:42.140 01:28:37 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:42.140 01:28:37 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:42.140 01:28:37 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:15:42.140 01:28:37 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:15:42.140 Remove shared memory files 00:15:42.140 01:28:37 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:15:42.140 01:28:37 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:15:42.140 01:28:37 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid57462 /dev/shm/spdk_tgt_trace.pid71644 00:15:42.140 01:28:37 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:15:42.140 01:28:37 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:15:42.140 ************************************ 00:15:42.140 END TEST ftl_fio_basic 00:15:42.140 ************************************ 00:15:42.140 00:15:42.140 real 1m3.376s 00:15:42.140 user 2m13.609s 00:15:42.140 sys 0m2.881s 00:15:42.140 01:28:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:42.140 01:28:37 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:42.140 01:28:37 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:15:42.140 01:28:37 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:15:42.140 01:28:37 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:42.140 01:28:37 ftl -- common/autotest_common.sh@10 -- # set +x 00:15:42.140 ************************************ 00:15:42.140 START TEST ftl_bdevperf 00:15:42.140 ************************************ 00:15:42.140 01:28:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:15:42.140 * Looking for test storage... 
00:15:42.140 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:15:42.140 01:28:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:42.140 01:28:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1681 -- # lcov --version 00:15:42.140 01:28:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:42.403 01:28:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:42.403 01:28:38 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:42.403 01:28:38 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:42.403 01:28:38 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:42.403 01:28:38 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:15:42.403 01:28:38 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:15:42.403 01:28:38 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:15:42.403 01:28:38 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:15:42.403 01:28:38 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:15:42.403 01:28:38 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:15:42.403 01:28:38 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:15:42.403 01:28:38 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:42.403 01:28:38 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:15:42.403 01:28:38 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:15:42.403 01:28:38 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:42.403 01:28:38 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:42.403 01:28:38 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:15:42.403 01:28:38 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:15:42.403 01:28:38 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:42.403 01:28:38 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:15:42.403 01:28:38 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:15:42.403 01:28:38 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:15:42.403 01:28:38 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:15:42.403 01:28:38 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:42.403 01:28:38 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:15:42.403 01:28:38 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:15:42.403 01:28:38 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:42.403 01:28:38 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:42.403 01:28:38 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:15:42.403 01:28:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:42.403 01:28:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:42.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:42.403 --rc genhtml_branch_coverage=1 00:15:42.403 --rc genhtml_function_coverage=1 00:15:42.403 --rc genhtml_legend=1 00:15:42.403 --rc geninfo_all_blocks=1 00:15:42.403 --rc geninfo_unexecuted_blocks=1 00:15:42.403 00:15:42.403 ' 00:15:42.403 01:28:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:42.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:42.403 --rc genhtml_branch_coverage=1 00:15:42.403 
--rc genhtml_function_coverage=1 00:15:42.403 --rc genhtml_legend=1 00:15:42.403 --rc geninfo_all_blocks=1 00:15:42.403 --rc geninfo_unexecuted_blocks=1 00:15:42.403 00:15:42.403 ' 00:15:42.403 01:28:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:42.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:42.403 --rc genhtml_branch_coverage=1 00:15:42.403 --rc genhtml_function_coverage=1 00:15:42.403 --rc genhtml_legend=1 00:15:42.403 --rc geninfo_all_blocks=1 00:15:42.403 --rc geninfo_unexecuted_blocks=1 00:15:42.403 00:15:42.403 ' 00:15:42.403 01:28:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:42.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:42.403 --rc genhtml_branch_coverage=1 00:15:42.403 --rc genhtml_function_coverage=1 00:15:42.403 --rc genhtml_legend=1 00:15:42.403 --rc geninfo_all_blocks=1 00:15:42.403 --rc geninfo_unexecuted_blocks=1 00:15:42.403 00:15:42.403 ' 00:15:42.403 01:28:38 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:15:42.403 01:28:38 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:15:42.403 01:28:38 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:15:42.403 01:28:38 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:15:42.403 01:28:38 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:15:42.403 01:28:38 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:42.403 01:28:38 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:42.403 01:28:38 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:15:42.403 01:28:38 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:15:42.403 01:28:38 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:42.403 01:28:38 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:42.403 01:28:38 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:15:42.403 01:28:38 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:15:42.403 01:28:38 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:42.403 01:28:38 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:42.403 01:28:38 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:15:42.403 01:28:38 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:15:42.403 01:28:38 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:42.403 01:28:38 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:42.403 01:28:38 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:15:42.404 01:28:38 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:15:42.404 01:28:38 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:42.404 01:28:38 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:42.404 01:28:38 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:42.404 01:28:38 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:42.404 01:28:38 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:15:42.404 01:28:38 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:15:42.404 01:28:38 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:42.404 01:28:38 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:42.404 01:28:38 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:15:42.404 01:28:38 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:15:42.404 01:28:38 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:15:42.404 01:28:38 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:42.404 01:28:38 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:15:42.404 01:28:38 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=73561 00:15:42.404 01:28:38 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:15:42.404 01:28:38 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:15:42.404 01:28:38 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 73561 00:15:42.404 01:28:38 ftl.ftl_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 73561 ']' 00:15:42.404 01:28:38 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:42.404 01:28:38 ftl.ftl_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:42.404 01:28:38 ftl.ftl_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:42.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:42.404 01:28:38 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:42.404 01:28:38 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:42.404 [2024-09-28 01:28:38.205060] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
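The bdevperf invocation above uses -z to start the application idle until an RPC tells it to begin, and -T ftl0 to restrict the run to the FTL bdev; waitforlisten then blocks until the RPC socket is up. A condensed sketch of that control flow, assuming the repo paths from this log (waitforlisten is the autotest helper that polls /var/tmp/spdk.sock for the given pid):

spdk=/home/vagrant/spdk_repo/spdk

# Start bdevperf idle; nothing runs until perform_tests arrives over RPC.
"$spdk/build/examples/bdevperf" -z -T ftl0 &
bdevperf_pid=$!
waitforlisten "$bdevperf_pid"

# ... construct the ftl0 bdev over RPC (see the sketch after the layout dump) ...

# Each measurement below is one perform_tests call, e.g. the first one:
"$spdk/examples/bdev/bdevperf/bdevperf.py" perform_tests -q 1 -w randwrite -t 4 -o 69632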
00:15:42.404 [2024-09-28 01:28:38.205489] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73561 ] 00:15:42.663 [2024-09-28 01:28:38.359983] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:42.663 [2024-09-28 01:28:38.567857] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:43.230 01:28:39 ftl.ftl_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:43.230 01:28:39 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:15:43.230 01:28:39 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:15:43.230 01:28:39 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:15:43.230 01:28:39 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:15:43.230 01:28:39 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:15:43.230 01:28:39 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:15:43.230 01:28:39 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:15:43.488 01:28:39 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:15:43.488 01:28:39 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:15:43.488 01:28:39 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:15:43.488 01:28:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:15:43.488 01:28:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:15:43.488 01:28:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:15:43.488 01:28:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:15:43.488 01:28:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:15:43.747 01:28:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:15:43.747 { 00:15:43.747 "name": "nvme0n1", 00:15:43.747 "aliases": [ 00:15:43.747 "9581458b-6866-4deb-a3b2-3bc88f14fa12" 00:15:43.747 ], 00:15:43.747 "product_name": "NVMe disk", 00:15:43.747 "block_size": 4096, 00:15:43.747 "num_blocks": 1310720, 00:15:43.747 "uuid": "9581458b-6866-4deb-a3b2-3bc88f14fa12", 00:15:43.747 "numa_id": -1, 00:15:43.747 "assigned_rate_limits": { 00:15:43.747 "rw_ios_per_sec": 0, 00:15:43.747 "rw_mbytes_per_sec": 0, 00:15:43.747 "r_mbytes_per_sec": 0, 00:15:43.747 "w_mbytes_per_sec": 0 00:15:43.747 }, 00:15:43.747 "claimed": true, 00:15:43.747 "claim_type": "read_many_write_one", 00:15:43.747 "zoned": false, 00:15:43.747 "supported_io_types": { 00:15:43.747 "read": true, 00:15:43.747 "write": true, 00:15:43.747 "unmap": true, 00:15:43.747 "flush": true, 00:15:43.747 "reset": true, 00:15:43.747 "nvme_admin": true, 00:15:43.747 "nvme_io": true, 00:15:43.747 "nvme_io_md": false, 00:15:43.747 "write_zeroes": true, 00:15:43.747 "zcopy": false, 00:15:43.747 "get_zone_info": false, 00:15:43.747 "zone_management": false, 00:15:43.747 "zone_append": false, 00:15:43.747 "compare": true, 00:15:43.747 "compare_and_write": false, 00:15:43.747 "abort": true, 00:15:43.747 "seek_hole": false, 00:15:43.747 "seek_data": false, 00:15:43.747 "copy": true, 00:15:43.747 "nvme_iov_md": false 00:15:43.747 }, 00:15:43.747 "driver_specific": { 00:15:43.747 
"nvme": [ 00:15:43.747 { 00:15:43.747 "pci_address": "0000:00:11.0", 00:15:43.747 "trid": { 00:15:43.747 "trtype": "PCIe", 00:15:43.747 "traddr": "0000:00:11.0" 00:15:43.747 }, 00:15:43.747 "ctrlr_data": { 00:15:43.747 "cntlid": 0, 00:15:43.747 "vendor_id": "0x1b36", 00:15:43.747 "model_number": "QEMU NVMe Ctrl", 00:15:43.747 "serial_number": "12341", 00:15:43.747 "firmware_revision": "8.0.0", 00:15:43.747 "subnqn": "nqn.2019-08.org.qemu:12341", 00:15:43.747 "oacs": { 00:15:43.747 "security": 0, 00:15:43.747 "format": 1, 00:15:43.747 "firmware": 0, 00:15:43.747 "ns_manage": 1 00:15:43.747 }, 00:15:43.747 "multi_ctrlr": false, 00:15:43.747 "ana_reporting": false 00:15:43.747 }, 00:15:43.747 "vs": { 00:15:43.747 "nvme_version": "1.4" 00:15:43.747 }, 00:15:43.747 "ns_data": { 00:15:43.747 "id": 1, 00:15:43.747 "can_share": false 00:15:43.747 } 00:15:43.747 } 00:15:43.747 ], 00:15:43.747 "mp_policy": "active_passive" 00:15:43.748 } 00:15:43.748 } 00:15:43.748 ]' 00:15:43.748 01:28:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:15:43.748 01:28:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:15:43.748 01:28:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:15:43.748 01:28:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=1310720 00:15:43.748 01:28:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:15:43.748 01:28:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 5120 00:15:43.748 01:28:39 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:15:43.748 01:28:39 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:15:43.748 01:28:39 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:15:43.748 01:28:39 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:15:43.748 01:28:39 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:15:44.006 01:28:39 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=1cd41022-1bdb-4932-8573-b1c2004b2978 00:15:44.006 01:28:39 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:15:44.006 01:28:39 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 1cd41022-1bdb-4932-8573-b1c2004b2978 00:15:44.267 01:28:40 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:15:44.529 01:28:40 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=9df6c3d8-ace2-4dd2-b909-d4cc4903abf8 00:15:44.529 01:28:40 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 9df6c3d8-ace2-4dd2-b909-d4cc4903abf8 00:15:44.791 01:28:40 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=17ff475c-0170-41f0-abfc-7f18be536c67 00:15:44.791 01:28:40 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 17ff475c-0170-41f0-abfc-7f18be536c67 00:15:44.791 01:28:40 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:15:44.791 01:28:40 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:15:44.791 01:28:40 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=17ff475c-0170-41f0-abfc-7f18be536c67 00:15:44.791 01:28:40 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:15:44.791 01:28:40 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 17ff475c-0170-41f0-abfc-7f18be536c67 00:15:44.791 01:28:40 
ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=17ff475c-0170-41f0-abfc-7f18be536c67 00:15:44.791 01:28:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:15:44.791 01:28:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:15:44.791 01:28:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:15:44.791 01:28:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 17ff475c-0170-41f0-abfc-7f18be536c67 00:15:44.791 01:28:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:15:44.791 { 00:15:44.791 "name": "17ff475c-0170-41f0-abfc-7f18be536c67", 00:15:44.791 "aliases": [ 00:15:44.791 "lvs/nvme0n1p0" 00:15:44.791 ], 00:15:44.791 "product_name": "Logical Volume", 00:15:44.791 "block_size": 4096, 00:15:44.791 "num_blocks": 26476544, 00:15:44.791 "uuid": "17ff475c-0170-41f0-abfc-7f18be536c67", 00:15:44.791 "assigned_rate_limits": { 00:15:44.791 "rw_ios_per_sec": 0, 00:15:44.791 "rw_mbytes_per_sec": 0, 00:15:44.791 "r_mbytes_per_sec": 0, 00:15:44.791 "w_mbytes_per_sec": 0 00:15:44.791 }, 00:15:44.791 "claimed": false, 00:15:44.791 "zoned": false, 00:15:44.791 "supported_io_types": { 00:15:44.791 "read": true, 00:15:44.791 "write": true, 00:15:44.791 "unmap": true, 00:15:44.791 "flush": false, 00:15:44.791 "reset": true, 00:15:44.791 "nvme_admin": false, 00:15:44.791 "nvme_io": false, 00:15:44.791 "nvme_io_md": false, 00:15:44.791 "write_zeroes": true, 00:15:44.791 "zcopy": false, 00:15:44.791 "get_zone_info": false, 00:15:44.791 "zone_management": false, 00:15:44.791 "zone_append": false, 00:15:44.791 "compare": false, 00:15:44.791 "compare_and_write": false, 00:15:44.791 "abort": false, 00:15:44.791 "seek_hole": true, 00:15:44.791 "seek_data": true, 00:15:44.791 "copy": false, 00:15:44.791 "nvme_iov_md": false 00:15:44.791 }, 00:15:44.791 "driver_specific": { 00:15:44.791 "lvol": { 00:15:44.791 "lvol_store_uuid": "9df6c3d8-ace2-4dd2-b909-d4cc4903abf8", 00:15:44.791 "base_bdev": "nvme0n1", 00:15:44.791 "thin_provision": true, 00:15:44.791 "num_allocated_clusters": 0, 00:15:44.791 "snapshot": false, 00:15:44.791 "clone": false, 00:15:44.791 "esnap_clone": false 00:15:44.791 } 00:15:44.791 } 00:15:44.791 } 00:15:44.791 ]' 00:15:44.791 01:28:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:15:45.053 01:28:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:15:45.053 01:28:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:15:45.053 01:28:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:15:45.053 01:28:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:15:45.053 01:28:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:15:45.053 01:28:40 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:15:45.053 01:28:40 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:15:45.053 01:28:40 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:15:45.314 01:28:41 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:15:45.314 01:28:41 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:15:45.314 01:28:41 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 17ff475c-0170-41f0-abfc-7f18be536c67 00:15:45.314 01:28:41 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1378 -- # local bdev_name=17ff475c-0170-41f0-abfc-7f18be536c67 00:15:45.314 01:28:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:15:45.314 01:28:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:15:45.314 01:28:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:15:45.314 01:28:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 17ff475c-0170-41f0-abfc-7f18be536c67 00:15:45.576 01:28:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:15:45.576 { 00:15:45.576 "name": "17ff475c-0170-41f0-abfc-7f18be536c67", 00:15:45.576 "aliases": [ 00:15:45.576 "lvs/nvme0n1p0" 00:15:45.576 ], 00:15:45.576 "product_name": "Logical Volume", 00:15:45.576 "block_size": 4096, 00:15:45.576 "num_blocks": 26476544, 00:15:45.576 "uuid": "17ff475c-0170-41f0-abfc-7f18be536c67", 00:15:45.576 "assigned_rate_limits": { 00:15:45.576 "rw_ios_per_sec": 0, 00:15:45.576 "rw_mbytes_per_sec": 0, 00:15:45.576 "r_mbytes_per_sec": 0, 00:15:45.576 "w_mbytes_per_sec": 0 00:15:45.576 }, 00:15:45.576 "claimed": false, 00:15:45.576 "zoned": false, 00:15:45.576 "supported_io_types": { 00:15:45.576 "read": true, 00:15:45.576 "write": true, 00:15:45.576 "unmap": true, 00:15:45.576 "flush": false, 00:15:45.576 "reset": true, 00:15:45.576 "nvme_admin": false, 00:15:45.576 "nvme_io": false, 00:15:45.576 "nvme_io_md": false, 00:15:45.576 "write_zeroes": true, 00:15:45.576 "zcopy": false, 00:15:45.576 "get_zone_info": false, 00:15:45.576 "zone_management": false, 00:15:45.576 "zone_append": false, 00:15:45.576 "compare": false, 00:15:45.576 "compare_and_write": false, 00:15:45.576 "abort": false, 00:15:45.576 "seek_hole": true, 00:15:45.576 "seek_data": true, 00:15:45.576 "copy": false, 00:15:45.576 "nvme_iov_md": false 00:15:45.576 }, 00:15:45.576 "driver_specific": { 00:15:45.576 "lvol": { 00:15:45.576 "lvol_store_uuid": "9df6c3d8-ace2-4dd2-b909-d4cc4903abf8", 00:15:45.576 "base_bdev": "nvme0n1", 00:15:45.576 "thin_provision": true, 00:15:45.576 "num_allocated_clusters": 0, 00:15:45.576 "snapshot": false, 00:15:45.576 "clone": false, 00:15:45.576 "esnap_clone": false 00:15:45.576 } 00:15:45.576 } 00:15:45.576 } 00:15:45.576 ]' 00:15:45.576 01:28:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:15:45.576 01:28:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:15:45.576 01:28:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:15:45.576 01:28:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:15:45.576 01:28:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:15:45.576 01:28:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:15:45.576 01:28:41 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:15:45.576 01:28:41 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:15:45.837 01:28:41 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:15:45.837 01:28:41 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size 17ff475c-0170-41f0-abfc-7f18be536c67 00:15:45.837 01:28:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=17ff475c-0170-41f0-abfc-7f18be536c67 00:15:45.837 01:28:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:15:45.837 01:28:41 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1380 -- # local bs 00:15:45.837 01:28:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:15:45.837 01:28:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 17ff475c-0170-41f0-abfc-7f18be536c67 00:15:45.837 01:28:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:15:45.837 { 00:15:45.837 "name": "17ff475c-0170-41f0-abfc-7f18be536c67", 00:15:45.837 "aliases": [ 00:15:45.837 "lvs/nvme0n1p0" 00:15:45.837 ], 00:15:45.837 "product_name": "Logical Volume", 00:15:45.837 "block_size": 4096, 00:15:45.837 "num_blocks": 26476544, 00:15:45.837 "uuid": "17ff475c-0170-41f0-abfc-7f18be536c67", 00:15:45.837 "assigned_rate_limits": { 00:15:45.837 "rw_ios_per_sec": 0, 00:15:45.837 "rw_mbytes_per_sec": 0, 00:15:45.837 "r_mbytes_per_sec": 0, 00:15:45.837 "w_mbytes_per_sec": 0 00:15:45.837 }, 00:15:45.837 "claimed": false, 00:15:45.837 "zoned": false, 00:15:45.837 "supported_io_types": { 00:15:45.837 "read": true, 00:15:45.837 "write": true, 00:15:45.837 "unmap": true, 00:15:45.837 "flush": false, 00:15:45.837 "reset": true, 00:15:45.837 "nvme_admin": false, 00:15:45.837 "nvme_io": false, 00:15:45.837 "nvme_io_md": false, 00:15:45.837 "write_zeroes": true, 00:15:45.837 "zcopy": false, 00:15:45.837 "get_zone_info": false, 00:15:45.837 "zone_management": false, 00:15:45.837 "zone_append": false, 00:15:45.837 "compare": false, 00:15:45.837 "compare_and_write": false, 00:15:45.837 "abort": false, 00:15:45.837 "seek_hole": true, 00:15:45.837 "seek_data": true, 00:15:45.837 "copy": false, 00:15:45.837 "nvme_iov_md": false 00:15:45.837 }, 00:15:45.837 "driver_specific": { 00:15:45.838 "lvol": { 00:15:45.838 "lvol_store_uuid": "9df6c3d8-ace2-4dd2-b909-d4cc4903abf8", 00:15:45.838 "base_bdev": "nvme0n1", 00:15:45.838 "thin_provision": true, 00:15:45.838 "num_allocated_clusters": 0, 00:15:45.838 "snapshot": false, 00:15:45.838 "clone": false, 00:15:45.838 "esnap_clone": false 00:15:45.838 } 00:15:45.838 } 00:15:45.838 } 00:15:45.838 ]' 00:15:45.838 01:28:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:15:46.099 01:28:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:15:46.099 01:28:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:15:46.099 01:28:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:15:46.099 01:28:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:15:46.099 01:28:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:15:46.099 01:28:41 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:15:46.099 01:28:41 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 17ff475c-0170-41f0-abfc-7f18be536c67 -c nvc0n1p0 --l2p_dram_limit 20 00:15:46.099 [2024-09-28 01:28:42.017313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:46.099 [2024-09-28 01:28:42.017457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:15:46.099 [2024-09-28 01:28:42.017519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:15:46.099 [2024-09-28 01:28:42.017546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:46.099 [2024-09-28 01:28:42.017623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:46.099 [2024-09-28 01:28:42.017651] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:46.099 [2024-09-28 01:28:42.017672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:15:46.099 [2024-09-28 01:28:42.017693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:46.099 [2024-09-28 01:28:42.017722] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:15:46.099 [2024-09-28 01:28:42.018631] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:15:46.099 [2024-09-28 01:28:42.018711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:46.099 [2024-09-28 01:28:42.018724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:46.099 [2024-09-28 01:28:42.018734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.994 ms 00:15:46.099 [2024-09-28 01:28:42.018745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:46.099 [2024-09-28 01:28:42.018775] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 7385b2a4-6554-4f65-8a62-87f6b40bb4cf 00:15:46.099 [2024-09-28 01:28:42.019860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:46.099 [2024-09-28 01:28:42.019894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:15:46.099 [2024-09-28 01:28:42.019908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:15:46.099 [2024-09-28 01:28:42.019916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:46.099 [2024-09-28 01:28:42.025268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:46.099 [2024-09-28 01:28:42.025298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:46.099 [2024-09-28 01:28:42.025309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.286 ms 00:15:46.099 [2024-09-28 01:28:42.025316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:46.099 [2024-09-28 01:28:42.025398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:46.099 [2024-09-28 01:28:42.025407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:46.099 [2024-09-28 01:28:42.025420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:15:46.099 [2024-09-28 01:28:42.025427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:46.099 [2024-09-28 01:28:42.025466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:46.100 [2024-09-28 01:28:42.025475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:15:46.100 [2024-09-28 01:28:42.025486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:15:46.100 [2024-09-28 01:28:42.025494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:46.100 [2024-09-28 01:28:42.025515] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:15:46.360 [2024-09-28 01:28:42.029236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:46.360 [2024-09-28 01:28:42.029267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:46.360 [2024-09-28 01:28:42.029276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.728 ms 00:15:46.360 [2024-09-28 01:28:42.029286] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:46.360 [2024-09-28 01:28:42.029325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:46.360 [2024-09-28 01:28:42.029335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:15:46.360 [2024-09-28 01:28:42.029343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:15:46.360 [2024-09-28 01:28:42.029352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:46.360 [2024-09-28 01:28:42.029370] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:15:46.360 [2024-09-28 01:28:42.029508] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:15:46.360 [2024-09-28 01:28:42.029520] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:15:46.360 [2024-09-28 01:28:42.029533] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:15:46.360 [2024-09-28 01:28:42.029543] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:15:46.360 [2024-09-28 01:28:42.029553] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:15:46.360 [2024-09-28 01:28:42.029560] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:15:46.360 [2024-09-28 01:28:42.029571] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:15:46.360 [2024-09-28 01:28:42.029578] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:15:46.360 [2024-09-28 01:28:42.029587] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:15:46.360 [2024-09-28 01:28:42.029594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:46.360 [2024-09-28 01:28:42.029603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:15:46.360 [2024-09-28 01:28:42.029611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.225 ms 00:15:46.360 [2024-09-28 01:28:42.029619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:46.360 [2024-09-28 01:28:42.029712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:46.360 [2024-09-28 01:28:42.029724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:15:46.360 [2024-09-28 01:28:42.029731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:15:46.360 [2024-09-28 01:28:42.029744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:46.360 [2024-09-28 01:28:42.029832] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:15:46.360 [2024-09-28 01:28:42.029843] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:15:46.360 [2024-09-28 01:28:42.029851] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:46.360 [2024-09-28 01:28:42.029860] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:46.360 [2024-09-28 01:28:42.029868] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:15:46.360 [2024-09-28 01:28:42.029876] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:15:46.360 [2024-09-28 01:28:42.029883] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:15:46.360 
[2024-09-28 01:28:42.029891] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:15:46.360 [2024-09-28 01:28:42.029898] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:15:46.360 [2024-09-28 01:28:42.029906] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:46.360 [2024-09-28 01:28:42.029913] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:15:46.360 [2024-09-28 01:28:42.029928] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:15:46.360 [2024-09-28 01:28:42.029935] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:46.360 [2024-09-28 01:28:42.029943] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:15:46.360 [2024-09-28 01:28:42.029950] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:15:46.360 [2024-09-28 01:28:42.029959] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:46.360 [2024-09-28 01:28:42.029966] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:15:46.360 [2024-09-28 01:28:42.029978] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:15:46.360 [2024-09-28 01:28:42.029984] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:46.360 [2024-09-28 01:28:42.029994] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:15:46.360 [2024-09-28 01:28:42.030001] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:15:46.360 [2024-09-28 01:28:42.030009] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:46.360 [2024-09-28 01:28:42.030016] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:15:46.361 [2024-09-28 01:28:42.030024] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:15:46.361 [2024-09-28 01:28:42.030031] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:46.361 [2024-09-28 01:28:42.030039] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:15:46.361 [2024-09-28 01:28:42.030046] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:15:46.361 [2024-09-28 01:28:42.030054] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:46.361 [2024-09-28 01:28:42.030061] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:15:46.361 [2024-09-28 01:28:42.030069] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:15:46.361 [2024-09-28 01:28:42.030076] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:46.361 [2024-09-28 01:28:42.030086] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:15:46.361 [2024-09-28 01:28:42.030092] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:15:46.361 [2024-09-28 01:28:42.030100] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:46.361 [2024-09-28 01:28:42.030107] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:15:46.361 [2024-09-28 01:28:42.030115] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:15:46.361 [2024-09-28 01:28:42.030121] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:46.361 [2024-09-28 01:28:42.030129] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:15:46.361 [2024-09-28 01:28:42.030136] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:15:46.361 [2024-09-28 01:28:42.030144] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:46.361 [2024-09-28 01:28:42.030150] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:15:46.361 [2024-09-28 01:28:42.030158] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:15:46.361 [2024-09-28 01:28:42.030164] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:46.361 [2024-09-28 01:28:42.030172] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:15:46.361 [2024-09-28 01:28:42.030180] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:15:46.361 [2024-09-28 01:28:42.030188] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:46.361 [2024-09-28 01:28:42.030207] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:46.361 [2024-09-28 01:28:42.030221] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:15:46.361 [2024-09-28 01:28:42.030228] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:15:46.361 [2024-09-28 01:28:42.030239] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:15:46.361 [2024-09-28 01:28:42.030246] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:15:46.361 [2024-09-28 01:28:42.030254] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:15:46.361 [2024-09-28 01:28:42.030261] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:15:46.361 [2024-09-28 01:28:42.030272] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:15:46.361 [2024-09-28 01:28:42.030284] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:46.361 [2024-09-28 01:28:42.030294] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:15:46.361 [2024-09-28 01:28:42.030301] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:15:46.361 [2024-09-28 01:28:42.030310] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:15:46.361 [2024-09-28 01:28:42.030317] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:15:46.361 [2024-09-28 01:28:42.030326] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:15:46.361 [2024-09-28 01:28:42.030333] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:15:46.361 [2024-09-28 01:28:42.030342] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:15:46.361 [2024-09-28 01:28:42.030349] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:15:46.361 [2024-09-28 01:28:42.030359] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:15:46.361 [2024-09-28 01:28:42.030366] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:15:46.361 [2024-09-28 01:28:42.030374] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:15:46.361 [2024-09-28 01:28:42.030381] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:15:46.361 [2024-09-28 01:28:42.030390] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:15:46.361 [2024-09-28 01:28:42.030398] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:15:46.361 [2024-09-28 01:28:42.030407] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:15:46.361 [2024-09-28 01:28:42.030415] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:46.361 [2024-09-28 01:28:42.030425] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:15:46.361 [2024-09-28 01:28:42.030432] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:15:46.361 [2024-09-28 01:28:42.030441] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:15:46.361 [2024-09-28 01:28:42.030448] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:15:46.361 [2024-09-28 01:28:42.030457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:46.361 [2024-09-28 01:28:42.030465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:15:46.361 [2024-09-28 01:28:42.030473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.690 ms 00:15:46.361 [2024-09-28 01:28:42.030481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:46.361 [2024-09-28 01:28:42.030514] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
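Before the layout dump above, the FTL instance was assembled from two controllers: the 0000:00:11.0 namespace carved into a thin 103424 MiB lvol as the base device, a 5171 MiB split of the 0000:00:10.0 namespace as the NV cache, and the L2P capped at 20 MiB of DRAM. A sketch of that RPC sequence, condensed from the calls recorded earlier in this test:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Base device: attach the namespace and carve a thin 103424 MiB lvol from it.
$rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
lvs=$($rpc bdev_lvol_create_lvstore nvme0n1 lvs)              # prints the lvstore UUID
lvol=$($rpc bdev_lvol_create nvme0n1p0 103424 -t -u "$lvs")   # prints the lvol bdev UUID

# NV cache: attach the second controller and split off 5171 MiB.
$rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
$rpc bdev_split_create nvc0n1 -s 5171 1

# FTL bdev on top; first startup scrubs the cache, hence the 240 s RPC timeout.
$rpc -t 240 bdev_ftl_create -b ftl0 -d "$lvol" -c nvc0n1p0 --l2p_dram_limit 20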
00:15:46.361 [2024-09-28 01:28:42.030523] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:15:48.906 [2024-09-28 01:28:44.743691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.906 [2024-09-28 01:28:44.743747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:15:48.906 [2024-09-28 01:28:44.743764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2713.165 ms 00:15:48.906 [2024-09-28 01:28:44.743773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.906 [2024-09-28 01:28:44.779813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.906 [2024-09-28 01:28:44.779858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:48.906 [2024-09-28 01:28:44.779874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.839 ms 00:15:48.906 [2024-09-28 01:28:44.779883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.906 [2024-09-28 01:28:44.780011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.906 [2024-09-28 01:28:44.780022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:15:48.906 [2024-09-28 01:28:44.780037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:15:48.906 [2024-09-28 01:28:44.780044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.906 [2024-09-28 01:28:44.810730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.906 [2024-09-28 01:28:44.810767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:48.906 [2024-09-28 01:28:44.810784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.650 ms 00:15:48.906 [2024-09-28 01:28:44.810793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.906 [2024-09-28 01:28:44.810824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.906 [2024-09-28 01:28:44.810833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:48.906 [2024-09-28 01:28:44.810844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:15:48.906 [2024-09-28 01:28:44.810852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.906 [2024-09-28 01:28:44.811249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.906 [2024-09-28 01:28:44.811267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:48.906 [2024-09-28 01:28:44.811279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.333 ms 00:15:48.906 [2024-09-28 01:28:44.811287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.906 [2024-09-28 01:28:44.811410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.906 [2024-09-28 01:28:44.811421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:48.906 [2024-09-28 01:28:44.811433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:15:48.906 [2024-09-28 01:28:44.811440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.906 [2024-09-28 01:28:44.824221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.906 [2024-09-28 01:28:44.824250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:48.906 [2024-09-28 
01:28:44.824262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.764 ms 00:15:48.906 [2024-09-28 01:28:44.824269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.906 [2024-09-28 01:28:44.835609] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:15:49.168 [2024-09-28 01:28:44.840587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:49.168 [2024-09-28 01:28:44.840622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:15:49.168 [2024-09-28 01:28:44.840633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.257 ms 00:15:49.168 [2024-09-28 01:28:44.840643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.168 [2024-09-28 01:28:44.910606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:49.168 [2024-09-28 01:28:44.910838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:15:49.168 [2024-09-28 01:28:44.910858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 69.941 ms 00:15:49.168 [2024-09-28 01:28:44.910869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.168 [2024-09-28 01:28:44.911040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:49.168 [2024-09-28 01:28:44.911055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:15:49.168 [2024-09-28 01:28:44.911063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.137 ms 00:15:49.168 [2024-09-28 01:28:44.911072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.168 [2024-09-28 01:28:44.934609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:49.168 [2024-09-28 01:28:44.934644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:15:49.168 [2024-09-28 01:28:44.934655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.481 ms 00:15:49.168 [2024-09-28 01:28:44.934667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.168 [2024-09-28 01:28:44.956992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:49.168 [2024-09-28 01:28:44.957026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:15:49.168 [2024-09-28 01:28:44.957038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.294 ms 00:15:49.168 [2024-09-28 01:28:44.957046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.168 [2024-09-28 01:28:44.957635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:49.168 [2024-09-28 01:28:44.957656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:15:49.168 [2024-09-28 01:28:44.957667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.560 ms 00:15:49.168 [2024-09-28 01:28:44.957676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.168 [2024-09-28 01:28:45.031599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:49.168 [2024-09-28 01:28:45.031646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:15:49.168 [2024-09-28 01:28:45.031661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 73.894 ms 00:15:49.168 [2024-09-28 01:28:45.031671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.168 [2024-09-28 
01:28:45.056298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:49.168 [2024-09-28 01:28:45.056433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:15:49.168 [2024-09-28 01:28:45.056452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.559 ms 00:15:49.168 [2024-09-28 01:28:45.056462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.168 [2024-09-28 01:28:45.079885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:49.168 [2024-09-28 01:28:45.080011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:15:49.168 [2024-09-28 01:28:45.080027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.392 ms 00:15:49.168 [2024-09-28 01:28:45.080037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.430 [2024-09-28 01:28:45.103558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:49.430 [2024-09-28 01:28:45.103593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:15:49.430 [2024-09-28 01:28:45.103604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.492 ms 00:15:49.430 [2024-09-28 01:28:45.103614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.430 [2024-09-28 01:28:45.103650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:49.430 [2024-09-28 01:28:45.103664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:15:49.430 [2024-09-28 01:28:45.103672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:15:49.430 [2024-09-28 01:28:45.103682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.430 [2024-09-28 01:28:45.103755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:49.430 [2024-09-28 01:28:45.103768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:15:49.430 [2024-09-28 01:28:45.103776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:15:49.430 [2024-09-28 01:28:45.103786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:49.430 [2024-09-28 01:28:45.104672] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3086.960 ms, result 0 00:15:49.430 { 00:15:49.430 "name": "ftl0", 00:15:49.430 "uuid": "7385b2a4-6554-4f65-8a62-87f6b40bb4cf" 00:15:49.430 } 00:15:49.430 01:28:45 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:15:49.430 01:28:45 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:15:49.430 01:28:45 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:15:49.430 01:28:45 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:15:49.692 [2024-09-28 01:28:45.416956] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:15:49.692 I/O size of 69632 is greater than zero copy threshold (65536). 00:15:49.692 Zero copy mechanism will not be used. 00:15:49.692 Running I/O for 4 seconds... 
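A note on reading the summary that follows: bdevperf reports IOPS and MiB/s for the same run, and the two agree by construction, since throughput is IOPS times the 69632-byte I/O size (which is also why the zero-copy notice fires: 69632 > 65536). The arithmetic behind the headline 845.91 IOPS / 56.17 MiB/s pair below:

# 845.91 IOPS x 69632 B/IO ~= 58.9 MB/s ~= 56.17 MiB/s, matching the table.
awk 'BEGIN { printf "%.2f MiB/s\n", 845.91 * 69632 / (1024 * 1024) }'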
00:15:53.912 849.00 IOPS, 56.38 MiB/s 986.00 IOPS, 65.48 MiB/s 882.67 IOPS, 58.61 MiB/s 846.25 IOPS, 56.20 MiB/s 00:15:53.912 Latency(us) 00:15:53.912 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:53.912 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:15:53.912 ftl0 : 4.00 845.91 56.17 0.00 0.00 1241.55 297.75 4637.93 00:15:53.912 =================================================================================================================== 00:15:53.912 Total : 845.91 56.17 0.00 0.00 1241.55 297.75 4637.93 00:15:53.912 [2024-09-28 01:28:49.427825] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:15:53.912 { 00:15:53.912 "results": [ 00:15:53.912 { 00:15:53.912 "job": "ftl0", 00:15:53.912 "core_mask": "0x1", 00:15:53.912 "workload": "randwrite", 00:15:53.912 "status": "finished", 00:15:53.912 "queue_depth": 1, 00:15:53.912 "io_size": 69632, 00:15:53.912 "runtime": 4.002779, 00:15:53.912 "iops": 845.9123024278882, 00:15:53.912 "mibps": 56.173863833101954, 00:15:53.912 "io_failed": 0, 00:15:53.912 "io_timeout": 0, 00:15:53.912 "avg_latency_us": 1241.5475414603118, 00:15:53.912 "min_latency_us": 297.7476923076923, 00:15:53.912 "max_latency_us": 4637.932307692307 00:15:53.912 } 00:15:53.912 ], 00:15:53.912 "core_count": 1 00:15:53.912 } 00:15:53.912 01:28:49 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:15:53.912 [2024-09-28 01:28:49.527262] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:15:53.912 Running I/O for 4 seconds... 00:15:57.681 5416.00 IOPS, 21.16 MiB/s 5093.50 IOPS, 19.90 MiB/s 5179.00 IOPS, 20.23 MiB/s 5206.50 IOPS, 20.34 MiB/s 00:15:57.681 Latency(us) 00:15:57.681 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:57.681 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:15:57.681 ftl0 : 4.03 5195.37 20.29 0.00 0.00 24527.09 478.92 50613.96 00:15:57.681 =================================================================================================================== 00:15:57.681 Total : 5195.37 20.29 0.00 0.00 24527.09 0.00 50613.96 00:15:57.681 [2024-09-28 01:28:53.571658] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:15:57.681 { 00:15:57.681 "results": [ 00:15:57.681 { 00:15:57.681 "job": "ftl0", 00:15:57.681 "core_mask": "0x1", 00:15:57.681 "workload": "randwrite", 00:15:57.681 "status": "finished", 00:15:57.681 "queue_depth": 128, 00:15:57.681 "io_size": 4096, 00:15:57.681 "runtime": 4.033204, 00:15:57.681 "iops": 5195.373207008622, 00:15:57.681 "mibps": 20.29442658987743, 00:15:57.681 "io_failed": 0, 00:15:57.681 "io_timeout": 0, 00:15:57.681 "avg_latency_us": 24527.09255526758, 00:15:57.681 "min_latency_us": 478.91692307692307, 00:15:57.681 "max_latency_us": 50613.95692307693 00:15:57.681 } 00:15:57.681 ], 00:15:57.681 "core_count": 1 00:15:57.681 } 00:15:57.681 01:28:53 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:15:57.943 [2024-09-28 01:28:53.677788] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:15:57.943 Running I/O for 4 seconds... 
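Each perform_tests call also dumps its summary as a JSON results block, as embedded in this log above and below. When post-processing a log like this one, the headline figures can be pulled out with jq; a small sketch, assuming one results object has been saved to results.json (a hypothetical file name):

# Print "job: IOPS, average latency" for every job in the results array.
jq -r '.results[] | "\(.job): \(.iops) IOPS, \(.avg_latency_us) us avg latency"' results.json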
00:16:02.156 4901.00 IOPS, 19.14 MiB/s 4929.00 IOPS, 19.25 MiB/s 4980.00 IOPS, 19.45 MiB/s 4932.75 IOPS, 19.27 MiB/s
00:16:02.156 Latency(us)
00:16:02.156 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:02.156 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:16:02.156 Verification LBA range: start 0x0 length 0x1400000
00:16:02.156 ftl0 : 4.02 4945.15 19.32 0.00 0.00 25806.01 333.98 41338.09
00:16:02.156 ===================================================================================================================
00:16:02.156 Total : 4945.15 19.32 0.00 0.00 25806.01 0.00 41338.09
00:16:02.156 [2024-09-28 01:28:57.708065] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
00:16:02.156 {
00:16:02.156 "results": [
00:16:02.156 {
00:16:02.156 "job": "ftl0",
00:16:02.156 "core_mask": "0x1",
00:16:02.156 "workload": "verify",
00:16:02.156 "status": "finished",
00:16:02.156 "verify_range": {
00:16:02.156 "start": 0,
00:16:02.156 "length": 20971520
00:16:02.156 },
00:16:02.156 "queue_depth": 128,
00:16:02.156 "io_size": 4096,
00:16:02.156 "runtime": 4.015448,
00:16:02.156 "iops": 4945.151823657036,
00:16:02.156 "mibps": 19.3169993111603,
00:16:02.156 "io_failed": 0,
00:16:02.156 "io_timeout": 0,
00:16:02.156 "avg_latency_us": 25806.00912772477,
00:16:02.156 "min_latency_us": 333.98153846153843,
00:16:02.156 "max_latency_us": 41338.092307692306
00:16:02.156 }
00:16:02.156 ],
00:16:02.156 "core_count": 1
00:16:02.156 }
00:16:02.156 01:28:57 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0
00:16:02.156 [2024-09-28 01:28:57.909942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:02.156 [2024-09-28 01:28:57.910084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:16:02.156 [2024-09-28 01:28:57.910148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:16:02.156 [2024-09-28 01:28:57.910175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:02.156 [2024-09-28 01:28:57.910217] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:16:02.156 [2024-09-28 01:28:57.912791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:02.156 [2024-09-28 01:28:57.912819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:16:02.156 [2024-09-28 01:28:57.912831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.548 ms
00:16:02.156 [2024-09-28 01:28:57.912839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:02.156 [2024-09-28 01:28:57.915267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:02.156 [2024-09-28 01:28:57.915296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:16:02.156 [2024-09-28 01:28:57.915308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.402 ms
00:16:02.156 [2024-09-28 01:28:57.915316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:02.418 [2024-09-28 01:28:58.119427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:02.418 [2024-09-28 01:28:58.119468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:16:02.418 [2024-09-28 01:28:58.119485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 204.091 ms
00:16:02.418 [2024-09-28
01:28:58.119493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.418 [2024-09-28 01:28:58.125679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:02.418 [2024-09-28 01:28:58.125789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:16:02.418 [2024-09-28 01:28:58.125807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.153 ms 00:16:02.418 [2024-09-28 01:28:58.125814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.418 [2024-09-28 01:28:58.149630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:02.418 [2024-09-28 01:28:58.149662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:02.418 [2024-09-28 01:28:58.149674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.760 ms 00:16:02.418 [2024-09-28 01:28:58.149682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.418 [2024-09-28 01:28:58.164738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:02.418 [2024-09-28 01:28:58.164863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:02.418 [2024-09-28 01:28:58.164884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.022 ms 00:16:02.418 [2024-09-28 01:28:58.164892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.418 [2024-09-28 01:28:58.165021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:02.418 [2024-09-28 01:28:58.165032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:02.418 [2024-09-28 01:28:58.165044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:16:02.418 [2024-09-28 01:28:58.165054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.418 [2024-09-28 01:28:58.188388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:02.418 [2024-09-28 01:28:58.188416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:16:02.418 [2024-09-28 01:28:58.188428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.317 ms 00:16:02.418 [2024-09-28 01:28:58.188435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.418 [2024-09-28 01:28:58.211606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:02.418 [2024-09-28 01:28:58.211708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:16:02.418 [2024-09-28 01:28:58.211726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.138 ms 00:16:02.418 [2024-09-28 01:28:58.211733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.418 [2024-09-28 01:28:58.233934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:02.418 [2024-09-28 01:28:58.233964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:02.418 [2024-09-28 01:28:58.233976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.171 ms 00:16:02.418 [2024-09-28 01:28:58.233983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.418 [2024-09-28 01:28:58.256306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:02.418 [2024-09-28 01:28:58.256334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:02.418 [2024-09-28 01:28:58.256348] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.259 ms 00:16:02.418 [2024-09-28 01:28:58.256355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.418 [2024-09-28 01:28:58.256387] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:02.418 [2024-09-28 01:28:58.256402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:02.418 [2024-09-28 01:28:58.256414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:02.418 [2024-09-28 01:28:58.256422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:02.418 [2024-09-28 01:28:58.256431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:02.418 [2024-09-28 01:28:58.256439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:02.418 [2024-09-28 01:28:58.256448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:02.418 [2024-09-28 01:28:58.256455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:02.418 [2024-09-28 01:28:58.256465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:02.418 [2024-09-28 01:28:58.256473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:02.418 [2024-09-28 01:28:58.256482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:02.418 [2024-09-28 01:28:58.256490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:02.418 [2024-09-28 01:28:58.256499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:02.418 [2024-09-28 01:28:58.256506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:02.418 [2024-09-28 01:28:58.256517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:02.418 [2024-09-28 01:28:58.256524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:02.418 [2024-09-28 01:28:58.256533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:02.418 [2024-09-28 01:28:58.256540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:02.418 [2024-09-28 01:28:58.256551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:02.418 [2024-09-28 01:28:58.256558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:02.418 [2024-09-28 01:28:58.256567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:02.418 [2024-09-28 01:28:58.256574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:02.418 [2024-09-28 01:28:58.256583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:02.418 [2024-09-28 01:28:58.256590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 
state: free 00:16:02.418 [2024-09-28 01:28:58.256599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:02.418 [2024-09-28 01:28:58.256606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:02.418 [2024-09-28 01:28:58.256615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:02.418 [2024-09-28 01:28:58.256623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:02.418 [2024-09-28 01:28:58.256632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:02.418 [2024-09-28 01:28:58.256640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:02.418 [2024-09-28 01:28:58.256651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:02.418 [2024-09-28 01:28:58.256662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:02.418 [2024-09-28 01:28:58.256671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:02.418 [2024-09-28 01:28:58.256678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:02.419 [2024-09-28 01:28:58.256688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:02.419 [2024-09-28 01:28:58.256695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:02.419 [2024-09-28 01:28:58.256704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:02.419 [2024-09-28 01:28:58.256711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:02.419 [2024-09-28 01:28:58.256720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:02.419 [2024-09-28 01:28:58.256728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:02.419 [2024-09-28 01:28:58.256736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:02.419 [2024-09-28 01:28:58.256743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:02.419 [2024-09-28 01:28:58.256752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:02.419 [2024-09-28 01:28:58.256759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:02.419 [2024-09-28 01:28:58.256769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:02.419 [2024-09-28 01:28:58.256797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:02.419 [2024-09-28 01:28:58.256809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:02.419 [2024-09-28 01:28:58.256816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:02.419 [2024-09-28 01:28:58.256825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 
0 / 261120 wr_cnt: 0 state: free 00:16:02.419 [2024-09-28 01:28:58.256832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:02.419 [2024-09-28 01:28:58.256841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:02.419 [2024-09-28 01:28:58.256848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:02.419 [2024-09-28 01:28:58.256857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:02.419 [2024-09-28 01:28:58.256865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:02.419 [2024-09-28 01:28:58.256874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:02.419 [2024-09-28 01:28:58.256881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:02.419 [2024-09-28 01:28:58.256891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:02.419 [2024-09-28 01:28:58.256898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:02.419 [2024-09-28 01:28:58.256907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:02.419 [2024-09-28 01:28:58.256914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:02.419 [2024-09-28 01:28:58.256923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:02.419 [2024-09-28 01:28:58.256931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:02.419 [2024-09-28 01:28:58.256942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:02.419 [2024-09-28 01:28:58.256952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:02.419 [2024-09-28 01:28:58.256961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:02.419 [2024-09-28 01:28:58.256969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:02.419 [2024-09-28 01:28:58.256978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:02.419 [2024-09-28 01:28:58.256985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:02.419 [2024-09-28 01:28:58.256994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:02.419 [2024-09-28 01:28:58.257001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:02.419 [2024-09-28 01:28:58.257018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:02.419 [2024-09-28 01:28:58.257025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:02.419 [2024-09-28 01:28:58.257034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:02.419 [2024-09-28 01:28:58.257041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:02.419 [2024-09-28 01:28:58.257050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:02.419 [2024-09-28 01:28:58.257058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:02.419 [2024-09-28 01:28:58.257067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:02.419 [2024-09-28 01:28:58.257074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:02.419 [2024-09-28 01:28:58.257085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:02.419 [2024-09-28 01:28:58.257092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:02.419 [2024-09-28 01:28:58.257101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:02.419 [2024-09-28 01:28:58.257108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:02.419 [2024-09-28 01:28:58.257117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:02.419 [2024-09-28 01:28:58.257125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:02.419 [2024-09-28 01:28:58.257133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:02.419 [2024-09-28 01:28:58.257141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:02.419 [2024-09-28 01:28:58.257151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:02.419 [2024-09-28 01:28:58.257158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:02.419 [2024-09-28 01:28:58.257167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:02.419 [2024-09-28 01:28:58.257174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:02.419 [2024-09-28 01:28:58.257183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:02.419 [2024-09-28 01:28:58.257210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:02.419 [2024-09-28 01:28:58.257221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:02.419 [2024-09-28 01:28:58.257228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:02.419 [2024-09-28 01:28:58.257239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:02.419 [2024-09-28 01:28:58.257249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:02.419 [2024-09-28 01:28:58.257260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:02.419 [2024-09-28 01:28:58.257267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:02.419 [2024-09-28 01:28:58.257276] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:02.419 [2024-09-28 01:28:58.257284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:02.419 [2024-09-28 01:28:58.257293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:02.419 [2024-09-28 01:28:58.257309] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:02.419 [2024-09-28 01:28:58.257318] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7385b2a4-6554-4f65-8a62-87f6b40bb4cf 00:16:02.419 [2024-09-28 01:28:58.257345] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:02.419 [2024-09-28 01:28:58.257354] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:02.419 [2024-09-28 01:28:58.257361] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:02.419 [2024-09-28 01:28:58.257370] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:02.419 [2024-09-28 01:28:58.257377] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:02.419 [2024-09-28 01:28:58.257386] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:02.419 [2024-09-28 01:28:58.257393] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:02.419 [2024-09-28 01:28:58.257402] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:02.419 [2024-09-28 01:28:58.257409] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:02.419 [2024-09-28 01:28:58.257418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:02.419 [2024-09-28 01:28:58.257435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:02.419 [2024-09-28 01:28:58.257447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.033 ms 00:16:02.419 [2024-09-28 01:28:58.257454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.419 [2024-09-28 01:28:58.269854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:02.419 [2024-09-28 01:28:58.269880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:02.419 [2024-09-28 01:28:58.269892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.355 ms 00:16:02.419 [2024-09-28 01:28:58.269899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.419 [2024-09-28 01:28:58.270260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:02.419 [2024-09-28 01:28:58.270273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:02.419 [2024-09-28 01:28:58.270284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.332 ms 00:16:02.419 [2024-09-28 01:28:58.270291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.419 [2024-09-28 01:28:58.300749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:02.419 [2024-09-28 01:28:58.300871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:02.419 [2024-09-28 01:28:58.300892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:02.419 [2024-09-28 01:28:58.300899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.419 [2024-09-28 01:28:58.300950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:16:02.419 [2024-09-28 01:28:58.300960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:02.420 [2024-09-28 01:28:58.300969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:02.420 [2024-09-28 01:28:58.300976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.420 [2024-09-28 01:28:58.301035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:02.420 [2024-09-28 01:28:58.301044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:02.420 [2024-09-28 01:28:58.301054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:02.420 [2024-09-28 01:28:58.301061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.420 [2024-09-28 01:28:58.301077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:02.420 [2024-09-28 01:28:58.301084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:02.420 [2024-09-28 01:28:58.301095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:02.420 [2024-09-28 01:28:58.301102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.681 [2024-09-28 01:28:58.378037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:02.681 [2024-09-28 01:28:58.378075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:02.681 [2024-09-28 01:28:58.378090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:02.681 [2024-09-28 01:28:58.378097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.681 [2024-09-28 01:28:58.440615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:02.681 [2024-09-28 01:28:58.440651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:02.681 [2024-09-28 01:28:58.440665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:02.681 [2024-09-28 01:28:58.440673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.681 [2024-09-28 01:28:58.440751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:02.681 [2024-09-28 01:28:58.440761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:02.681 [2024-09-28 01:28:58.440771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:02.681 [2024-09-28 01:28:58.440778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.681 [2024-09-28 01:28:58.440856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:02.681 [2024-09-28 01:28:58.440866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:02.681 [2024-09-28 01:28:58.440875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:02.681 [2024-09-28 01:28:58.440885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.681 [2024-09-28 01:28:58.440972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:02.681 [2024-09-28 01:28:58.440981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:02.681 [2024-09-28 01:28:58.440994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:02.681 [2024-09-28 01:28:58.441001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.681 [2024-09-28 
01:28:58.441029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:02.681 [2024-09-28 01:28:58.441038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:02.681 [2024-09-28 01:28:58.441048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:02.681 [2024-09-28 01:28:58.441055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.681 [2024-09-28 01:28:58.441090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:02.681 [2024-09-28 01:28:58.441098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:02.681 [2024-09-28 01:28:58.441107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:02.682 [2024-09-28 01:28:58.441115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.682 [2024-09-28 01:28:58.441155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:02.682 [2024-09-28 01:28:58.441164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:02.682 [2024-09-28 01:28:58.441173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:02.682 [2024-09-28 01:28:58.441182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:02.682 [2024-09-28 01:28:58.441321] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 531.340 ms, result 0 00:16:02.682 true 00:16:02.682 01:28:58 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 73561 00:16:02.682 01:28:58 ftl.ftl_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 73561 ']' 00:16:02.682 01:28:58 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # kill -0 73561 00:16:02.682 01:28:58 ftl.ftl_bdevperf -- common/autotest_common.sh@955 -- # uname 00:16:02.682 01:28:58 ftl.ftl_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:02.682 01:28:58 ftl.ftl_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73561 00:16:02.682 killing process with pid 73561 00:16:02.682 Received shutdown signal, test time was about 4.000000 seconds 00:16:02.682 00:16:02.682 Latency(us) 00:16:02.682 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:02.682 =================================================================================================================== 00:16:02.682 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:02.682 01:28:58 ftl.ftl_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:02.682 01:28:58 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:02.682 01:28:58 ftl.ftl_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73561' 00:16:02.682 01:28:58 ftl.ftl_bdevperf -- common/autotest_common.sh@969 -- # kill 73561 00:16:02.682 01:28:58 ftl.ftl_bdevperf -- common/autotest_common.sh@974 -- # wait 73561 00:16:07.978 01:29:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:16:07.978 01:29:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:16:07.978 Remove shared memory files 00:16:07.978 01:29:03 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:16:07.978 01:29:03 ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:16:07.978 01:29:03 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:16:07.978 01:29:03 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:16:07.978 
01:29:03 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:16:07.978 01:29:03 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:16:07.978 ************************************ 00:16:07.978 END TEST ftl_bdevperf 00:16:07.978 ************************************ 00:16:07.978 00:16:07.978 real 0m25.849s 00:16:07.978 user 0m28.490s 00:16:07.978 sys 0m0.960s 00:16:07.978 01:29:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:07.978 01:29:03 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:07.978 01:29:03 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:16:07.978 01:29:03 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:16:07.978 01:29:03 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:07.979 01:29:03 ftl -- common/autotest_common.sh@10 -- # set +x 00:16:07.979 ************************************ 00:16:07.979 START TEST ftl_trim 00:16:07.979 ************************************ 00:16:07.979 01:29:03 ftl.ftl_trim -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:16:08.279 * Looking for test storage... 00:16:08.279 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:16:08.279 01:29:03 ftl.ftl_trim -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:08.279 01:29:03 ftl.ftl_trim -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:08.279 01:29:03 ftl.ftl_trim -- common/autotest_common.sh@1681 -- # lcov --version 00:16:08.279 01:29:04 ftl.ftl_trim -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:08.279 01:29:04 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:08.279 01:29:04 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:08.279 01:29:04 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:08.279 01:29:04 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:16:08.279 01:29:04 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:16:08.279 01:29:04 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:16:08.279 01:29:04 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:16:08.279 01:29:04 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:16:08.279 01:29:04 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:16:08.279 01:29:04 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:16:08.279 01:29:04 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:08.279 01:29:04 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:16:08.279 01:29:04 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:16:08.279 01:29:04 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:08.279 01:29:04 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:08.279 01:29:04 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:16:08.279 01:29:04 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:16:08.279 01:29:04 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:08.279 01:29:04 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:16:08.279 01:29:04 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:16:08.279 01:29:04 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:16:08.279 01:29:04 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:16:08.279 01:29:04 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:08.279 01:29:04 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:16:08.279 01:29:04 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:16:08.279 01:29:04 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:08.279 01:29:04 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:08.279 01:29:04 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:16:08.279 01:29:04 ftl.ftl_trim -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:08.279 01:29:04 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:08.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:08.279 --rc genhtml_branch_coverage=1 00:16:08.279 --rc genhtml_function_coverage=1 00:16:08.279 --rc genhtml_legend=1 00:16:08.279 --rc geninfo_all_blocks=1 00:16:08.279 --rc geninfo_unexecuted_blocks=1 00:16:08.279 00:16:08.279 ' 00:16:08.279 01:29:04 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:08.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:08.279 --rc genhtml_branch_coverage=1 00:16:08.279 --rc genhtml_function_coverage=1 00:16:08.279 --rc genhtml_legend=1 00:16:08.279 --rc geninfo_all_blocks=1 00:16:08.279 --rc geninfo_unexecuted_blocks=1 00:16:08.279 00:16:08.279 ' 00:16:08.279 01:29:04 ftl.ftl_trim -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:08.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:08.279 --rc genhtml_branch_coverage=1 00:16:08.279 --rc genhtml_function_coverage=1 00:16:08.279 --rc genhtml_legend=1 00:16:08.279 --rc geninfo_all_blocks=1 00:16:08.279 --rc geninfo_unexecuted_blocks=1 00:16:08.279 00:16:08.279 ' 00:16:08.279 01:29:04 ftl.ftl_trim -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:08.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:08.279 --rc genhtml_branch_coverage=1 00:16:08.279 --rc genhtml_function_coverage=1 00:16:08.279 --rc genhtml_legend=1 00:16:08.279 --rc geninfo_all_blocks=1 00:16:08.279 --rc geninfo_unexecuted_blocks=1 00:16:08.279 00:16:08.279 ' 00:16:08.279 01:29:04 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:16:08.279 01:29:04 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:16:08.279 01:29:04 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:16:08.279 01:29:04 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:16:08.279 01:29:04 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
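Worth keeping in mind for the trim run being set up here: trim.sh expresses its test sizes in blocks, and the nvme0n1 bdev enumerated further below reports a 4096-byte block size, so the data_size_in_blocks=65536 and unmap_size_in_blocks=1024 assignments made just after this point translate to bytes as sketched here (illustrative awk arithmetic, not part of the harness):

  # trim.sh sizes below, at the 4096-byte block size reported for nvme0n1 later on
  awk 'BEGIN { bs = 4096; print 65536 * bs / 2^20 " MiB data region"; print 1024 * bs / 2^20 " MiB per unmap" }'
  # -> 256 MiB data region (data_size_in_blocks=65536), 4 MiB per unmap (unmap_size_in_blocks=1024)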
00:16:08.279 01:29:04 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:08.279 01:29:04 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:08.279 01:29:04 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:16:08.279 01:29:04 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:16:08.279 01:29:04 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:08.279 01:29:04 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:08.279 01:29:04 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:16:08.279 01:29:04 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:16:08.279 01:29:04 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:08.279 01:29:04 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:08.279 01:29:04 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:16:08.279 01:29:04 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:16:08.279 01:29:04 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:08.279 01:29:04 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:08.279 01:29:04 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:16:08.279 01:29:04 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:16:08.279 01:29:04 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:08.279 01:29:04 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:08.279 01:29:04 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:08.279 01:29:04 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:08.279 01:29:04 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:16:08.279 01:29:04 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:16:08.279 01:29:04 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:08.279 01:29:04 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:08.279 01:29:04 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:08.279 01:29:04 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:16:08.279 01:29:04 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:16:08.279 01:29:04 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:16:08.279 01:29:04 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:16:08.279 01:29:04 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:16:08.279 01:29:04 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:16:08.279 01:29:04 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:16:08.279 01:29:04 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:16:08.279 01:29:04 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:08.279 01:29:04 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:08.279 01:29:04 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:16:08.280 01:29:04 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=73914 00:16:08.280 01:29:04 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 73914 00:16:08.280 01:29:04 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 73914 ']' 00:16:08.280 01:29:04 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:08.280 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:08.280 01:29:04 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:08.280 01:29:04 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:08.280 01:29:04 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:16:08.280 01:29:04 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:08.280 01:29:04 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:16:08.280 [2024-09-28 01:29:04.113106] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:16:08.280 [2024-09-28 01:29:04.113372] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73914 ] 00:16:08.567 [2024-09-28 01:29:04.266313] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:08.567 [2024-09-28 01:29:04.445115] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:16:08.567 [2024-09-28 01:29:04.445728] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:16:08.567 [2024-09-28 01:29:04.445797] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:09.156 01:29:05 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:09.156 01:29:05 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:16:09.156 01:29:05 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:16:09.156 01:29:05 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:16:09.156 01:29:05 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:16:09.156 01:29:05 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:16:09.156 01:29:05 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:16:09.156 01:29:05 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:16:09.417 01:29:05 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:16:09.417 01:29:05 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:16:09.417 01:29:05 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:16:09.417 01:29:05 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:16:09.417 01:29:05 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:09.417 01:29:05 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:16:09.417 01:29:05 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:16:09.417 01:29:05 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:16:09.679 01:29:05 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:09.679 { 00:16:09.679 "name": "nvme0n1", 00:16:09.679 "aliases": [ 
00:16:09.679 "dc696c96-4ade-4ef7-9b48-966b6dec54c5" 00:16:09.679 ], 00:16:09.679 "product_name": "NVMe disk", 00:16:09.679 "block_size": 4096, 00:16:09.679 "num_blocks": 1310720, 00:16:09.679 "uuid": "dc696c96-4ade-4ef7-9b48-966b6dec54c5", 00:16:09.679 "numa_id": -1, 00:16:09.679 "assigned_rate_limits": { 00:16:09.679 "rw_ios_per_sec": 0, 00:16:09.679 "rw_mbytes_per_sec": 0, 00:16:09.679 "r_mbytes_per_sec": 0, 00:16:09.679 "w_mbytes_per_sec": 0 00:16:09.679 }, 00:16:09.679 "claimed": true, 00:16:09.679 "claim_type": "read_many_write_one", 00:16:09.679 "zoned": false, 00:16:09.679 "supported_io_types": { 00:16:09.679 "read": true, 00:16:09.679 "write": true, 00:16:09.679 "unmap": true, 00:16:09.679 "flush": true, 00:16:09.679 "reset": true, 00:16:09.679 "nvme_admin": true, 00:16:09.679 "nvme_io": true, 00:16:09.679 "nvme_io_md": false, 00:16:09.679 "write_zeroes": true, 00:16:09.679 "zcopy": false, 00:16:09.679 "get_zone_info": false, 00:16:09.679 "zone_management": false, 00:16:09.679 "zone_append": false, 00:16:09.679 "compare": true, 00:16:09.679 "compare_and_write": false, 00:16:09.679 "abort": true, 00:16:09.679 "seek_hole": false, 00:16:09.679 "seek_data": false, 00:16:09.679 "copy": true, 00:16:09.679 "nvme_iov_md": false 00:16:09.679 }, 00:16:09.679 "driver_specific": { 00:16:09.679 "nvme": [ 00:16:09.679 { 00:16:09.679 "pci_address": "0000:00:11.0", 00:16:09.679 "trid": { 00:16:09.679 "trtype": "PCIe", 00:16:09.679 "traddr": "0000:00:11.0" 00:16:09.679 }, 00:16:09.679 "ctrlr_data": { 00:16:09.679 "cntlid": 0, 00:16:09.679 "vendor_id": "0x1b36", 00:16:09.679 "model_number": "QEMU NVMe Ctrl", 00:16:09.679 "serial_number": "12341", 00:16:09.679 "firmware_revision": "8.0.0", 00:16:09.679 "subnqn": "nqn.2019-08.org.qemu:12341", 00:16:09.679 "oacs": { 00:16:09.679 "security": 0, 00:16:09.680 "format": 1, 00:16:09.680 "firmware": 0, 00:16:09.680 "ns_manage": 1 00:16:09.680 }, 00:16:09.680 "multi_ctrlr": false, 00:16:09.680 "ana_reporting": false 00:16:09.680 }, 00:16:09.680 "vs": { 00:16:09.680 "nvme_version": "1.4" 00:16:09.680 }, 00:16:09.680 "ns_data": { 00:16:09.680 "id": 1, 00:16:09.680 "can_share": false 00:16:09.680 } 00:16:09.680 } 00:16:09.680 ], 00:16:09.680 "mp_policy": "active_passive" 00:16:09.680 } 00:16:09.680 } 00:16:09.680 ]' 00:16:09.680 01:29:05 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:09.680 01:29:05 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:16:09.680 01:29:05 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:09.680 01:29:05 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=1310720 00:16:09.680 01:29:05 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:16:09.680 01:29:05 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 5120 00:16:09.680 01:29:05 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:16:09.680 01:29:05 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:16:09.680 01:29:05 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:16:09.680 01:29:05 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:16:09.680 01:29:05 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:16:09.941 01:29:05 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=9df6c3d8-ace2-4dd2-b909-d4cc4903abf8 00:16:09.941 01:29:05 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:16:09.941 01:29:05 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u 9df6c3d8-ace2-4dd2-b909-d4cc4903abf8 00:16:10.202 01:29:06 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:16:10.463 01:29:06 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=3534499c-7045-4bf4-ac75-be921f7584a0 00:16:10.463 01:29:06 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 3534499c-7045-4bf4-ac75-be921f7584a0 00:16:10.724 01:29:06 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=81759933-019f-485c-a035-884ded5ccca8 00:16:10.724 01:29:06 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 81759933-019f-485c-a035-884ded5ccca8 00:16:10.724 01:29:06 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:16:10.724 01:29:06 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:16:10.724 01:29:06 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=81759933-019f-485c-a035-884ded5ccca8 00:16:10.724 01:29:06 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:16:10.724 01:29:06 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size 81759933-019f-485c-a035-884ded5ccca8 00:16:10.724 01:29:06 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=81759933-019f-485c-a035-884ded5ccca8 00:16:10.724 01:29:06 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:10.724 01:29:06 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:16:10.724 01:29:06 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:16:10.724 01:29:06 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 81759933-019f-485c-a035-884ded5ccca8 00:16:10.983 01:29:06 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:10.983 { 00:16:10.983 "name": "81759933-019f-485c-a035-884ded5ccca8", 00:16:10.983 "aliases": [ 00:16:10.983 "lvs/nvme0n1p0" 00:16:10.983 ], 00:16:10.983 "product_name": "Logical Volume", 00:16:10.983 "block_size": 4096, 00:16:10.983 "num_blocks": 26476544, 00:16:10.983 "uuid": "81759933-019f-485c-a035-884ded5ccca8", 00:16:10.983 "assigned_rate_limits": { 00:16:10.983 "rw_ios_per_sec": 0, 00:16:10.983 "rw_mbytes_per_sec": 0, 00:16:10.983 "r_mbytes_per_sec": 0, 00:16:10.983 "w_mbytes_per_sec": 0 00:16:10.983 }, 00:16:10.983 "claimed": false, 00:16:10.983 "zoned": false, 00:16:10.983 "supported_io_types": { 00:16:10.983 "read": true, 00:16:10.983 "write": true, 00:16:10.983 "unmap": true, 00:16:10.983 "flush": false, 00:16:10.983 "reset": true, 00:16:10.983 "nvme_admin": false, 00:16:10.983 "nvme_io": false, 00:16:10.983 "nvme_io_md": false, 00:16:10.983 "write_zeroes": true, 00:16:10.983 "zcopy": false, 00:16:10.983 "get_zone_info": false, 00:16:10.983 "zone_management": false, 00:16:10.983 "zone_append": false, 00:16:10.983 "compare": false, 00:16:10.983 "compare_and_write": false, 00:16:10.983 "abort": false, 00:16:10.983 "seek_hole": true, 00:16:10.983 "seek_data": true, 00:16:10.983 "copy": false, 00:16:10.983 "nvme_iov_md": false 00:16:10.983 }, 00:16:10.983 "driver_specific": { 00:16:10.983 "lvol": { 00:16:10.983 "lvol_store_uuid": "3534499c-7045-4bf4-ac75-be921f7584a0", 00:16:10.983 "base_bdev": "nvme0n1", 00:16:10.984 "thin_provision": true, 00:16:10.984 "num_allocated_clusters": 0, 00:16:10.984 "snapshot": false, 00:16:10.984 "clone": false, 00:16:10.984 "esnap_clone": false 00:16:10.984 } 00:16:10.984 } 00:16:10.984 } 00:16:10.984 ]' 00:16:10.984 01:29:06 ftl.ftl_trim -- 
common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:10.984 01:29:06 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:16:10.984 01:29:06 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:10.984 01:29:06 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:16:10.984 01:29:06 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:10.984 01:29:06 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:16:10.984 01:29:06 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:16:10.984 01:29:06 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:16:10.984 01:29:06 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:16:11.241 01:29:07 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:16:11.241 01:29:07 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:16:11.241 01:29:07 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size 81759933-019f-485c-a035-884ded5ccca8 00:16:11.242 01:29:07 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=81759933-019f-485c-a035-884ded5ccca8 00:16:11.242 01:29:07 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:11.242 01:29:07 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:16:11.242 01:29:07 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:16:11.242 01:29:07 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 81759933-019f-485c-a035-884ded5ccca8 00:16:11.500 01:29:07 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:11.500 { 00:16:11.500 "name": "81759933-019f-485c-a035-884ded5ccca8", 00:16:11.500 "aliases": [ 00:16:11.500 "lvs/nvme0n1p0" 00:16:11.500 ], 00:16:11.500 "product_name": "Logical Volume", 00:16:11.500 "block_size": 4096, 00:16:11.500 "num_blocks": 26476544, 00:16:11.500 "uuid": "81759933-019f-485c-a035-884ded5ccca8", 00:16:11.500 "assigned_rate_limits": { 00:16:11.500 "rw_ios_per_sec": 0, 00:16:11.500 "rw_mbytes_per_sec": 0, 00:16:11.500 "r_mbytes_per_sec": 0, 00:16:11.500 "w_mbytes_per_sec": 0 00:16:11.500 }, 00:16:11.500 "claimed": false, 00:16:11.500 "zoned": false, 00:16:11.500 "supported_io_types": { 00:16:11.500 "read": true, 00:16:11.500 "write": true, 00:16:11.500 "unmap": true, 00:16:11.500 "flush": false, 00:16:11.500 "reset": true, 00:16:11.500 "nvme_admin": false, 00:16:11.500 "nvme_io": false, 00:16:11.500 "nvme_io_md": false, 00:16:11.500 "write_zeroes": true, 00:16:11.500 "zcopy": false, 00:16:11.500 "get_zone_info": false, 00:16:11.500 "zone_management": false, 00:16:11.500 "zone_append": false, 00:16:11.500 "compare": false, 00:16:11.500 "compare_and_write": false, 00:16:11.500 "abort": false, 00:16:11.500 "seek_hole": true, 00:16:11.500 "seek_data": true, 00:16:11.500 "copy": false, 00:16:11.500 "nvme_iov_md": false 00:16:11.500 }, 00:16:11.500 "driver_specific": { 00:16:11.500 "lvol": { 00:16:11.500 "lvol_store_uuid": "3534499c-7045-4bf4-ac75-be921f7584a0", 00:16:11.500 "base_bdev": "nvme0n1", 00:16:11.500 "thin_provision": true, 00:16:11.500 "num_allocated_clusters": 0, 00:16:11.500 "snapshot": false, 00:16:11.500 "clone": false, 00:16:11.500 "esnap_clone": false 00:16:11.500 } 00:16:11.500 } 00:16:11.500 } 00:16:11.500 ]' 00:16:11.500 01:29:07 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:11.500 01:29:07 ftl.ftl_trim -- 
common/autotest_common.sh@1383 -- # bs=4096 00:16:11.500 01:29:07 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:11.500 01:29:07 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:16:11.500 01:29:07 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:11.500 01:29:07 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:16:11.500 01:29:07 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:16:11.500 01:29:07 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:16:11.758 01:29:07 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:16:11.758 01:29:07 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:16:11.758 01:29:07 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size 81759933-019f-485c-a035-884ded5ccca8 00:16:11.758 01:29:07 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=81759933-019f-485c-a035-884ded5ccca8 00:16:11.758 01:29:07 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:11.758 01:29:07 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:16:11.758 01:29:07 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:16:11.758 01:29:07 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 81759933-019f-485c-a035-884ded5ccca8 00:16:11.758 01:29:07 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:11.758 { 00:16:11.758 "name": "81759933-019f-485c-a035-884ded5ccca8", 00:16:11.758 "aliases": [ 00:16:11.758 "lvs/nvme0n1p0" 00:16:11.758 ], 00:16:11.758 "product_name": "Logical Volume", 00:16:11.758 "block_size": 4096, 00:16:11.758 "num_blocks": 26476544, 00:16:11.758 "uuid": "81759933-019f-485c-a035-884ded5ccca8", 00:16:11.758 "assigned_rate_limits": { 00:16:11.758 "rw_ios_per_sec": 0, 00:16:11.758 "rw_mbytes_per_sec": 0, 00:16:11.758 "r_mbytes_per_sec": 0, 00:16:11.758 "w_mbytes_per_sec": 0 00:16:11.758 }, 00:16:11.758 "claimed": false, 00:16:11.758 "zoned": false, 00:16:11.758 "supported_io_types": { 00:16:11.758 "read": true, 00:16:11.758 "write": true, 00:16:11.758 "unmap": true, 00:16:11.758 "flush": false, 00:16:11.758 "reset": true, 00:16:11.758 "nvme_admin": false, 00:16:11.758 "nvme_io": false, 00:16:11.758 "nvme_io_md": false, 00:16:11.758 "write_zeroes": true, 00:16:11.758 "zcopy": false, 00:16:11.758 "get_zone_info": false, 00:16:11.758 "zone_management": false, 00:16:11.758 "zone_append": false, 00:16:11.758 "compare": false, 00:16:11.758 "compare_and_write": false, 00:16:11.758 "abort": false, 00:16:11.758 "seek_hole": true, 00:16:11.758 "seek_data": true, 00:16:11.758 "copy": false, 00:16:11.758 "nvme_iov_md": false 00:16:11.758 }, 00:16:11.758 "driver_specific": { 00:16:11.758 "lvol": { 00:16:11.758 "lvol_store_uuid": "3534499c-7045-4bf4-ac75-be921f7584a0", 00:16:11.758 "base_bdev": "nvme0n1", 00:16:11.758 "thin_provision": true, 00:16:11.758 "num_allocated_clusters": 0, 00:16:11.758 "snapshot": false, 00:16:11.758 "clone": false, 00:16:11.758 "esnap_clone": false 00:16:11.758 } 00:16:11.758 } 00:16:11.758 } 00:16:11.758 ]' 00:16:11.758 01:29:07 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:12.017 01:29:07 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:16:12.017 01:29:07 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:12.017 01:29:07 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # 
nb=26476544 00:16:12.017 01:29:07 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:12.017 01:29:07 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:16:12.017 01:29:07 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:16:12.017 01:29:07 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 81759933-019f-485c-a035-884ded5ccca8 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:16:12.017 [2024-09-28 01:29:07.909107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:12.017 [2024-09-28 01:29:07.909142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:12.017 [2024-09-28 01:29:07.909155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:12.017 [2024-09-28 01:29:07.909164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:12.017 [2024-09-28 01:29:07.911388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:12.017 [2024-09-28 01:29:07.911496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:12.017 [2024-09-28 01:29:07.911512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.202 ms 00:16:12.017 [2024-09-28 01:29:07.911518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:12.017 [2024-09-28 01:29:07.911593] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:12.017 [2024-09-28 01:29:07.912116] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:12.017 [2024-09-28 01:29:07.912134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:12.017 [2024-09-28 01:29:07.912140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:12.017 [2024-09-28 01:29:07.912149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.548 ms 00:16:12.017 [2024-09-28 01:29:07.912156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:12.017 [2024-09-28 01:29:07.912254] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID d94572d0-4efc-4bf8-b84f-0a6faae4c084 00:16:12.017 [2024-09-28 01:29:07.913204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:12.017 [2024-09-28 01:29:07.913225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:16:12.017 [2024-09-28 01:29:07.913233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:16:12.017 [2024-09-28 01:29:07.913241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:12.017 [2024-09-28 01:29:07.918083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:12.017 [2024-09-28 01:29:07.918172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:12.017 [2024-09-28 01:29:07.918223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.791 ms 00:16:12.017 [2024-09-28 01:29:07.918243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:12.017 [2024-09-28 01:29:07.918352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:12.017 [2024-09-28 01:29:07.918410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:12.017 [2024-09-28 01:29:07.918448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.058 ms 00:16:12.017 [2024-09-28 01:29:07.918469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:12.017 [2024-09-28 01:29:07.918504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:12.017 [2024-09-28 01:29:07.918525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:12.017 [2024-09-28 01:29:07.918543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:16:12.017 [2024-09-28 01:29:07.918560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:12.017 [2024-09-28 01:29:07.918592] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:12.017 [2024-09-28 01:29:07.921580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:12.017 [2024-09-28 01:29:07.921662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:12.017 [2024-09-28 01:29:07.921723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.991 ms 00:16:12.017 [2024-09-28 01:29:07.921745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:12.017 [2024-09-28 01:29:07.921792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:12.017 [2024-09-28 01:29:07.921847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:12.017 [2024-09-28 01:29:07.921870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:16:12.017 [2024-09-28 01:29:07.921887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:12.017 [2024-09-28 01:29:07.921942] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:16:12.017 [2024-09-28 01:29:07.922061] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:12.017 [2024-09-28 01:29:07.922146] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:12.017 [2024-09-28 01:29:07.922183] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:16:12.017 [2024-09-28 01:29:07.922225] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:12.017 [2024-09-28 01:29:07.922281] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:12.017 [2024-09-28 01:29:07.922310] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:12.017 [2024-09-28 01:29:07.922326] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:12.017 [2024-09-28 01:29:07.922342] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:12.017 [2024-09-28 01:29:07.922357] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:12.017 [2024-09-28 01:29:07.922405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:12.017 [2024-09-28 01:29:07.922427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:12.017 [2024-09-28 01:29:07.922514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.464 ms 00:16:12.017 [2024-09-28 01:29:07.922534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:12.017 [2024-09-28 01:29:07.922631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:12.017 
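The @1378–@1388 trace lines repeated above are autotest_common.sh's get_bdev_size helper sizing a bdev in MiB from the bdev_get_bdevs JSON, and the traced values check out: 4096 B/block × 26476544 blocks = 108447924224 B = 103424 MiB. A minimal sketch of that helper as the trace implies it (the exact script text may differ):

get_bdev_size() {
    local bdev_name=$1 bdev_info bs nb
    bdev_info=$(scripts/rpc.py bdev_get_bdevs -b "$bdev_name")
    bs=$(jq '.[] .block_size' <<< "$bdev_info")   # @1383 -> bs=4096
    nb=$(jq '.[] .num_blocks' <<< "$bdev_info")   # @1384 -> nb=26476544
    echo $((bs * nb / 1024 / 1024))               # @1387 -> bdev_size=103424 (MiB)
}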
[2024-09-28 01:29:07.922654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:12.017 [2024-09-28 01:29:07.922740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:16:12.017 [2024-09-28 01:29:07.922758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:12.017 [2024-09-28 01:29:07.922856] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:12.017 [2024-09-28 01:29:07.922877] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:12.017 [2024-09-28 01:29:07.922923] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:12.017 [2024-09-28 01:29:07.922943] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:12.017 [2024-09-28 01:29:07.922960] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:12.017 [2024-09-28 01:29:07.922974] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:12.017 [2024-09-28 01:29:07.923014] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:12.017 [2024-09-28 01:29:07.923030] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:12.017 [2024-09-28 01:29:07.923048] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:12.017 [2024-09-28 01:29:07.923062] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:12.017 [2024-09-28 01:29:07.923077] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:12.017 [2024-09-28 01:29:07.923091] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:12.017 [2024-09-28 01:29:07.923145] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:12.017 [2024-09-28 01:29:07.923166] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:12.017 [2024-09-28 01:29:07.923182] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:16:12.017 [2024-09-28 01:29:07.923205] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:12.017 [2024-09-28 01:29:07.923224] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:12.017 [2024-09-28 01:29:07.923238] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:16:12.017 [2024-09-28 01:29:07.923253] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:12.017 [2024-09-28 01:29:07.923295] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:12.017 [2024-09-28 01:29:07.923342] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:12.017 [2024-09-28 01:29:07.923377] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:12.017 [2024-09-28 01:29:07.923396] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:12.017 [2024-09-28 01:29:07.923410] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:12.017 [2024-09-28 01:29:07.923426] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:12.017 [2024-09-28 01:29:07.923440] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:12.017 [2024-09-28 01:29:07.923455] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:12.017 [2024-09-28 01:29:07.923498] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:12.017 [2024-09-28 01:29:07.923518] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:16:12.017 [2024-09-28 01:29:07.923533] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:16:12.017 [2024-09-28 01:29:07.923548] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:12.017 [2024-09-28 01:29:07.923563] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:12.018 [2024-09-28 01:29:07.923579] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:16:12.018 [2024-09-28 01:29:07.923620] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:12.018 [2024-09-28 01:29:07.923639] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:12.018 [2024-09-28 01:29:07.923654] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:16:12.018 [2024-09-28 01:29:07.923669] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:12.018 [2024-09-28 01:29:07.923708] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:12.018 [2024-09-28 01:29:07.923727] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:16:12.018 [2024-09-28 01:29:07.923741] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:12.018 [2024-09-28 01:29:07.923757] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:12.018 [2024-09-28 01:29:07.923770] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:16:12.018 [2024-09-28 01:29:07.923817] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:12.018 [2024-09-28 01:29:07.923837] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:12.018 [2024-09-28 01:29:07.923854] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:12.018 [2024-09-28 01:29:07.923870] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:12.018 [2024-09-28 01:29:07.923887] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:12.018 [2024-09-28 01:29:07.923902] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:12.018 [2024-09-28 01:29:07.923918] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:12.018 [2024-09-28 01:29:07.923932] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:12.018 [2024-09-28 01:29:07.923948] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:12.018 [2024-09-28 01:29:07.923962] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:12.018 [2024-09-28 01:29:07.923977] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:12.018 [2024-09-28 01:29:07.923994] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:12.018 [2024-09-28 01:29:07.924020] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:12.018 [2024-09-28 01:29:07.924082] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:12.018 [2024-09-28 01:29:07.924111] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:16:12.018 [2024-09-28 01:29:07.924137] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:16:12.018 [2024-09-28 01:29:07.924163] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:16:12.018 [2024-09-28 01:29:07.924188] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:16:12.018 [2024-09-28 01:29:07.924224] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:16:12.018 [2024-09-28 01:29:07.924250] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:16:12.018 [2024-09-28 01:29:07.924309] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:16:12.018 [2024-09-28 01:29:07.924337] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:16:12.018 [2024-09-28 01:29:07.924365] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:16:12.018 [2024-09-28 01:29:07.924391] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:16:12.018 [2024-09-28 01:29:07.924414] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:16:12.018 [2024-09-28 01:29:07.924439] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:16:12.018 [2024-09-28 01:29:07.924496] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:16:12.018 [2024-09-28 01:29:07.924522] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:12.018 [2024-09-28 01:29:07.924548] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:12.018 [2024-09-28 01:29:07.924573] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:12.018 [2024-09-28 01:29:07.924600] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:12.018 [2024-09-28 01:29:07.924722] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:12.018 [2024-09-28 01:29:07.924748] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:12.018 [2024-09-28 01:29:07.924774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:12.018 [2024-09-28 01:29:07.924798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:12.018 [2024-09-28 01:29:07.924813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.972 ms 00:16:12.018 [2024-09-28 01:29:07.924859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:12.018 [2024-09-28 01:29:07.924930] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:16:12.018 [2024-09-28 01:29:07.925039] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:16:14.553 [2024-09-28 01:29:10.100399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.553 [2024-09-28 01:29:10.100597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:16:14.553 [2024-09-28 01:29:10.100676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2175.459 ms 00:16:14.553 [2024-09-28 01:29:10.100705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.553 [2024-09-28 01:29:10.139255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.553 [2024-09-28 01:29:10.139458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:14.553 [2024-09-28 01:29:10.139605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.191 ms 00:16:14.553 [2024-09-28 01:29:10.139651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.553 [2024-09-28 01:29:10.139919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.553 [2024-09-28 01:29:10.140023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:14.553 [2024-09-28 01:29:10.140092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:16:14.553 [2024-09-28 01:29:10.140124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.553 [2024-09-28 01:29:10.171470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.553 [2024-09-28 01:29:10.171596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:14.553 [2024-09-28 01:29:10.171657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.262 ms 00:16:14.553 [2024-09-28 01:29:10.171682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.553 [2024-09-28 01:29:10.171763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.553 [2024-09-28 01:29:10.172175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:14.553 [2024-09-28 01:29:10.172239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:14.553 [2024-09-28 01:29:10.172264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.553 [2024-09-28 01:29:10.172726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.553 [2024-09-28 01:29:10.172755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:14.553 [2024-09-28 01:29:10.172765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.295 ms 00:16:14.553 [2024-09-28 01:29:10.172774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.553 [2024-09-28 01:29:10.172940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.553 [2024-09-28 01:29:10.172960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:14.553 [2024-09-28 01:29:10.172979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.120 ms 00:16:14.553 [2024-09-28 01:29:10.172995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.553 [2024-09-28 01:29:10.187493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.553 [2024-09-28 01:29:10.187522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:16:14.553 [2024-09-28 01:29:10.187534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.468 ms 00:16:14.553 [2024-09-28 01:29:10.187543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.553 [2024-09-28 01:29:10.199024] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:14.553 [2024-09-28 01:29:10.213336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.553 [2024-09-28 01:29:10.213365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:14.553 [2024-09-28 01:29:10.213378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.693 ms 00:16:14.554 [2024-09-28 01:29:10.213386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.554 [2024-09-28 01:29:10.275080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.554 [2024-09-28 01:29:10.275122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:16:14.554 [2024-09-28 01:29:10.275138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 61.626 ms 00:16:14.554 [2024-09-28 01:29:10.275147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.554 [2024-09-28 01:29:10.275366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.554 [2024-09-28 01:29:10.275382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:14.554 [2024-09-28 01:29:10.275397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.138 ms 00:16:14.554 [2024-09-28 01:29:10.275405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.554 [2024-09-28 01:29:10.299104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.554 [2024-09-28 01:29:10.299136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:16:14.554 [2024-09-28 01:29:10.299150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.672 ms 00:16:14.554 [2024-09-28 01:29:10.299157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.554 [2024-09-28 01:29:10.322052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.554 [2024-09-28 01:29:10.322081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:16:14.554 [2024-09-28 01:29:10.322094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.829 ms 00:16:14.554 [2024-09-28 01:29:10.322101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.554 [2024-09-28 01:29:10.322691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.554 [2024-09-28 01:29:10.322711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:14.554 [2024-09-28 01:29:10.322722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.535 ms 00:16:14.554 [2024-09-28 01:29:10.322730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.554 [2024-09-28 01:29:10.388430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.554 [2024-09-28 01:29:10.388564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:16:14.554 [2024-09-28 01:29:10.388588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.665 ms 00:16:14.554 [2024-09-28 01:29:10.388597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
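The L2P figures in the startup trace above can be cross-checked with a little shell arithmetic; a sketch assuming the usual FTL sizing (one 4-byte L2P entry per 4 KiB user block of the data region, after the 10% overprovisioning requested at create time):

# data_btm is 102400 MiB; minus 10% OP that leaves 92160 MiB of user
# space in 4 KiB blocks, one L2P entry each:
echo $((102400 * 1024 / 4 * 90 / 100))   # -> 23592960 (L2P entries)
# at 4 bytes per entry the full table is 90 MiB ("Region l2p ... 90.00 MiB"),
# held under --l2p_dram_limit 60 by the 59-of-60 MiB resident cache:
echo $((23592960 * 4 / 1024 / 1024))     # -> 90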
00:16:14.554 [2024-09-28 01:29:10.413126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.554 [2024-09-28 01:29:10.413161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:16:14.554 [2024-09-28 01:29:10.413174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.444 ms 00:16:14.554 [2024-09-28 01:29:10.413182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.554 [2024-09-28 01:29:10.436446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.554 [2024-09-28 01:29:10.436476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:16:14.554 [2024-09-28 01:29:10.436489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.196 ms 00:16:14.554 [2024-09-28 01:29:10.436497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.554 [2024-09-28 01:29:10.460164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.554 [2024-09-28 01:29:10.460296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:14.554 [2024-09-28 01:29:10.460316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.598 ms 00:16:14.554 [2024-09-28 01:29:10.460323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.554 [2024-09-28 01:29:10.460381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.554 [2024-09-28 01:29:10.460395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:14.554 [2024-09-28 01:29:10.460408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:14.554 [2024-09-28 01:29:10.460428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.554 [2024-09-28 01:29:10.460502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.554 [2024-09-28 01:29:10.460515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:14.554 [2024-09-28 01:29:10.460526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:16:14.554 [2024-09-28 01:29:10.460533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.554 [2024-09-28 01:29:10.461377] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:14.554 { 00:16:14.554 "name": "ftl0", 00:16:14.554 "uuid": "d94572d0-4efc-4bf8-b84f-0a6faae4c084" 00:16:14.554 } 00:16:14.554 [2024-09-28 01:29:10.464228] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2551.918 ms, result 0 00:16:14.554 [2024-09-28 01:29:10.465067] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:14.811 01:29:10 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:16:14.811 01:29:10 ftl.ftl_trim -- common/autotest_common.sh@899 -- # local bdev_name=ftl0 00:16:14.811 01:29:10 ftl.ftl_trim -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:14.811 01:29:10 ftl.ftl_trim -- common/autotest_common.sh@901 -- # local i 00:16:14.811 01:29:10 ftl.ftl_trim -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:14.811 01:29:10 ftl.ftl_trim -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:14.811 01:29:10 ftl.ftl_trim -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:14.811 01:29:10 ftl.ftl_trim -- 
common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:16:15.068 [ 00:16:15.068 { 00:16:15.068 "name": "ftl0", 00:16:15.068 "aliases": [ 00:16:15.068 "d94572d0-4efc-4bf8-b84f-0a6faae4c084" 00:16:15.068 ], 00:16:15.068 "product_name": "FTL disk", 00:16:15.068 "block_size": 4096, 00:16:15.068 "num_blocks": 23592960, 00:16:15.068 "uuid": "d94572d0-4efc-4bf8-b84f-0a6faae4c084", 00:16:15.068 "assigned_rate_limits": { 00:16:15.068 "rw_ios_per_sec": 0, 00:16:15.068 "rw_mbytes_per_sec": 0, 00:16:15.068 "r_mbytes_per_sec": 0, 00:16:15.068 "w_mbytes_per_sec": 0 00:16:15.068 }, 00:16:15.068 "claimed": false, 00:16:15.068 "zoned": false, 00:16:15.068 "supported_io_types": { 00:16:15.068 "read": true, 00:16:15.068 "write": true, 00:16:15.068 "unmap": true, 00:16:15.068 "flush": true, 00:16:15.068 "reset": false, 00:16:15.068 "nvme_admin": false, 00:16:15.068 "nvme_io": false, 00:16:15.068 "nvme_io_md": false, 00:16:15.068 "write_zeroes": true, 00:16:15.068 "zcopy": false, 00:16:15.068 "get_zone_info": false, 00:16:15.068 "zone_management": false, 00:16:15.068 "zone_append": false, 00:16:15.068 "compare": false, 00:16:15.068 "compare_and_write": false, 00:16:15.068 "abort": false, 00:16:15.068 "seek_hole": false, 00:16:15.068 "seek_data": false, 00:16:15.068 "copy": false, 00:16:15.068 "nvme_iov_md": false 00:16:15.068 }, 00:16:15.068 "driver_specific": { 00:16:15.068 "ftl": { 00:16:15.068 "base_bdev": "81759933-019f-485c-a035-884ded5ccca8", 00:16:15.068 "cache": "nvc0n1p0" 00:16:15.068 } 00:16:15.068 } 00:16:15.068 } 00:16:15.068 ] 00:16:15.068 01:29:10 ftl.ftl_trim -- common/autotest_common.sh@907 -- # return 0 00:16:15.068 01:29:10 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:16:15.068 01:29:10 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:16:15.326 01:29:11 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:16:15.326 01:29:11 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:16:15.583 01:29:11 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:16:15.583 { 00:16:15.583 "name": "ftl0", 00:16:15.583 "aliases": [ 00:16:15.583 "d94572d0-4efc-4bf8-b84f-0a6faae4c084" 00:16:15.583 ], 00:16:15.583 "product_name": "FTL disk", 00:16:15.583 "block_size": 4096, 00:16:15.583 "num_blocks": 23592960, 00:16:15.583 "uuid": "d94572d0-4efc-4bf8-b84f-0a6faae4c084", 00:16:15.583 "assigned_rate_limits": { 00:16:15.583 "rw_ios_per_sec": 0, 00:16:15.583 "rw_mbytes_per_sec": 0, 00:16:15.583 "r_mbytes_per_sec": 0, 00:16:15.583 "w_mbytes_per_sec": 0 00:16:15.583 }, 00:16:15.583 "claimed": false, 00:16:15.583 "zoned": false, 00:16:15.583 "supported_io_types": { 00:16:15.583 "read": true, 00:16:15.583 "write": true, 00:16:15.583 "unmap": true, 00:16:15.583 "flush": true, 00:16:15.583 "reset": false, 00:16:15.583 "nvme_admin": false, 00:16:15.583 "nvme_io": false, 00:16:15.583 "nvme_io_md": false, 00:16:15.583 "write_zeroes": true, 00:16:15.583 "zcopy": false, 00:16:15.583 "get_zone_info": false, 00:16:15.583 "zone_management": false, 00:16:15.583 "zone_append": false, 00:16:15.583 "compare": false, 00:16:15.583 "compare_and_write": false, 00:16:15.583 "abort": false, 00:16:15.583 "seek_hole": false, 00:16:15.583 "seek_data": false, 00:16:15.583 "copy": false, 00:16:15.583 "nvme_iov_md": false 00:16:15.583 }, 00:16:15.583 "driver_specific": { 00:16:15.583 "ftl": { 00:16:15.583 "base_bdev": "81759933-019f-485c-a035-884ded5ccca8", 
00:16:15.583 "cache": "nvc0n1p0" 00:16:15.583 } 00:16:15.583 } 00:16:15.583 } 00:16:15.583 ]' 00:16:15.583 01:29:11 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:16:15.583 01:29:11 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:16:15.583 01:29:11 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:16:15.583 [2024-09-28 01:29:11.496587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:15.583 [2024-09-28 01:29:11.496628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:15.583 [2024-09-28 01:29:11.496640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:15.583 [2024-09-28 01:29:11.496651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:15.583 [2024-09-28 01:29:11.496679] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:15.583 [2024-09-28 01:29:11.499268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:15.583 [2024-09-28 01:29:11.499402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:15.583 [2024-09-28 01:29:11.499425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.571 ms 00:16:15.583 [2024-09-28 01:29:11.499433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:15.583 [2024-09-28 01:29:11.499931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:15.583 [2024-09-28 01:29:11.499951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:15.583 [2024-09-28 01:29:11.499961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.466 ms 00:16:15.583 [2024-09-28 01:29:11.499969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:15.583 [2024-09-28 01:29:11.503764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:15.583 [2024-09-28 01:29:11.503837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:15.583 [2024-09-28 01:29:11.503901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.766 ms 00:16:15.584 [2024-09-28 01:29:11.503928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:15.584 [2024-09-28 01:29:11.511062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:15.584 [2024-09-28 01:29:11.511160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:16:15.584 [2024-09-28 01:29:11.511239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.077 ms 00:16:15.584 [2024-09-28 01:29:11.511266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:15.842 [2024-09-28 01:29:11.535126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:15.842 [2024-09-28 01:29:11.535250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:15.842 [2024-09-28 01:29:11.535315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.756 ms 00:16:15.842 [2024-09-28 01:29:11.535342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:15.842 [2024-09-28 01:29:11.550390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:15.842 [2024-09-28 01:29:11.550507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:15.842 [2024-09-28 01:29:11.550568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 14.979 ms 00:16:15.842 [2024-09-28 01:29:11.550595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:15.842 [2024-09-28 01:29:11.550860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:15.842 [2024-09-28 01:29:11.550926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:15.842 [2024-09-28 01:29:11.550979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.116 ms 00:16:15.842 [2024-09-28 01:29:11.551005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:15.842 [2024-09-28 01:29:11.574640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:15.842 [2024-09-28 01:29:11.574745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:16:15.842 [2024-09-28 01:29:11.574803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.564 ms 00:16:15.842 [2024-09-28 01:29:11.574829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:15.842 [2024-09-28 01:29:11.597810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:15.842 [2024-09-28 01:29:11.597914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:16:15.842 [2024-09-28 01:29:11.597973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.858 ms 00:16:15.842 [2024-09-28 01:29:11.597999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:15.842 [2024-09-28 01:29:11.621388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:15.842 [2024-09-28 01:29:11.621492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:15.842 [2024-09-28 01:29:11.621549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.242 ms 00:16:15.842 [2024-09-28 01:29:11.621576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:15.842 [2024-09-28 01:29:11.643911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:15.842 [2024-09-28 01:29:11.644014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:15.842 [2024-09-28 01:29:11.644069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.195 ms 00:16:15.842 [2024-09-28 01:29:11.644094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:15.842 [2024-09-28 01:29:11.644238] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:15.842 [2024-09-28 01:29:11.644277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:15.842 [2024-09-28 01:29:11.644362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:15.842 [2024-09-28 01:29:11.644397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:15.842 [2024-09-28 01:29:11.644458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:15.842 [2024-09-28 01:29:11.644522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:15.842 [2024-09-28 01:29:11.644563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:15.842 [2024-09-28 01:29:11.644619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:15.842 [2024-09-28 01:29:11.644655] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:15.842 [2024-09-28 01:29:11.644688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:15.842 [2024-09-28 01:29:11.644832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:15.842 [2024-09-28 01:29:11.644885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:15.842 [2024-09-28 01:29:11.645021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:15.842 [2024-09-28 01:29:11.645057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:15.842 [2024-09-28 01:29:11.645092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:15.842 [2024-09-28 01:29:11.645125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:15.842 [2024-09-28 01:29:11.645241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:15.842 [2024-09-28 01:29:11.645275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:15.842 [2024-09-28 01:29:11.645310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:15.842 [2024-09-28 01:29:11.645375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:15.842 [2024-09-28 01:29:11.645416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:15.842 [2024-09-28 01:29:11.645449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:15.842 [2024-09-28 01:29:11.645486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:15.842 [2024-09-28 01:29:11.645559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:15.842 [2024-09-28 01:29:11.645595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:15.842 [2024-09-28 01:29:11.645629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:15.842 [2024-09-28 01:29:11.645673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:15.842 [2024-09-28 01:29:11.645761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:15.842 [2024-09-28 01:29:11.645794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:15.842 [2024-09-28 01:29:11.645827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:15.842 [2024-09-28 01:29:11.645895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:15.842 [2024-09-28 01:29:11.645961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:15.842 [2024-09-28 01:29:11.646016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:15.842 
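While the band dump continues, the trim.sh@51–@59 steps traced above are what hand the freshly created ftl0 to the rest of the test. A sketch of that sequence, with the JSON output path assumed from the spdk_dd invocation later in this log:

# waitforbdev ftl0: block until examine completes and ftl0 is visible
scripts/rpc.py bdev_wait_for_examine
scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 > /dev/null
# capture the bdev subsystem as a standalone JSON config for spdk_dd
{
    echo '{"subsystems": ['
    scripts/rpc.py save_subsystem_config -n bdev
    echo ']}'
} > test/ftl/config/ftl.json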
[2024-09-28 01:29:11.646051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:15.842 [2024-09-28 01:29:11.646086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:15.842 [2024-09-28 01:29:11.646118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:15.842 [2024-09-28 01:29:11.646222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:15.842 [2024-09-28 01:29:11.646256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:15.842 [2024-09-28 01:29:11.646292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:15.842 [2024-09-28 01:29:11.646325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:15.842 [2024-09-28 01:29:11.646395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:15.842 [2024-09-28 01:29:11.646431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:15.842 [2024-09-28 01:29:11.646466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:15.842 [2024-09-28 01:29:11.646498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:15.842 [2024-09-28 01:29:11.646566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:15.842 [2024-09-28 01:29:11.646598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:15.842 [2024-09-28 01:29:11.646632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:15.842 [2024-09-28 01:29:11.646718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:15.842 [2024-09-28 01:29:11.646780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:15.842 [2024-09-28 01:29:11.646814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:15.842 [2024-09-28 01:29:11.646849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:15.842 [2024-09-28 01:29:11.646962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:15.842 [2024-09-28 01:29:11.646998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:15.842 [2024-09-28 01:29:11.647031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:15.842 [2024-09-28 01:29:11.647097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:15.842 [2024-09-28 01:29:11.647132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:15.842 [2024-09-28 01:29:11.647167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:15.842 [2024-09-28 01:29:11.647209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 
state: free 00:16:15.843 [2024-09-28 01:29:11.647278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:15.843 [2024-09-28 01:29:11.647312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:15.843 [2024-09-28 01:29:11.647346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:15.843 [2024-09-28 01:29:11.647379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:15.843 [2024-09-28 01:29:11.647448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:15.843 [2024-09-28 01:29:11.647482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:15.843 [2024-09-28 01:29:11.647516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:15.843 [2024-09-28 01:29:11.647579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:15.843 [2024-09-28 01:29:11.647614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:15.843 [2024-09-28 01:29:11.647647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:15.843 [2024-09-28 01:29:11.647726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:15.843 [2024-09-28 01:29:11.647759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:15.843 [2024-09-28 01:29:11.647796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:15.843 [2024-09-28 01:29:11.647860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:15.843 [2024-09-28 01:29:11.647899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:15.843 [2024-09-28 01:29:11.647933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:15.843 [2024-09-28 01:29:11.647967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:15.843 [2024-09-28 01:29:11.648024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:15.843 [2024-09-28 01:29:11.648036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:15.843 [2024-09-28 01:29:11.648044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:15.843 [2024-09-28 01:29:11.648054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:15.843 [2024-09-28 01:29:11.648061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:15.843 [2024-09-28 01:29:11.648070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:15.843 [2024-09-28 01:29:11.648077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:15.843 [2024-09-28 01:29:11.648086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 
0 / 261120 wr_cnt: 0 state: free 00:16:15.843 [2024-09-28 01:29:11.648094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:15.843 [2024-09-28 01:29:11.648103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:15.843 [2024-09-28 01:29:11.648110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:15.843 [2024-09-28 01:29:11.648121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:15.843 [2024-09-28 01:29:11.648128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:15.843 [2024-09-28 01:29:11.648137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:15.843 [2024-09-28 01:29:11.648144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:15.843 [2024-09-28 01:29:11.648154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:15.843 [2024-09-28 01:29:11.648161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:15.843 [2024-09-28 01:29:11.648170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:15.843 [2024-09-28 01:29:11.648177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:15.843 [2024-09-28 01:29:11.648186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:15.843 [2024-09-28 01:29:11.648214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:15.843 [2024-09-28 01:29:11.648224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:15.843 [2024-09-28 01:29:11.648232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:15.843 [2024-09-28 01:29:11.648243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:15.843 [2024-09-28 01:29:11.648250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:15.843 [2024-09-28 01:29:11.648259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:15.843 [2024-09-28 01:29:11.648275] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:15.843 [2024-09-28 01:29:11.648286] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d94572d0-4efc-4bf8-b84f-0a6faae4c084 00:16:15.843 [2024-09-28 01:29:11.648294] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:15.843 [2024-09-28 01:29:11.648303] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:15.843 [2024-09-28 01:29:11.648311] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:15.843 [2024-09-28 01:29:11.648320] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:15.843 [2024-09-28 01:29:11.648327] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:15.843 [2024-09-28 01:29:11.648336] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 
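One line of the statistics dump above is worth decoding, assuming the usual definition of write amplification:

# WAF = total writes / user writes = 960 / 0 -> inf
# No user I/O has touched ftl0 yet, so all 960 writes are FTL metadata;
# "inf" is the expected reading for a bare create-then-unload cycle.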
00:16:15.843 [2024-09-28 01:29:11.648343] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:15.843 [2024-09-28 01:29:11.648351] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:15.843 [2024-09-28 01:29:11.648358] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:15.843 [2024-09-28 01:29:11.648367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:15.843 [2024-09-28 01:29:11.648374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:15.843 [2024-09-28 01:29:11.648384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.131 ms 00:16:15.843 [2024-09-28 01:29:11.648393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:15.843 [2024-09-28 01:29:11.661065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:15.843 [2024-09-28 01:29:11.661161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:15.843 [2024-09-28 01:29:11.661248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.614 ms 00:16:15.843 [2024-09-28 01:29:11.661335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:15.843 [2024-09-28 01:29:11.661726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:15.843 [2024-09-28 01:29:11.661808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:15.843 [2024-09-28 01:29:11.661868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.314 ms 00:16:15.843 [2024-09-28 01:29:11.661916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:15.843 [2024-09-28 01:29:11.706268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:15.843 [2024-09-28 01:29:11.706379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:15.843 [2024-09-28 01:29:11.706437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:15.843 [2024-09-28 01:29:11.706464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:15.843 [2024-09-28 01:29:11.706585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:15.843 [2024-09-28 01:29:11.706645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:15.843 [2024-09-28 01:29:11.706678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:15.843 [2024-09-28 01:29:11.706763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:15.843 [2024-09-28 01:29:11.706844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:15.843 [2024-09-28 01:29:11.706871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:15.843 [2024-09-28 01:29:11.706947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:15.843 [2024-09-28 01:29:11.707002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:15.843 [2024-09-28 01:29:11.707055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:15.843 [2024-09-28 01:29:11.707079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:15.843 [2024-09-28 01:29:11.707105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:15.843 [2024-09-28 01:29:11.707271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.100 [2024-09-28 01:29:11.788690] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:16.100 [2024-09-28 01:29:11.788870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:16.100 [2024-09-28 01:29:11.788948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:16.100 [2024-09-28 01:29:11.788977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.100 [2024-09-28 01:29:11.853039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:16.100 [2024-09-28 01:29:11.853182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:16.100 [2024-09-28 01:29:11.853249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:16.100 [2024-09-28 01:29:11.853280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.100 [2024-09-28 01:29:11.853397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:16.100 [2024-09-28 01:29:11.853469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:16.100 [2024-09-28 01:29:11.853501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:16.100 [2024-09-28 01:29:11.853524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.100 [2024-09-28 01:29:11.853629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:16.100 [2024-09-28 01:29:11.853687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:16.100 [2024-09-28 01:29:11.853751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:16.100 [2024-09-28 01:29:11.853777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.100 [2024-09-28 01:29:11.853916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:16.100 [2024-09-28 01:29:11.854055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:16.100 [2024-09-28 01:29:11.854085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:16.100 [2024-09-28 01:29:11.854104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.100 [2024-09-28 01:29:11.854187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:16.101 [2024-09-28 01:29:11.854316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:16.101 [2024-09-28 01:29:11.854342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:16.101 [2024-09-28 01:29:11.854362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.101 [2024-09-28 01:29:11.854431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:16.101 [2024-09-28 01:29:11.854497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:16.101 [2024-09-28 01:29:11.854554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:16.101 [2024-09-28 01:29:11.854573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.101 [2024-09-28 01:29:11.854640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:16.101 [2024-09-28 01:29:11.854666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:16.101 [2024-09-28 01:29:11.854687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:16.101 [2024-09-28 01:29:11.854754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:16:16.101 [2024-09-28 01:29:11.854967] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 358.358 ms, result 0 00:16:16.101 true 00:16:16.101 01:29:11 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 73914 00:16:16.101 01:29:11 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 73914 ']' 00:16:16.101 01:29:11 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 73914 00:16:16.101 01:29:11 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname 00:16:16.101 01:29:11 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:16.101 01:29:11 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73914 00:16:16.101 killing process with pid 73914 00:16:16.101 01:29:11 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:16.101 01:29:11 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:16.101 01:29:11 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73914' 00:16:16.101 01:29:11 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 73914 00:16:16.101 01:29:11 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 73914 00:16:22.658 01:29:17 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:16:23.231 65536+0 records in 00:16:23.231 65536+0 records out 00:16:23.231 268435456 bytes (268 MB, 256 MiB) copied, 1.07059 s, 251 MB/s 00:16:23.231 01:29:19 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:23.231 [2024-09-28 01:29:19.074793] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
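The trim.sh@66 step above times a 256 MiB pull from /dev/urandom before spdk_dd replays it against the ftl0 bdev. A minimal sketch of that step, assuming the redirection into the random_pattern file that bash xtrace does not print (the path is the one the spdk_dd invocation reads next):

    # 65536 * 4 KiB = 268435456 bytes (256 MiB) of random data;
    # the output redirection is assumed, since xtrace omits redirections.
    dd if=/dev/urandom bs=4K count=65536 > /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern
    # dd reports decimal megabytes: 268435456 B / 1.07059 s = ~251 MB/s, as logged above.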
00:16:23.231 [2024-09-28 01:29:19.074918] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74097 ] 00:16:23.491 [2024-09-28 01:29:19.240460] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:23.751 [2024-09-28 01:29:19.431526] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:24.013 [2024-09-28 01:29:19.691801] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:24.013 [2024-09-28 01:29:19.691887] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:24.013 [2024-09-28 01:29:19.855425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.013 [2024-09-28 01:29:19.855512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:24.013 [2024-09-28 01:29:19.855532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:24.013 [2024-09-28 01:29:19.855542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.013 [2024-09-28 01:29:19.859299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.013 [2024-09-28 01:29:19.859362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:24.013 [2024-09-28 01:29:19.859375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.733 ms 00:16:24.013 [2024-09-28 01:29:19.859387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.013 [2024-09-28 01:29:19.859544] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:24.013 [2024-09-28 01:29:19.860759] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:24.013 [2024-09-28 01:29:19.860830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.013 [2024-09-28 01:29:19.860845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:24.013 [2024-09-28 01:29:19.860856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.297 ms 00:16:24.013 [2024-09-28 01:29:19.860864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.013 [2024-09-28 01:29:19.862739] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:16:24.013 [2024-09-28 01:29:19.876984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.013 [2024-09-28 01:29:19.877040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:16:24.013 [2024-09-28 01:29:19.877056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.247 ms 00:16:24.013 [2024-09-28 01:29:19.877065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.013 [2024-09-28 01:29:19.877218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.013 [2024-09-28 01:29:19.877233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:16:24.013 [2024-09-28 01:29:19.877247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:16:24.013 [2024-09-28 01:29:19.877256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.013 [2024-09-28 01:29:19.885713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:16:24.013 [2024-09-28 01:29:19.885760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:24.013 [2024-09-28 01:29:19.885771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.407 ms 00:16:24.013 [2024-09-28 01:29:19.885779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.013 [2024-09-28 01:29:19.885899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.013 [2024-09-28 01:29:19.885913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:24.013 [2024-09-28 01:29:19.885922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:16:24.013 [2024-09-28 01:29:19.885930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.014 [2024-09-28 01:29:19.885959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.014 [2024-09-28 01:29:19.885968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:24.014 [2024-09-28 01:29:19.885977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:16:24.014 [2024-09-28 01:29:19.885985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.014 [2024-09-28 01:29:19.886009] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:24.014 [2024-09-28 01:29:19.890245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.014 [2024-09-28 01:29:19.890288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:24.014 [2024-09-28 01:29:19.890307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.243 ms 00:16:24.014 [2024-09-28 01:29:19.890315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.014 [2024-09-28 01:29:19.890395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.014 [2024-09-28 01:29:19.890410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:24.014 [2024-09-28 01:29:19.890420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:16:24.014 [2024-09-28 01:29:19.890429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.014 [2024-09-28 01:29:19.890454] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:16:24.014 [2024-09-28 01:29:19.890475] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:16:24.014 [2024-09-28 01:29:19.890513] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:16:24.014 [2024-09-28 01:29:19.890531] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:16:24.014 [2024-09-28 01:29:19.890641] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:24.014 [2024-09-28 01:29:19.890653] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:24.014 [2024-09-28 01:29:19.890666] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:16:24.014 [2024-09-28 01:29:19.890677] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:24.014 [2024-09-28 01:29:19.890687] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:24.014 [2024-09-28 01:29:19.890696] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:24.014 [2024-09-28 01:29:19.890705] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:24.014 [2024-09-28 01:29:19.890713] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:24.014 [2024-09-28 01:29:19.890721] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:24.014 [2024-09-28 01:29:19.890730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.014 [2024-09-28 01:29:19.890741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:24.014 [2024-09-28 01:29:19.890749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.279 ms 00:16:24.014 [2024-09-28 01:29:19.890758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.014 [2024-09-28 01:29:19.890847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.014 [2024-09-28 01:29:19.890857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:24.014 [2024-09-28 01:29:19.890865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:16:24.014 [2024-09-28 01:29:19.890874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.014 [2024-09-28 01:29:19.890973] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:24.014 [2024-09-28 01:29:19.890984] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:24.014 [2024-09-28 01:29:19.890996] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:24.014 [2024-09-28 01:29:19.891004] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:24.014 [2024-09-28 01:29:19.891014] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:24.014 [2024-09-28 01:29:19.891021] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:24.014 [2024-09-28 01:29:19.891028] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:24.014 [2024-09-28 01:29:19.891037] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:24.014 [2024-09-28 01:29:19.891045] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:24.014 [2024-09-28 01:29:19.891052] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:24.014 [2024-09-28 01:29:19.891059] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:24.014 [2024-09-28 01:29:19.891075] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:24.014 [2024-09-28 01:29:19.891082] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:24.014 [2024-09-28 01:29:19.891089] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:24.014 [2024-09-28 01:29:19.891096] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:16:24.014 [2024-09-28 01:29:19.891102] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:24.014 [2024-09-28 01:29:19.891110] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:24.014 [2024-09-28 01:29:19.891117] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:16:24.014 [2024-09-28 01:29:19.891123] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:24.014 [2024-09-28 01:29:19.891130] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:24.014 [2024-09-28 01:29:19.891137] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:24.014 [2024-09-28 01:29:19.891144] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:24.014 [2024-09-28 01:29:19.891151] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:24.014 [2024-09-28 01:29:19.891158] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:24.014 [2024-09-28 01:29:19.891167] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:24.014 [2024-09-28 01:29:19.891175] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:24.014 [2024-09-28 01:29:19.891182] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:24.014 [2024-09-28 01:29:19.891188] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:24.014 [2024-09-28 01:29:19.891219] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:24.014 [2024-09-28 01:29:19.891227] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:16:24.014 [2024-09-28 01:29:19.891234] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:24.014 [2024-09-28 01:29:19.891241] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:24.014 [2024-09-28 01:29:19.891248] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:16:24.014 [2024-09-28 01:29:19.891256] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:24.014 [2024-09-28 01:29:19.891263] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:24.014 [2024-09-28 01:29:19.891270] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:16:24.014 [2024-09-28 01:29:19.891276] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:24.014 [2024-09-28 01:29:19.891283] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:24.014 [2024-09-28 01:29:19.891290] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:16:24.014 [2024-09-28 01:29:19.891298] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:24.014 [2024-09-28 01:29:19.891305] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:24.014 [2024-09-28 01:29:19.891312] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:16:24.014 [2024-09-28 01:29:19.891320] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:24.014 [2024-09-28 01:29:19.891327] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:24.014 [2024-09-28 01:29:19.891336] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:24.014 [2024-09-28 01:29:19.891344] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:24.014 [2024-09-28 01:29:19.891352] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:24.014 [2024-09-28 01:29:19.891362] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:24.014 [2024-09-28 01:29:19.891370] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:24.014 [2024-09-28 01:29:19.891378] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:24.014 
[2024-09-28 01:29:19.891385] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:24.014 [2024-09-28 01:29:19.891392] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:24.014 [2024-09-28 01:29:19.891399] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:24.014 [2024-09-28 01:29:19.891408] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:24.014 [2024-09-28 01:29:19.891422] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:24.014 [2024-09-28 01:29:19.891431] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:24.014 [2024-09-28 01:29:19.891439] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:16:24.014 [2024-09-28 01:29:19.891448] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:16:24.014 [2024-09-28 01:29:19.891454] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:16:24.014 [2024-09-28 01:29:19.891462] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:16:24.014 [2024-09-28 01:29:19.891470] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:16:24.014 [2024-09-28 01:29:19.891478] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:16:24.014 [2024-09-28 01:29:19.891485] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:16:24.014 [2024-09-28 01:29:19.891492] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:16:24.014 [2024-09-28 01:29:19.891499] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:16:24.014 [2024-09-28 01:29:19.891507] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:16:24.015 [2024-09-28 01:29:19.891515] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:16:24.015 [2024-09-28 01:29:19.891522] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:16:24.015 [2024-09-28 01:29:19.891529] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:16:24.015 [2024-09-28 01:29:19.891537] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:24.015 [2024-09-28 01:29:19.891546] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:24.015 [2024-09-28 01:29:19.891555] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:16:24.015 [2024-09-28 01:29:19.891562] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:24.015 [2024-09-28 01:29:19.891569] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:24.015 [2024-09-28 01:29:19.891577] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:24.015 [2024-09-28 01:29:19.891585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.015 [2024-09-28 01:29:19.891596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:24.015 [2024-09-28 01:29:19.891614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.681 ms 00:16:24.015 [2024-09-28 01:29:19.891622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.015 [2024-09-28 01:29:19.933012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.015 [2024-09-28 01:29:19.933088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:24.015 [2024-09-28 01:29:19.933104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.330 ms 00:16:24.015 [2024-09-28 01:29:19.933114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.015 [2024-09-28 01:29:19.933337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.015 [2024-09-28 01:29:19.933361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:24.015 [2024-09-28 01:29:19.933371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:16:24.015 [2024-09-28 01:29:19.933379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.276 [2024-09-28 01:29:19.969024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.276 [2024-09-28 01:29:19.969080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:24.276 [2024-09-28 01:29:19.969094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.619 ms 00:16:24.276 [2024-09-28 01:29:19.969102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.276 [2024-09-28 01:29:19.969248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.276 [2024-09-28 01:29:19.969260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:24.276 [2024-09-28 01:29:19.969271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:24.276 [2024-09-28 01:29:19.969287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.276 [2024-09-28 01:29:19.969867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.276 [2024-09-28 01:29:19.969902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:24.276 [2024-09-28 01:29:19.969912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.551 ms 00:16:24.276 [2024-09-28 01:29:19.969921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.276 [2024-09-28 01:29:19.970084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.276 [2024-09-28 01:29:19.970094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:24.276 [2024-09-28 01:29:19.970103] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.129 ms 00:16:24.276 [2024-09-28 01:29:19.970111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.276 [2024-09-28 01:29:19.985643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.276 [2024-09-28 01:29:19.985889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:24.276 [2024-09-28 01:29:19.985912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.508 ms 00:16:24.276 [2024-09-28 01:29:19.985920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.276 [2024-09-28 01:29:20.000632] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:16:24.276 [2024-09-28 01:29:20.000852] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:16:24.276 [2024-09-28 01:29:20.000883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.276 [2024-09-28 01:29:20.000893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:16:24.276 [2024-09-28 01:29:20.000902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.816 ms 00:16:24.276 [2024-09-28 01:29:20.000911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.276 [2024-09-28 01:29:20.027224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.276 [2024-09-28 01:29:20.027469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:16:24.277 [2024-09-28 01:29:20.027495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.216 ms 00:16:24.277 [2024-09-28 01:29:20.027512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.277 [2024-09-28 01:29:20.040883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.277 [2024-09-28 01:29:20.040942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:16:24.277 [2024-09-28 01:29:20.040955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.257 ms 00:16:24.277 [2024-09-28 01:29:20.040964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.277 [2024-09-28 01:29:20.053632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.277 [2024-09-28 01:29:20.053682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:16:24.277 [2024-09-28 01:29:20.053694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.569 ms 00:16:24.277 [2024-09-28 01:29:20.053701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.277 [2024-09-28 01:29:20.054433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.277 [2024-09-28 01:29:20.054460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:24.277 [2024-09-28 01:29:20.054471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.603 ms 00:16:24.277 [2024-09-28 01:29:20.054479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.277 [2024-09-28 01:29:20.122077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.277 [2024-09-28 01:29:20.122149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:16:24.277 [2024-09-28 01:29:20.122166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 67.569 ms 00:16:24.277 [2024-09-28 01:29:20.122176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.277 [2024-09-28 01:29:20.134689] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:24.277 [2024-09-28 01:29:20.155218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.277 [2024-09-28 01:29:20.155287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:24.277 [2024-09-28 01:29:20.155302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.896 ms 00:16:24.277 [2024-09-28 01:29:20.155311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.277 [2024-09-28 01:29:20.155436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.277 [2024-09-28 01:29:20.155449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:16:24.277 [2024-09-28 01:29:20.155458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:16:24.277 [2024-09-28 01:29:20.155467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.277 [2024-09-28 01:29:20.155531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.277 [2024-09-28 01:29:20.155542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:24.277 [2024-09-28 01:29:20.155553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:16:24.277 [2024-09-28 01:29:20.155561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.277 [2024-09-28 01:29:20.155585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.277 [2024-09-28 01:29:20.155594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:24.277 [2024-09-28 01:29:20.155602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:24.277 [2024-09-28 01:29:20.155611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.277 [2024-09-28 01:29:20.155647] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:16:24.277 [2024-09-28 01:29:20.155658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.277 [2024-09-28 01:29:20.155666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:16:24.277 [2024-09-28 01:29:20.155675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:16:24.277 [2024-09-28 01:29:20.155685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.277 [2024-09-28 01:29:20.182238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.277 [2024-09-28 01:29:20.182297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:24.277 [2024-09-28 01:29:20.182311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.528 ms 00:16:24.277 [2024-09-28 01:29:20.182321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.277 [2024-09-28 01:29:20.182464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.277 [2024-09-28 01:29:20.182476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:24.277 [2024-09-28 01:29:20.182491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:16:24.277 [2024-09-28 01:29:20.182500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
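As a sanity check on the layout dump above: the 90.00 MiB l2p region follows directly from the logged geometry, since each of the 23592960 L2P entries occupies 4 bytes ("L2P address size: 4"). A one-line check, not part of the test itself:

    echo $(( 23592960 * 4 / 1024 / 1024 ))   # prints 90, matching "Region l2p ... blocks: 90.00 MiB"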
00:16:24.277 [2024-09-28 01:29:20.183612] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:24.277 [2024-09-28 01:29:20.187116] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 327.846 ms, result 0 00:16:24.277 [2024-09-28 01:29:20.188283] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:24.277 [2024-09-28 01:29:20.202054] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:31.892  Copying: 15/256 [MB] (15 MBps) Copying: 38/256 [MB] (23 MBps) Copying: 89/256 [MB] (50 MBps) Copying: 128/256 [MB] (38 MBps) Copying: 144/256 [MB] (16 MBps) Copying: 187/256 [MB] (43 MBps) Copying: 231/256 [MB] (43 MBps) Copying: 256/256 [MB] (average 33 MBps)[2024-09-28 01:29:27.755111] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:31.892 [2024-09-28 01:29:27.762556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.892 [2024-09-28 01:29:27.762708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:31.892 [2024-09-28 01:29:27.762723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:31.892 [2024-09-28 01:29:27.762730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.892 [2024-09-28 01:29:27.762751] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:31.892 [2024-09-28 01:29:27.764856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.892 [2024-09-28 01:29:27.764880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:31.892 [2024-09-28 01:29:27.764889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.094 ms 00:16:31.892 [2024-09-28 01:29:27.764895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.892 [2024-09-28 01:29:27.766357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.892 [2024-09-28 01:29:27.766381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:31.892 [2024-09-28 01:29:27.766392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.444 ms 00:16:31.892 [2024-09-28 01:29:27.766399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.892 [2024-09-28 01:29:27.771969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.892 [2024-09-28 01:29:27.771994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:31.892 [2024-09-28 01:29:27.772002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.556 ms 00:16:31.892 [2024-09-28 01:29:27.772008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.892 [2024-09-28 01:29:27.777413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.892 [2024-09-28 01:29:27.777517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:16:31.892 [2024-09-28 01:29:27.777529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.380 ms 00:16:31.892 [2024-09-28 01:29:27.777539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.892 [2024-09-28 01:29:27.795379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.892 
[2024-09-28 01:29:27.795409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:31.892 [2024-09-28 01:29:27.795418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.779 ms 00:16:31.892 [2024-09-28 01:29:27.795424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.892 [2024-09-28 01:29:27.806911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.892 [2024-09-28 01:29:27.806936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:31.892 [2024-09-28 01:29:27.806945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.458 ms 00:16:31.892 [2024-09-28 01:29:27.806952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.892 [2024-09-28 01:29:27.807047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.892 [2024-09-28 01:29:27.807054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:31.892 [2024-09-28 01:29:27.807061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:16:31.892 [2024-09-28 01:29:27.807066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.152 [2024-09-28 01:29:27.824729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.152 [2024-09-28 01:29:27.824868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:16:32.152 [2024-09-28 01:29:27.824881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.651 ms 00:16:32.152 [2024-09-28 01:29:27.824887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.152 [2024-09-28 01:29:27.842589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.152 [2024-09-28 01:29:27.842613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:16:32.152 [2024-09-28 01:29:27.842621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.669 ms 00:16:32.152 [2024-09-28 01:29:27.842626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.152 [2024-09-28 01:29:27.859977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.152 [2024-09-28 01:29:27.860074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:32.152 [2024-09-28 01:29:27.860086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.324 ms 00:16:32.152 [2024-09-28 01:29:27.860092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.152 [2024-09-28 01:29:27.877092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.152 [2024-09-28 01:29:27.877117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:32.152 [2024-09-28 01:29:27.877125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.955 ms 00:16:32.152 [2024-09-28 01:29:27.877130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.152 [2024-09-28 01:29:27.877157] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:32.152 [2024-09-28 01:29:27.877168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:32.152 [2024-09-28 01:29:27.877176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:32.152 [2024-09-28 01:29:27.877182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:32.152 [2024-09-28 01:29:27.877188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:32.152 [2024-09-28 01:29:27.877206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:32.152 [2024-09-28 01:29:27.877212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:32.152 [2024-09-28 01:29:27.877218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:32.152 [2024-09-28 01:29:27.877224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:32.152 [2024-09-28 01:29:27.877230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:32.152 [2024-09-28 01:29:27.877236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:32.152 [2024-09-28 01:29:27.877242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:32.152 [2024-09-28 01:29:27.877248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:32.152 [2024-09-28 01:29:27.877254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:32.152 [2024-09-28 01:29:27.877259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:32.152 [2024-09-28 01:29:27.877265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:32.152 [2024-09-28 01:29:27.877271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:32.152 [2024-09-28 01:29:27.877277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:32.152 [2024-09-28 01:29:27.877283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:32.152 [2024-09-28 01:29:27.877289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:32.152 [2024-09-28 01:29:27.877295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:32.152 [2024-09-28 01:29:27.877300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:32.152 [2024-09-28 01:29:27.877306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:32.152 [2024-09-28 01:29:27.877311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:32.152 [2024-09-28 01:29:27.877317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:32.152 [2024-09-28 01:29:27.877322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:32.152 [2024-09-28 01:29:27.877328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:32.152 [2024-09-28 01:29:27.877336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:32.152 [2024-09-28 01:29:27.877341] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:32.152 [2024-09-28 01:29:27.877347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:32.152 [2024-09-28 01:29:27.877353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:32.152 [2024-09-28 01:29:27.877359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:32.152 [2024-09-28 01:29:27.877364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:32.152 [2024-09-28 01:29:27.877370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:32.152 [2024-09-28 01:29:27.877377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:32.152 [2024-09-28 01:29:27.877383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:32.152 [2024-09-28 01:29:27.877388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:32.152 [2024-09-28 01:29:27.877399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:32.152 [2024-09-28 01:29:27.877404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:32.152 [2024-09-28 01:29:27.877410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:32.152 [2024-09-28 01:29:27.877416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:32.152 [2024-09-28 01:29:27.877422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:32.153 [2024-09-28 01:29:27.877427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:32.153 [2024-09-28 01:29:27.877433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:32.153 [2024-09-28 01:29:27.877438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:32.153 [2024-09-28 01:29:27.877444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:32.153 [2024-09-28 01:29:27.877450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:32.153 [2024-09-28 01:29:27.877455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:32.153 [2024-09-28 01:29:27.877461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:32.153 [2024-09-28 01:29:27.877466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:32.153 [2024-09-28 01:29:27.877472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:32.153 [2024-09-28 01:29:27.877477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:32.153 [2024-09-28 01:29:27.877484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:32.153 [2024-09-28 
01:29:27.877489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:32.153 [2024-09-28 01:29:27.877494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:32.153 [2024-09-28 01:29:27.877500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:32.153 [2024-09-28 01:29:27.877506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:32.153 [2024-09-28 01:29:27.877511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:32.153 [2024-09-28 01:29:27.877517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:32.153 [2024-09-28 01:29:27.877523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:32.153 [2024-09-28 01:29:27.877529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:32.153 [2024-09-28 01:29:27.877534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:32.153 [2024-09-28 01:29:27.877541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:32.153 [2024-09-28 01:29:27.877546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:32.153 [2024-09-28 01:29:27.877552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:32.153 [2024-09-28 01:29:27.877558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:32.153 [2024-09-28 01:29:27.877563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:32.153 [2024-09-28 01:29:27.877569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:32.153 [2024-09-28 01:29:27.877574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:32.153 [2024-09-28 01:29:27.877579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:32.153 [2024-09-28 01:29:27.877585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:32.153 [2024-09-28 01:29:27.877590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:32.153 [2024-09-28 01:29:27.877595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:32.153 [2024-09-28 01:29:27.877601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:32.153 [2024-09-28 01:29:27.877606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:32.153 [2024-09-28 01:29:27.877612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:32.153 [2024-09-28 01:29:27.877618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:32.153 [2024-09-28 01:29:27.877624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 
00:16:32.153 [2024-09-28 01:29:27.877629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:32.153 [2024-09-28 01:29:27.877635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:32.153 [2024-09-28 01:29:27.877640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:32.153 [2024-09-28 01:29:27.877647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:32.153 [2024-09-28 01:29:27.877652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:32.153 [2024-09-28 01:29:27.877657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:32.153 [2024-09-28 01:29:27.877663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:32.153 [2024-09-28 01:29:27.877669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:32.153 [2024-09-28 01:29:27.877674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:32.153 [2024-09-28 01:29:27.877679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:32.153 [2024-09-28 01:29:27.877685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:32.153 [2024-09-28 01:29:27.877690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:32.153 [2024-09-28 01:29:27.877699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:32.153 [2024-09-28 01:29:27.877704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:32.153 [2024-09-28 01:29:27.877710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:32.153 [2024-09-28 01:29:27.877715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:32.153 [2024-09-28 01:29:27.877721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:32.153 [2024-09-28 01:29:27.877726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:32.153 [2024-09-28 01:29:27.877732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:32.153 [2024-09-28 01:29:27.877737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:32.153 [2024-09-28 01:29:27.877743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:32.153 [2024-09-28 01:29:27.877748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:32.153 [2024-09-28 01:29:27.877754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:32.153 [2024-09-28 01:29:27.877772] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:32.153 [2024-09-28 01:29:27.877778] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d94572d0-4efc-4bf8-b84f-0a6faae4c084 00:16:32.153 
[2024-09-28 01:29:27.877784] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:32.153 [2024-09-28 01:29:27.877789] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:32.153 [2024-09-28 01:29:27.877795] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:32.153 [2024-09-28 01:29:27.877801] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:32.153 [2024-09-28 01:29:27.877809] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:32.153 [2024-09-28 01:29:27.877815] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:32.153 [2024-09-28 01:29:27.877820] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:32.153 [2024-09-28 01:29:27.877825] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:32.153 [2024-09-28 01:29:27.877830] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:32.153 [2024-09-28 01:29:27.877835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.153 [2024-09-28 01:29:27.877841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:32.153 [2024-09-28 01:29:27.877847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.679 ms 00:16:32.153 [2024-09-28 01:29:27.877852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.153 [2024-09-28 01:29:27.887311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.153 [2024-09-28 01:29:27.887336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:32.153 [2024-09-28 01:29:27.887347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.432 ms 00:16:32.153 [2024-09-28 01:29:27.887352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.153 [2024-09-28 01:29:27.887627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.153 [2024-09-28 01:29:27.887639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:32.153 [2024-09-28 01:29:27.887646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.251 ms 00:16:32.153 [2024-09-28 01:29:27.887652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.153 [2024-09-28 01:29:27.911167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:32.153 [2024-09-28 01:29:27.911293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:32.153 [2024-09-28 01:29:27.911306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:32.153 [2024-09-28 01:29:27.911312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.153 [2024-09-28 01:29:27.911373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:32.154 [2024-09-28 01:29:27.911380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:32.154 [2024-09-28 01:29:27.911387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:32.154 [2024-09-28 01:29:27.911393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.154 [2024-09-28 01:29:27.911431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:32.154 [2024-09-28 01:29:27.911438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:32.154 [2024-09-28 01:29:27.911448] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:32.154 [2024-09-28 01:29:27.911454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.154 [2024-09-28 01:29:27.911467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:32.154 [2024-09-28 01:29:27.911473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:32.154 [2024-09-28 01:29:27.911479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:32.154 [2024-09-28 01:29:27.911485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.154 [2024-09-28 01:29:27.971249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:32.154 [2024-09-28 01:29:27.971290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:32.154 [2024-09-28 01:29:27.971303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:32.154 [2024-09-28 01:29:27.971310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.154 [2024-09-28 01:29:28.019159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:32.154 [2024-09-28 01:29:28.019327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:32.154 [2024-09-28 01:29:28.019339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:32.154 [2024-09-28 01:29:28.019346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.154 [2024-09-28 01:29:28.019411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:32.154 [2024-09-28 01:29:28.019418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:32.154 [2024-09-28 01:29:28.019424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:32.154 [2024-09-28 01:29:28.019433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.154 [2024-09-28 01:29:28.019456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:32.154 [2024-09-28 01:29:28.019463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:32.154 [2024-09-28 01:29:28.019469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:32.154 [2024-09-28 01:29:28.019475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.154 [2024-09-28 01:29:28.019548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:32.154 [2024-09-28 01:29:28.019556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:32.154 [2024-09-28 01:29:28.019562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:32.154 [2024-09-28 01:29:28.019567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.154 [2024-09-28 01:29:28.019597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:32.154 [2024-09-28 01:29:28.019604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:32.154 [2024-09-28 01:29:28.019610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:32.154 [2024-09-28 01:29:28.019616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.154 [2024-09-28 01:29:28.019645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:32.154 [2024-09-28 01:29:28.019651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open 
cache bdev 00:16:32.154 [2024-09-28 01:29:28.019657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:32.154 [2024-09-28 01:29:28.019663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.154 [2024-09-28 01:29:28.019698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:32.154 [2024-09-28 01:29:28.019705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:32.154 [2024-09-28 01:29:28.019711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:32.154 [2024-09-28 01:29:28.019717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.154 [2024-09-28 01:29:28.019826] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 257.265 ms, result 0 00:16:33.090 00:16:33.090 00:16:33.090 01:29:28 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=74208 00:16:33.090 01:29:28 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 74208 00:16:33.090 01:29:28 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:16:33.090 01:29:28 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 74208 ']' 00:16:33.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:33.091 01:29:28 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:33.091 01:29:28 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:33.091 01:29:28 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:33.091 01:29:28 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:33.091 01:29:28 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:16:33.091 [2024-09-28 01:29:29.020722] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
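The xtrace above is the ftl_trim harness restarting the SPDK target for the trim test: trim.sh launches spdk_tgt with the ftl_init debug log component, records its pid, and blocks in waitforlisten until the RPC socket /var/tmp/spdk.sock answers, after which the saved bdev configuration is replayed over RPC. A minimal shell sketch of that driver sequence, assuming the repo layout shown in the paths above, that waitforlisten is the helper sourced from autotest_common.sh, and that load_config takes the saved JSON on stdin (trim.sh's exact plumbing is not visible in this log):

  build/bin/spdk_tgt -L ftl_init &        # target with FTL init tracing, as at trim.sh@71
  svcpid=$!                               # trim.sh@72
  waitforlisten "$svcpid"                 # trim.sh@73: poll until /var/tmp/spdk.sock accepts RPCs
  scripts/rpc.py load_config < test/ftl/config/ftl.json             # assumed redirection; recreates ftl0 (trim.sh@75)
  scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024   # the trim under test (trim.sh@78, further below)

The FTL startup records that follow are that load_config call bringing the ftl0 bdev back online.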
00:16:33.091 [2024-09-28 01:29:29.020852] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74208 ] 00:16:33.349 [2024-09-28 01:29:29.167466] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:33.607 [2024-09-28 01:29:29.321307] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:34.174 01:29:29 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:34.174 01:29:29 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:16:34.174 01:29:29 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:16:34.174 [2024-09-28 01:29:30.043780] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:34.174 [2024-09-28 01:29:30.043837] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:34.433 [2024-09-28 01:29:30.212241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.433 [2024-09-28 01:29:30.212287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:34.433 [2024-09-28 01:29:30.212299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:34.433 [2024-09-28 01:29:30.212308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.433 [2024-09-28 01:29:30.214370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.433 [2024-09-28 01:29:30.214398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:34.433 [2024-09-28 01:29:30.214410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.046 ms 00:16:34.433 [2024-09-28 01:29:30.214416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.433 [2024-09-28 01:29:30.214473] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:34.433 [2024-09-28 01:29:30.215277] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:34.433 [2024-09-28 01:29:30.215308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.433 [2024-09-28 01:29:30.215316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:34.433 [2024-09-28 01:29:30.215325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.842 ms 00:16:34.433 [2024-09-28 01:29:30.215331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.433 [2024-09-28 01:29:30.216409] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:16:34.433 [2024-09-28 01:29:30.226058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.433 [2024-09-28 01:29:30.226090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:16:34.433 [2024-09-28 01:29:30.226099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.652 ms 00:16:34.433 [2024-09-28 01:29:30.226107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.433 [2024-09-28 01:29:30.226180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.433 [2024-09-28 01:29:30.226202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:16:34.433 [2024-09-28 01:29:30.226210] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:16:34.433 [2024-09-28 01:29:30.226217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.433 [2024-09-28 01:29:30.230529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.433 [2024-09-28 01:29:30.230558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:34.433 [2024-09-28 01:29:30.230566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.266 ms 00:16:34.433 [2024-09-28 01:29:30.230573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.433 [2024-09-28 01:29:30.230652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.433 [2024-09-28 01:29:30.230661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:34.433 [2024-09-28 01:29:30.230667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:16:34.433 [2024-09-28 01:29:30.230674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.433 [2024-09-28 01:29:30.230695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.433 [2024-09-28 01:29:30.230703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:34.433 [2024-09-28 01:29:30.230709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:16:34.433 [2024-09-28 01:29:30.230716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.433 [2024-09-28 01:29:30.230735] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:34.433 [2024-09-28 01:29:30.233551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.433 [2024-09-28 01:29:30.233662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:34.433 [2024-09-28 01:29:30.233677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.819 ms 00:16:34.433 [2024-09-28 01:29:30.233685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.434 [2024-09-28 01:29:30.233716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.434 [2024-09-28 01:29:30.233722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:34.434 [2024-09-28 01:29:30.233729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:16:34.434 [2024-09-28 01:29:30.233735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.434 [2024-09-28 01:29:30.233752] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:16:34.434 [2024-09-28 01:29:30.233766] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:16:34.434 [2024-09-28 01:29:30.233797] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:16:34.434 [2024-09-28 01:29:30.233810] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:16:34.434 [2024-09-28 01:29:30.233892] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:34.434 [2024-09-28 01:29:30.233900] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:34.434 [2024-09-28 01:29:30.233910] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:16:34.434 [2024-09-28 01:29:30.233918] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:34.434 [2024-09-28 01:29:30.233926] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:34.434 [2024-09-28 01:29:30.233932] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:34.434 [2024-09-28 01:29:30.233939] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:34.434 [2024-09-28 01:29:30.233945] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:34.434 [2024-09-28 01:29:30.233953] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:34.434 [2024-09-28 01:29:30.233960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.434 [2024-09-28 01:29:30.233967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:34.434 [2024-09-28 01:29:30.233973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.211 ms 00:16:34.434 [2024-09-28 01:29:30.233979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.434 [2024-09-28 01:29:30.234046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.434 [2024-09-28 01:29:30.234053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:34.434 [2024-09-28 01:29:30.234059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:16:34.434 [2024-09-28 01:29:30.234065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.434 [2024-09-28 01:29:30.234141] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:34.434 [2024-09-28 01:29:30.234151] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:34.434 [2024-09-28 01:29:30.234157] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:34.434 [2024-09-28 01:29:30.234164] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:34.434 [2024-09-28 01:29:30.234170] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:34.434 [2024-09-28 01:29:30.234176] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:34.434 [2024-09-28 01:29:30.234182] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:34.434 [2024-09-28 01:29:30.234190] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:34.434 [2024-09-28 01:29:30.234208] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:34.434 [2024-09-28 01:29:30.234215] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:34.434 [2024-09-28 01:29:30.234220] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:34.434 [2024-09-28 01:29:30.234226] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:34.434 [2024-09-28 01:29:30.234231] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:34.434 [2024-09-28 01:29:30.234247] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:34.434 [2024-09-28 01:29:30.234253] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:16:34.434 [2024-09-28 01:29:30.234260] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:34.434 
[2024-09-28 01:29:30.234265] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:34.434 [2024-09-28 01:29:30.234272] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:16:34.434 [2024-09-28 01:29:30.234282] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:34.434 [2024-09-28 01:29:30.234288] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:34.434 [2024-09-28 01:29:30.234294] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:34.434 [2024-09-28 01:29:30.234300] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:34.434 [2024-09-28 01:29:30.234305] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:34.434 [2024-09-28 01:29:30.234313] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:34.434 [2024-09-28 01:29:30.234318] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:34.434 [2024-09-28 01:29:30.234324] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:34.434 [2024-09-28 01:29:30.234329] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:34.434 [2024-09-28 01:29:30.234335] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:34.434 [2024-09-28 01:29:30.234340] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:34.434 [2024-09-28 01:29:30.234346] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:16:34.434 [2024-09-28 01:29:30.234351] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:34.434 [2024-09-28 01:29:30.234358] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:34.434 [2024-09-28 01:29:30.234363] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:16:34.434 [2024-09-28 01:29:30.234369] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:34.434 [2024-09-28 01:29:30.234375] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:34.434 [2024-09-28 01:29:30.234381] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:16:34.434 [2024-09-28 01:29:30.234385] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:34.434 [2024-09-28 01:29:30.234391] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:34.434 [2024-09-28 01:29:30.234396] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:16:34.434 [2024-09-28 01:29:30.234404] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:34.434 [2024-09-28 01:29:30.234409] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:34.434 [2024-09-28 01:29:30.234415] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:16:34.434 [2024-09-28 01:29:30.234420] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:34.434 [2024-09-28 01:29:30.234426] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:34.434 [2024-09-28 01:29:30.234432] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:34.434 [2024-09-28 01:29:30.234439] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:34.434 [2024-09-28 01:29:30.234444] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:34.434 [2024-09-28 01:29:30.234451] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:16:34.434 [2024-09-28 01:29:30.234456] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:34.434 [2024-09-28 01:29:30.234463] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:34.434 [2024-09-28 01:29:30.234468] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:34.434 [2024-09-28 01:29:30.234474] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:34.434 [2024-09-28 01:29:30.234480] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:34.434 [2024-09-28 01:29:30.234487] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:34.434 [2024-09-28 01:29:30.234494] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:34.434 [2024-09-28 01:29:30.234503] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:34.434 [2024-09-28 01:29:30.234508] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:16:34.434 [2024-09-28 01:29:30.234516] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:16:34.434 [2024-09-28 01:29:30.234522] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:16:34.434 [2024-09-28 01:29:30.234528] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:16:34.434 [2024-09-28 01:29:30.234534] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:16:34.434 [2024-09-28 01:29:30.234540] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:16:34.434 [2024-09-28 01:29:30.234545] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:16:34.434 [2024-09-28 01:29:30.234552] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:16:34.434 [2024-09-28 01:29:30.234558] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:16:34.434 [2024-09-28 01:29:30.234564] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:16:34.434 [2024-09-28 01:29:30.234569] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:16:34.434 [2024-09-28 01:29:30.234576] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:16:34.434 [2024-09-28 01:29:30.234581] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:16:34.434 [2024-09-28 01:29:30.234587] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:34.434 [2024-09-28 
01:29:30.234594] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:34.434 [2024-09-28 01:29:30.234603] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:34.434 [2024-09-28 01:29:30.234609] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:34.435 [2024-09-28 01:29:30.234615] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:34.435 [2024-09-28 01:29:30.234621] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:34.435 [2024-09-28 01:29:30.234628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.435 [2024-09-28 01:29:30.234633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:34.435 [2024-09-28 01:29:30.234640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.537 ms 00:16:34.435 [2024-09-28 01:29:30.234645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.435 [2024-09-28 01:29:30.255443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.435 [2024-09-28 01:29:30.255472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:34.435 [2024-09-28 01:29:30.255482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.740 ms 00:16:34.435 [2024-09-28 01:29:30.255488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.435 [2024-09-28 01:29:30.255586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.435 [2024-09-28 01:29:30.255593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:34.435 [2024-09-28 01:29:30.255600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:16:34.435 [2024-09-28 01:29:30.255606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.435 [2024-09-28 01:29:30.288184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.435 [2024-09-28 01:29:30.288224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:34.435 [2024-09-28 01:29:30.288236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.558 ms 00:16:34.435 [2024-09-28 01:29:30.288242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.435 [2024-09-28 01:29:30.288306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.435 [2024-09-28 01:29:30.288315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:34.435 [2024-09-28 01:29:30.288323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:34.435 [2024-09-28 01:29:30.288331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.435 [2024-09-28 01:29:30.288615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.435 [2024-09-28 01:29:30.288626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:34.435 [2024-09-28 01:29:30.288635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.266 ms 00:16:34.435 [2024-09-28 01:29:30.288640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:16:34.435 [2024-09-28 01:29:30.288741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.435 [2024-09-28 01:29:30.288748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:34.435 [2024-09-28 01:29:30.288756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:16:34.435 [2024-09-28 01:29:30.288762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.435 [2024-09-28 01:29:30.302595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.435 [2024-09-28 01:29:30.302630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:34.435 [2024-09-28 01:29:30.302646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.811 ms 00:16:34.435 [2024-09-28 01:29:30.302659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.435 [2024-09-28 01:29:30.313153] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:16:34.435 [2024-09-28 01:29:30.313181] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:16:34.435 [2024-09-28 01:29:30.313202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.435 [2024-09-28 01:29:30.313209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:16:34.435 [2024-09-28 01:29:30.313218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.415 ms 00:16:34.435 [2024-09-28 01:29:30.313223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.435 [2024-09-28 01:29:30.332302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.435 [2024-09-28 01:29:30.332331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:16:34.435 [2024-09-28 01:29:30.332341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.019 ms 00:16:34.435 [2024-09-28 01:29:30.332352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.435 [2024-09-28 01:29:30.341214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.435 [2024-09-28 01:29:30.341332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:16:34.435 [2024-09-28 01:29:30.341350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.803 ms 00:16:34.435 [2024-09-28 01:29:30.341356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.435 [2024-09-28 01:29:30.350012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.435 [2024-09-28 01:29:30.350039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:16:34.435 [2024-09-28 01:29:30.350048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.616 ms 00:16:34.435 [2024-09-28 01:29:30.350053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.435 [2024-09-28 01:29:30.350534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.435 [2024-09-28 01:29:30.350547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:34.435 [2024-09-28 01:29:30.350556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.416 ms 00:16:34.435 [2024-09-28 01:29:30.350564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.694 [2024-09-28 
01:29:30.394491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.694 [2024-09-28 01:29:30.394528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:16:34.694 [2024-09-28 01:29:30.394540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.908 ms 00:16:34.694 [2024-09-28 01:29:30.394548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.694 [2024-09-28 01:29:30.402467] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:34.694 [2024-09-28 01:29:30.414405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.694 [2024-09-28 01:29:30.414444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:34.694 [2024-09-28 01:29:30.414454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.777 ms 00:16:34.694 [2024-09-28 01:29:30.414462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.694 [2024-09-28 01:29:30.414548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.694 [2024-09-28 01:29:30.414557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:16:34.694 [2024-09-28 01:29:30.414563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:16:34.694 [2024-09-28 01:29:30.414572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.694 [2024-09-28 01:29:30.414612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.694 [2024-09-28 01:29:30.414619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:34.694 [2024-09-28 01:29:30.414626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:16:34.694 [2024-09-28 01:29:30.414635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.694 [2024-09-28 01:29:30.414654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.694 [2024-09-28 01:29:30.414664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:34.694 [2024-09-28 01:29:30.414670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:34.694 [2024-09-28 01:29:30.414677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.694 [2024-09-28 01:29:30.414703] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:16:34.694 [2024-09-28 01:29:30.414713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.694 [2024-09-28 01:29:30.414719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:16:34.694 [2024-09-28 01:29:30.414726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:16:34.694 [2024-09-28 01:29:30.414731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.694 [2024-09-28 01:29:30.433102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.694 [2024-09-28 01:29:30.433235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:34.694 [2024-09-28 01:29:30.433253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.349 ms 00:16:34.694 [2024-09-28 01:29:30.433261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.694 [2024-09-28 01:29:30.433337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.694 [2024-09-28 01:29:30.433346] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:34.694 [2024-09-28 01:29:30.433354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:16:34.694 [2024-09-28 01:29:30.433360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.694 [2024-09-28 01:29:30.434014] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:34.694 [2024-09-28 01:29:30.436351] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 221.537 ms, result 0 00:16:34.694 [2024-09-28 01:29:30.437501] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:34.694 Some configs were skipped because the RPC state that can call them passed over. 00:16:34.694 01:29:30 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:16:34.952 [2024-09-28 01:29:30.661910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.952 [2024-09-28 01:29:30.662040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:16:34.952 [2024-09-28 01:29:30.662317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.532 ms 00:16:34.952 [2024-09-28 01:29:30.662391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.952 [2024-09-28 01:29:30.662453] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.078 ms, result 0 00:16:34.952 true 00:16:34.952 01:29:30 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:16:34.952 [2024-09-28 01:29:30.821759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.952 [2024-09-28 01:29:30.821912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:16:34.952 [2024-09-28 01:29:30.821960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.257 ms 00:16:34.952 [2024-09-28 01:29:30.821979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.952 [2024-09-28 01:29:30.822023] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.527 ms, result 0 00:16:34.952 true 00:16:34.952 01:29:30 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 74208 00:16:34.952 01:29:30 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 74208 ']' 00:16:34.952 01:29:30 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 74208 00:16:34.952 01:29:30 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname 00:16:34.952 01:29:30 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:34.952 01:29:30 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74208 00:16:34.952 killing process with pid 74208 00:16:34.952 01:29:30 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:34.952 01:29:30 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:34.952 01:29:30 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74208' 00:16:34.952 01:29:30 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 74208 00:16:34.952 01:29:30 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 74208 00:16:35.519 [2024-09-28 01:29:31.403476] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.519 [2024-09-28 01:29:31.403518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:35.519 [2024-09-28 01:29:31.403528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:35.519 [2024-09-28 01:29:31.403537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.519 [2024-09-28 01:29:31.403554] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:35.519 [2024-09-28 01:29:31.405689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.519 [2024-09-28 01:29:31.405713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:35.519 [2024-09-28 01:29:31.405724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.121 ms 00:16:35.519 [2024-09-28 01:29:31.405731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.519 [2024-09-28 01:29:31.405957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.519 [2024-09-28 01:29:31.405966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:35.519 [2024-09-28 01:29:31.405973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.205 ms 00:16:35.519 [2024-09-28 01:29:31.405980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.519 [2024-09-28 01:29:31.410699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.519 [2024-09-28 01:29:31.410725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:35.519 [2024-09-28 01:29:31.410734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.702 ms 00:16:35.519 [2024-09-28 01:29:31.410740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.519 [2024-09-28 01:29:31.416102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.519 [2024-09-28 01:29:31.416123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:16:35.519 [2024-09-28 01:29:31.416136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.331 ms 00:16:35.519 [2024-09-28 01:29:31.416142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.519 [2024-09-28 01:29:31.423626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.519 [2024-09-28 01:29:31.423650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:35.519 [2024-09-28 01:29:31.423661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.442 ms 00:16:35.519 [2024-09-28 01:29:31.423667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.519 [2024-09-28 01:29:31.429807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.519 [2024-09-28 01:29:31.429917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:35.519 [2024-09-28 01:29:31.429933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.108 ms 00:16:35.519 [2024-09-28 01:29:31.429945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.519 [2024-09-28 01:29:31.430057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.519 [2024-09-28 01:29:31.430066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:35.519 [2024-09-28 01:29:31.430075] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:16:35.519 [2024-09-28 01:29:31.430083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.519 [2024-09-28 01:29:31.437957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.519 [2024-09-28 01:29:31.437981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:16:35.519 [2024-09-28 01:29:31.437989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.856 ms 00:16:35.519 [2024-09-28 01:29:31.437995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.519 [2024-09-28 01:29:31.445608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.519 [2024-09-28 01:29:31.445631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:16:35.519 [2024-09-28 01:29:31.445643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.582 ms 00:16:35.519 [2024-09-28 01:29:31.445649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.779 [2024-09-28 01:29:31.452613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.779 [2024-09-28 01:29:31.452707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:35.779 [2024-09-28 01:29:31.452722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.934 ms 00:16:35.779 [2024-09-28 01:29:31.452728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.779 [2024-09-28 01:29:31.459999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.779 [2024-09-28 01:29:31.460092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:35.779 [2024-09-28 01:29:31.460107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.220 ms 00:16:35.779 [2024-09-28 01:29:31.460112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.779 [2024-09-28 01:29:31.460146] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:35.779 [2024-09-28 01:29:31.460158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:35.779 [2024-09-28 01:29:31.460166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:35.779 [2024-09-28 01:29:31.460172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:35.779 [2024-09-28 01:29:31.460179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:35.779 [2024-09-28 01:29:31.460185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:35.779 [2024-09-28 01:29:31.460207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:35.779 [2024-09-28 01:29:31.460214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:35.779 [2024-09-28 01:29:31.460221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:35.779 [2024-09-28 01:29:31.460227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:35.779 [2024-09-28 01:29:31.460234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:35.779 [2024-09-28 01:29:31.460240] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:35.779 [2024-09-28 01:29:31.460246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:35.779 [2024-09-28 01:29:31.460252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:35.779 [2024-09-28 01:29:31.460261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:35.779 [2024-09-28 01:29:31.460276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:35.779 [2024-09-28 01:29:31.460283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:35.779 [2024-09-28 01:29:31.460289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:35.779 [2024-09-28 01:29:31.460296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:35.779 [2024-09-28 01:29:31.460302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:35.779 [2024-09-28 01:29:31.460309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:35.779 [2024-09-28 01:29:31.460314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:35.779 [2024-09-28 01:29:31.460322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:35.779 [2024-09-28 01:29:31.460328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:35.779 [2024-09-28 01:29:31.460335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:35.779 [2024-09-28 01:29:31.460341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:35.779 [2024-09-28 01:29:31.460348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:35.779 [2024-09-28 01:29:31.460353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:35.779 [2024-09-28 01:29:31.460361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:35.779 [2024-09-28 01:29:31.460367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:35.779 [2024-09-28 01:29:31.460374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:35.779 [2024-09-28 01:29:31.460379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:35.779 [2024-09-28 01:29:31.460386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:35.779 [2024-09-28 01:29:31.460392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:35.779 [2024-09-28 01:29:31.460400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:35.779 [2024-09-28 01:29:31.460406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:35.779 
[2024-09-28 01:29:31.460413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:35.779 [2024-09-28 01:29:31.460418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:35.779 [2024-09-28 01:29:31.460426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:35.779 [2024-09-28 01:29:31.460432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:35.779 [2024-09-28 01:29:31.460439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:35.779 [2024-09-28 01:29:31.460445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:35.779 [2024-09-28 01:29:31.460452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:35.779 [2024-09-28 01:29:31.460457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:35.779 [2024-09-28 01:29:31.460464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:35.779 [2024-09-28 01:29:31.460469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:35.779 [2024-09-28 01:29:31.460476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:35.779 [2024-09-28 01:29:31.460482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:35.779 [2024-09-28 01:29:31.460488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:35.780 [2024-09-28 01:29:31.460494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:35.780 [2024-09-28 01:29:31.460501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:35.780 [2024-09-28 01:29:31.460506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:35.780 [2024-09-28 01:29:31.460513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:35.780 [2024-09-28 01:29:31.460518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:35.780 [2024-09-28 01:29:31.460526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:35.780 [2024-09-28 01:29:31.460532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:35.780 [2024-09-28 01:29:31.460539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:35.780 [2024-09-28 01:29:31.460544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:35.780 [2024-09-28 01:29:31.460551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:35.780 [2024-09-28 01:29:31.460557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:35.780 [2024-09-28 01:29:31.460563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 
state: free 00:16:35.780 [2024-09-28 01:29:31.460569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:35.780 [2024-09-28 01:29:31.460575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:35.780 [2024-09-28 01:29:31.460581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:35.780 [2024-09-28 01:29:31.460588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:35.780 [2024-09-28 01:29:31.460594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:35.780 [2024-09-28 01:29:31.460601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:35.780 [2024-09-28 01:29:31.460607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:35.780 [2024-09-28 01:29:31.460614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:35.780 [2024-09-28 01:29:31.460620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:35.780 [2024-09-28 01:29:31.460628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:35.780 [2024-09-28 01:29:31.460634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:35.780 [2024-09-28 01:29:31.460640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:35.780 [2024-09-28 01:29:31.460646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:35.780 [2024-09-28 01:29:31.460653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:35.780 [2024-09-28 01:29:31.460658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:35.780 [2024-09-28 01:29:31.460665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:35.780 [2024-09-28 01:29:31.460671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:35.780 [2024-09-28 01:29:31.460678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:35.780 [2024-09-28 01:29:31.460683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:35.780 [2024-09-28 01:29:31.460690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:35.780 [2024-09-28 01:29:31.460696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:35.780 [2024-09-28 01:29:31.460703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:35.780 [2024-09-28 01:29:31.460708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:35.780 [2024-09-28 01:29:31.460715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:35.780 [2024-09-28 01:29:31.460721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 
0 / 261120 wr_cnt: 0 state: free 00:16:35.780 [2024-09-28 01:29:31.460730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:35.780 [2024-09-28 01:29:31.460735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:35.780 [2024-09-28 01:29:31.460741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:35.780 [2024-09-28 01:29:31.460747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:35.780 [2024-09-28 01:29:31.460754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:35.780 [2024-09-28 01:29:31.460759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:35.780 [2024-09-28 01:29:31.460766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:35.780 [2024-09-28 01:29:31.460775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:35.780 [2024-09-28 01:29:31.460783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:35.780 [2024-09-28 01:29:31.460796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:35.780 [2024-09-28 01:29:31.460803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:35.780 [2024-09-28 01:29:31.460809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:35.780 [2024-09-28 01:29:31.460816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:35.780 [2024-09-28 01:29:31.460822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:35.780 [2024-09-28 01:29:31.460829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:35.780 [2024-09-28 01:29:31.460841] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:35.780 [2024-09-28 01:29:31.460850] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d94572d0-4efc-4bf8-b84f-0a6faae4c084 00:16:35.780 [2024-09-28 01:29:31.460856] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:35.780 [2024-09-28 01:29:31.460862] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:35.780 [2024-09-28 01:29:31.460867] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:35.780 [2024-09-28 01:29:31.460875] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:35.780 [2024-09-28 01:29:31.460884] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:35.780 [2024-09-28 01:29:31.460893] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:35.780 [2024-09-28 01:29:31.460898] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:35.780 [2024-09-28 01:29:31.460904] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:35.780 [2024-09-28 01:29:31.460909] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:35.780 [2024-09-28 01:29:31.460915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:16:35.780 [2024-09-28 01:29:31.460921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:35.780 [2024-09-28 01:29:31.460929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.771 ms 00:16:35.780 [2024-09-28 01:29:31.460934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.780 [2024-09-28 01:29:31.470225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.780 [2024-09-28 01:29:31.470319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:35.780 [2024-09-28 01:29:31.470334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.274 ms 00:16:35.780 [2024-09-28 01:29:31.470342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.780 [2024-09-28 01:29:31.470629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.780 [2024-09-28 01:29:31.470642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:35.780 [2024-09-28 01:29:31.470650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.249 ms 00:16:35.780 [2024-09-28 01:29:31.470656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.780 [2024-09-28 01:29:31.501510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:35.780 [2024-09-28 01:29:31.501640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:35.780 [2024-09-28 01:29:31.501657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:35.780 [2024-09-28 01:29:31.501663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.780 [2024-09-28 01:29:31.501754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:35.780 [2024-09-28 01:29:31.501762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:35.780 [2024-09-28 01:29:31.501769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:35.780 [2024-09-28 01:29:31.501775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.780 [2024-09-28 01:29:31.501809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:35.780 [2024-09-28 01:29:31.501816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:35.780 [2024-09-28 01:29:31.501825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:35.780 [2024-09-28 01:29:31.501833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.780 [2024-09-28 01:29:31.501849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:35.780 [2024-09-28 01:29:31.501854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:35.780 [2024-09-28 01:29:31.501861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:35.780 [2024-09-28 01:29:31.501867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.780 [2024-09-28 01:29:31.560517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:35.780 [2024-09-28 01:29:31.560684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:35.780 [2024-09-28 01:29:31.560701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:35.780 [2024-09-28 01:29:31.560710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.780 [2024-09-28 
01:29:31.608187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:35.780 [2024-09-28 01:29:31.608237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:35.781 [2024-09-28 01:29:31.608248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:35.781 [2024-09-28 01:29:31.608254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.781 [2024-09-28 01:29:31.608330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:35.781 [2024-09-28 01:29:31.608338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:35.781 [2024-09-28 01:29:31.608348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:35.781 [2024-09-28 01:29:31.608353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.781 [2024-09-28 01:29:31.608381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:35.781 [2024-09-28 01:29:31.608387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:35.781 [2024-09-28 01:29:31.608394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:35.781 [2024-09-28 01:29:31.608400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.781 [2024-09-28 01:29:31.608470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:35.781 [2024-09-28 01:29:31.608478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:35.781 [2024-09-28 01:29:31.608485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:35.781 [2024-09-28 01:29:31.608491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.781 [2024-09-28 01:29:31.608517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:35.781 [2024-09-28 01:29:31.608525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:35.781 [2024-09-28 01:29:31.608532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:35.781 [2024-09-28 01:29:31.608538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.781 [2024-09-28 01:29:31.608568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:35.781 [2024-09-28 01:29:31.608575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:35.781 [2024-09-28 01:29:31.608584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:35.781 [2024-09-28 01:29:31.608589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.781 [2024-09-28 01:29:31.608627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:35.781 [2024-09-28 01:29:31.608634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:35.781 [2024-09-28 01:29:31.608642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:35.781 [2024-09-28 01:29:31.608647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.781 [2024-09-28 01:29:31.608751] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 205.258 ms, result 0 00:16:36.348 01:29:32 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:16:36.348 01:29:32 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:36.606 [2024-09-28 01:29:32.289783] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:16:36.607 [2024-09-28 01:29:32.289906] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74255 ] 00:16:36.607 [2024-09-28 01:29:32.438001] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:36.865 [2024-09-28 01:29:32.585947] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:36.865 [2024-09-28 01:29:32.794135] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:36.865 [2024-09-28 01:29:32.794183] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:37.125 [2024-09-28 01:29:32.946343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.125 [2024-09-28 01:29:32.946393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:37.125 [2024-09-28 01:29:32.946406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:37.125 [2024-09-28 01:29:32.946413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.125 [2024-09-28 01:29:32.948480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.125 [2024-09-28 01:29:32.948601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:37.125 [2024-09-28 01:29:32.948615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.054 ms 00:16:37.125 [2024-09-28 01:29:32.948624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.125 [2024-09-28 01:29:32.948682] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:37.125 [2024-09-28 01:29:32.949213] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:37.125 [2024-09-28 01:29:32.949230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.125 [2024-09-28 01:29:32.949239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:37.125 [2024-09-28 01:29:32.949247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.554 ms 00:16:37.125 [2024-09-28 01:29:32.949252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.125 [2024-09-28 01:29:32.950226] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:16:37.125 [2024-09-28 01:29:32.959911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.125 [2024-09-28 01:29:32.960055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:16:37.125 [2024-09-28 01:29:32.960069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.686 ms 00:16:37.125 [2024-09-28 01:29:32.960076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.125 [2024-09-28 01:29:32.960149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.125 [2024-09-28 01:29:32.960158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:16:37.125 [2024-09-28 01:29:32.960167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.018 ms 00:16:37.125 [2024-09-28 01:29:32.960173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.125 [2024-09-28 01:29:32.964504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.125 [2024-09-28 01:29:32.964528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:37.125 [2024-09-28 01:29:32.964536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.283 ms 00:16:37.125 [2024-09-28 01:29:32.964542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.125 [2024-09-28 01:29:32.964615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.125 [2024-09-28 01:29:32.964624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:37.125 [2024-09-28 01:29:32.964631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:16:37.125 [2024-09-28 01:29:32.964637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.125 [2024-09-28 01:29:32.964655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.125 [2024-09-28 01:29:32.964662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:37.125 [2024-09-28 01:29:32.964669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:37.125 [2024-09-28 01:29:32.964674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.125 [2024-09-28 01:29:32.964693] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:37.125 [2024-09-28 01:29:32.967385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.125 [2024-09-28 01:29:32.967408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:37.125 [2024-09-28 01:29:32.967415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.695 ms 00:16:37.125 [2024-09-28 01:29:32.967421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.125 [2024-09-28 01:29:32.967451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.125 [2024-09-28 01:29:32.967460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:37.125 [2024-09-28 01:29:32.967467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:16:37.125 [2024-09-28 01:29:32.967472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.125 [2024-09-28 01:29:32.967486] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:16:37.125 [2024-09-28 01:29:32.967500] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:16:37.125 [2024-09-28 01:29:32.967528] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:16:37.125 [2024-09-28 01:29:32.967540] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:16:37.125 [2024-09-28 01:29:32.967621] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:37.125 [2024-09-28 01:29:32.967629] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:37.125 [2024-09-28 01:29:32.967637] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:16:37.125 [2024-09-28 01:29:32.967645] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:37.125 [2024-09-28 01:29:32.967652] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:37.125 [2024-09-28 01:29:32.967658] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:37.125 [2024-09-28 01:29:32.967664] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:37.125 [2024-09-28 01:29:32.967670] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:37.125 [2024-09-28 01:29:32.967675] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:37.125 [2024-09-28 01:29:32.967681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.125 [2024-09-28 01:29:32.967689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:37.125 [2024-09-28 01:29:32.967695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.197 ms 00:16:37.125 [2024-09-28 01:29:32.967701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.125 [2024-09-28 01:29:32.967767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.125 [2024-09-28 01:29:32.967773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:37.125 [2024-09-28 01:29:32.967780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:16:37.125 [2024-09-28 01:29:32.967786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.125 [2024-09-28 01:29:32.967859] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:37.125 [2024-09-28 01:29:32.967866] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:37.125 [2024-09-28 01:29:32.967874] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:37.125 [2024-09-28 01:29:32.967880] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:37.125 [2024-09-28 01:29:32.967886] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:37.125 [2024-09-28 01:29:32.967891] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:37.125 [2024-09-28 01:29:32.967896] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:37.125 [2024-09-28 01:29:32.967901] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:37.125 [2024-09-28 01:29:32.967907] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:37.125 [2024-09-28 01:29:32.967912] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:37.125 [2024-09-28 01:29:32.967917] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:37.125 [2024-09-28 01:29:32.967928] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:37.125 [2024-09-28 01:29:32.967933] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:37.125 [2024-09-28 01:29:32.967939] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:37.125 [2024-09-28 01:29:32.967944] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:16:37.125 [2024-09-28 01:29:32.967949] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:37.125 [2024-09-28 01:29:32.967954] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:37.125 [2024-09-28 01:29:32.967959] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:16:37.125 [2024-09-28 01:29:32.967964] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:37.125 [2024-09-28 01:29:32.967969] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:37.125 [2024-09-28 01:29:32.967974] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:37.125 [2024-09-28 01:29:32.967979] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:37.125 [2024-09-28 01:29:32.967984] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:37.125 [2024-09-28 01:29:32.967989] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:37.125 [2024-09-28 01:29:32.967993] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:37.125 [2024-09-28 01:29:32.967999] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:37.125 [2024-09-28 01:29:32.968004] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:37.125 [2024-09-28 01:29:32.968010] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:37.125 [2024-09-28 01:29:32.968015] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:37.125 [2024-09-28 01:29:32.968020] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:16:37.125 [2024-09-28 01:29:32.968025] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:37.125 [2024-09-28 01:29:32.968031] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:37.125 [2024-09-28 01:29:32.968036] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:16:37.125 [2024-09-28 01:29:32.968041] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:37.125 [2024-09-28 01:29:32.968046] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:37.125 [2024-09-28 01:29:32.968051] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:16:37.125 [2024-09-28 01:29:32.968057] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:37.125 [2024-09-28 01:29:32.968062] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:37.125 [2024-09-28 01:29:32.968067] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:16:37.126 [2024-09-28 01:29:32.968072] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:37.126 [2024-09-28 01:29:32.968077] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:37.126 [2024-09-28 01:29:32.968082] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:16:37.126 [2024-09-28 01:29:32.968087] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:37.126 [2024-09-28 01:29:32.968092] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:37.126 [2024-09-28 01:29:32.968098] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:37.126 [2024-09-28 01:29:32.968104] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:37.126 [2024-09-28 01:29:32.968109] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:37.126 [2024-09-28 01:29:32.968115] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:37.126 
[2024-09-28 01:29:32.968120] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:37.126 [2024-09-28 01:29:32.968125] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:37.126 [2024-09-28 01:29:32.968130] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:37.126 [2024-09-28 01:29:32.968135] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:37.126 [2024-09-28 01:29:32.968140] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:37.126 [2024-09-28 01:29:32.968146] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:37.126 [2024-09-28 01:29:32.968155] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:37.126 [2024-09-28 01:29:32.968162] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:37.126 [2024-09-28 01:29:32.968167] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:16:37.126 [2024-09-28 01:29:32.968172] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:16:37.126 [2024-09-28 01:29:32.968177] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:16:37.126 [2024-09-28 01:29:32.968184] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:16:37.126 [2024-09-28 01:29:32.968190] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:16:37.126 [2024-09-28 01:29:32.968211] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:16:37.126 [2024-09-28 01:29:32.968216] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:16:37.126 [2024-09-28 01:29:32.968222] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:16:37.126 [2024-09-28 01:29:32.968228] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:16:37.126 [2024-09-28 01:29:32.968234] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:16:37.126 [2024-09-28 01:29:32.968240] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:16:37.126 [2024-09-28 01:29:32.968245] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:16:37.126 [2024-09-28 01:29:32.968251] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:16:37.126 [2024-09-28 01:29:32.968257] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:37.126 [2024-09-28 01:29:32.968263] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:37.126 [2024-09-28 01:29:32.968270] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:37.126 [2024-09-28 01:29:32.968278] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:37.126 [2024-09-28 01:29:32.968283] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:37.126 [2024-09-28 01:29:32.968289] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:37.126 [2024-09-28 01:29:32.968294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.126 [2024-09-28 01:29:32.968302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:37.126 [2024-09-28 01:29:32.968308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.487 ms 00:16:37.126 [2024-09-28 01:29:32.968314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.126 [2024-09-28 01:29:33.005071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.126 [2024-09-28 01:29:33.005264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:37.126 [2024-09-28 01:29:33.005280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.717 ms 00:16:37.126 [2024-09-28 01:29:33.005287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.126 [2024-09-28 01:29:33.005417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.126 [2024-09-28 01:29:33.005427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:37.126 [2024-09-28 01:29:33.005434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:16:37.126 [2024-09-28 01:29:33.005440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.126 [2024-09-28 01:29:33.029181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.126 [2024-09-28 01:29:33.029218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:37.126 [2024-09-28 01:29:33.029228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.724 ms 00:16:37.126 [2024-09-28 01:29:33.029234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.126 [2024-09-28 01:29:33.029299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.126 [2024-09-28 01:29:33.029306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:37.126 [2024-09-28 01:29:33.029313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:37.126 [2024-09-28 01:29:33.029319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.126 [2024-09-28 01:29:33.029607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.126 [2024-09-28 01:29:33.029619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:37.126 [2024-09-28 01:29:33.029626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.272 ms 00:16:37.126 [2024-09-28 01:29:33.029633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.126 [2024-09-28 
01:29:33.029736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.126 [2024-09-28 01:29:33.029744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:37.126 [2024-09-28 01:29:33.029751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:16:37.126 [2024-09-28 01:29:33.029756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.126 [2024-09-28 01:29:33.039902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.126 [2024-09-28 01:29:33.039928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:37.126 [2024-09-28 01:29:33.039937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.130 ms 00:16:37.126 [2024-09-28 01:29:33.039942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.126 [2024-09-28 01:29:33.049870] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:16:37.126 [2024-09-28 01:29:33.049898] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:16:37.126 [2024-09-28 01:29:33.049907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.126 [2024-09-28 01:29:33.049914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:16:37.126 [2024-09-28 01:29:33.049921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.866 ms 00:16:37.126 [2024-09-28 01:29:33.049927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.385 [2024-09-28 01:29:33.068489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.385 [2024-09-28 01:29:33.068600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:16:37.385 [2024-09-28 01:29:33.068617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.511 ms 00:16:37.385 [2024-09-28 01:29:33.068623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.385 [2024-09-28 01:29:33.077691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.385 [2024-09-28 01:29:33.077717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:16:37.385 [2024-09-28 01:29:33.077725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.013 ms 00:16:37.385 [2024-09-28 01:29:33.077731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.385 [2024-09-28 01:29:33.086252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.385 [2024-09-28 01:29:33.086276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:16:37.385 [2024-09-28 01:29:33.086283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.478 ms 00:16:37.385 [2024-09-28 01:29:33.086289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.385 [2024-09-28 01:29:33.086750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.385 [2024-09-28 01:29:33.086764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:37.385 [2024-09-28 01:29:33.086772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.398 ms 00:16:37.385 [2024-09-28 01:29:33.086777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.385 [2024-09-28 01:29:33.129942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:16:37.385 [2024-09-28 01:29:33.129989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:16:37.385 [2024-09-28 01:29:33.129999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.147 ms 00:16:37.385 [2024-09-28 01:29:33.130006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.385 [2024-09-28 01:29:33.137825] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:37.385 [2024-09-28 01:29:33.149605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.385 [2024-09-28 01:29:33.149639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:37.385 [2024-09-28 01:29:33.149649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.513 ms 00:16:37.385 [2024-09-28 01:29:33.149655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.385 [2024-09-28 01:29:33.149745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.385 [2024-09-28 01:29:33.149754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:16:37.385 [2024-09-28 01:29:33.149761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:16:37.385 [2024-09-28 01:29:33.149767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.385 [2024-09-28 01:29:33.149809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.385 [2024-09-28 01:29:33.149819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:37.385 [2024-09-28 01:29:33.149826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:16:37.385 [2024-09-28 01:29:33.149832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.385 [2024-09-28 01:29:33.149848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.385 [2024-09-28 01:29:33.149855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:37.385 [2024-09-28 01:29:33.149861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:37.385 [2024-09-28 01:29:33.149868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.385 [2024-09-28 01:29:33.149894] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:16:37.385 [2024-09-28 01:29:33.149902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.385 [2024-09-28 01:29:33.149909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:16:37.385 [2024-09-28 01:29:33.149916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:16:37.385 [2024-09-28 01:29:33.149922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.385 [2024-09-28 01:29:33.167885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.385 [2024-09-28 01:29:33.168013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:37.385 [2024-09-28 01:29:33.168028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.948 ms 00:16:37.385 [2024-09-28 01:29:33.168034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.385 [2024-09-28 01:29:33.168107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.385 [2024-09-28 01:29:33.168114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:16:37.385 [2024-09-28 01:29:33.168121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:16:37.385 [2024-09-28 01:29:33.168127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.385 [2024-09-28 01:29:33.168834] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:37.385 [2024-09-28 01:29:33.171065] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 222.244 ms, result 0 00:16:37.385 [2024-09-28 01:29:33.171700] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:37.385 [2024-09-28 01:29:33.186367] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:43.132  Copying: 46/256 [MB] (46 MBps) Copying: 92/256 [MB] (46 MBps) Copying: 140/256 [MB] (47 MBps) Copying: 184/256 [MB] (44 MBps) Copying: 228/256 [MB] (44 MBps) Copying: 256/256 [MB] (average 45 MBps)[2024-09-28 01:29:38.828495] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:43.132 [2024-09-28 01:29:38.837803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.132 [2024-09-28 01:29:38.837841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:43.132 [2024-09-28 01:29:38.837855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:43.132 [2024-09-28 01:29:38.837864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.132 [2024-09-28 01:29:38.837885] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:43.132 [2024-09-28 01:29:38.840431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.132 [2024-09-28 01:29:38.840459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:43.132 [2024-09-28 01:29:38.840469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.533 ms 00:16:43.132 [2024-09-28 01:29:38.840478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.132 [2024-09-28 01:29:38.840731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.132 [2024-09-28 01:29:38.840747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:43.132 [2024-09-28 01:29:38.840755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.234 ms 00:16:43.132 [2024-09-28 01:29:38.840763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.132 [2024-09-28 01:29:38.844450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.132 [2024-09-28 01:29:38.844481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:43.132 [2024-09-28 01:29:38.844490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.673 ms 00:16:43.132 [2024-09-28 01:29:38.844497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.132 [2024-09-28 01:29:38.851548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.132 [2024-09-28 01:29:38.851698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:16:43.132 [2024-09-28 01:29:38.851719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.035 ms 00:16:43.132 [2024-09-28 01:29:38.851726] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.132 [2024-09-28 01:29:38.874516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.132 [2024-09-28 01:29:38.874641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:43.132 [2024-09-28 01:29:38.874658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.712 ms 00:16:43.132 [2024-09-28 01:29:38.874665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.132 [2024-09-28 01:29:38.888377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.132 [2024-09-28 01:29:38.888408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:43.132 [2024-09-28 01:29:38.888420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.690 ms 00:16:43.132 [2024-09-28 01:29:38.888427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.132 [2024-09-28 01:29:38.888557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.132 [2024-09-28 01:29:38.888567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:43.132 [2024-09-28 01:29:38.888576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:16:43.132 [2024-09-28 01:29:38.888584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.132 [2024-09-28 01:29:38.911395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.132 [2024-09-28 01:29:38.911424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:16:43.132 [2024-09-28 01:29:38.911434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.790 ms 00:16:43.132 [2024-09-28 01:29:38.911442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.132 [2024-09-28 01:29:38.933829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.132 [2024-09-28 01:29:38.933859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:16:43.132 [2024-09-28 01:29:38.933868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.367 ms 00:16:43.132 [2024-09-28 01:29:38.933875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.132 [2024-09-28 01:29:38.956097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.132 [2024-09-28 01:29:38.956131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:43.133 [2024-09-28 01:29:38.956140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.198 ms 00:16:43.133 [2024-09-28 01:29:38.956148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.133 [2024-09-28 01:29:38.978110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.133 [2024-09-28 01:29:38.978141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:43.133 [2024-09-28 01:29:38.978152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.876 ms 00:16:43.133 [2024-09-28 01:29:38.978159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.133 [2024-09-28 01:29:38.978179] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:43.133 [2024-09-28 01:29:38.978203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:43.133 [2024-09-28 01:29:38.978214] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:43.133 [2024-09-28 01:29:38.978222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:43.133 [2024-09-28 01:29:38.978229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:43.133 [2024-09-28 01:29:38.978237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:43.133 [2024-09-28 01:29:38.978245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:43.133 [2024-09-28 01:29:38.978252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:43.133 [2024-09-28 01:29:38.978260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:43.133 [2024-09-28 01:29:38.978267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:43.133 [2024-09-28 01:29:38.978275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:43.133 [2024-09-28 01:29:38.978283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:43.133 [2024-09-28 01:29:38.978290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:43.133 [2024-09-28 01:29:38.978298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:43.133 [2024-09-28 01:29:38.978305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:43.133 [2024-09-28 01:29:38.978313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:43.133 [2024-09-28 01:29:38.978320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:43.133 [2024-09-28 01:29:38.978328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:43.133 [2024-09-28 01:29:38.978335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:43.133 [2024-09-28 01:29:38.978342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:43.133 [2024-09-28 01:29:38.978349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:43.133 [2024-09-28 01:29:38.978357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:43.133 [2024-09-28 01:29:38.978364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:43.133 [2024-09-28 01:29:38.978371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:43.133 [2024-09-28 01:29:38.978379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:43.133 [2024-09-28 01:29:38.978386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:43.133 [2024-09-28 01:29:38.978393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:43.133 [2024-09-28 
01:29:38.978402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:43.133 [2024-09-28 01:29:38.978409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:43.133 [2024-09-28 01:29:38.978417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:43.133 [2024-09-28 01:29:38.978425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:43.133 [2024-09-28 01:29:38.978433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:43.133 [2024-09-28 01:29:38.978440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:43.133 [2024-09-28 01:29:38.978447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:43.133 [2024-09-28 01:29:38.978454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:43.133 [2024-09-28 01:29:38.978461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:43.133 [2024-09-28 01:29:38.978468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:43.133 [2024-09-28 01:29:38.978476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:43.133 [2024-09-28 01:29:38.978483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:43.133 [2024-09-28 01:29:38.978491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:43.133 [2024-09-28 01:29:38.978498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:43.133 [2024-09-28 01:29:38.978505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:43.133 [2024-09-28 01:29:38.978513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:43.133 [2024-09-28 01:29:38.978520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:43.133 [2024-09-28 01:29:38.978535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:43.133 [2024-09-28 01:29:38.978543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:43.133 [2024-09-28 01:29:38.978550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:43.133 [2024-09-28 01:29:38.978557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:43.133 [2024-09-28 01:29:38.978565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:43.133 [2024-09-28 01:29:38.978572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:43.133 [2024-09-28 01:29:38.978579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:43.133 [2024-09-28 01:29:38.978587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 
00:16:43.133 [2024-09-28 01:29:38.978594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:43.133 [2024-09-28 01:29:38.978601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:43.133 [2024-09-28 01:29:38.978608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:43.133 [2024-09-28 01:29:38.978616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:43.133 [2024-09-28 01:29:38.978623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:43.133 [2024-09-28 01:29:38.978630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:43.133 [2024-09-28 01:29:38.978637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:43.133 [2024-09-28 01:29:38.978644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:43.133 [2024-09-28 01:29:38.978652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:43.133 [2024-09-28 01:29:38.978659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:43.133 [2024-09-28 01:29:38.978667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:43.133 [2024-09-28 01:29:38.978675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:43.133 [2024-09-28 01:29:38.978682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:43.133 [2024-09-28 01:29:38.978689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:43.133 [2024-09-28 01:29:38.978696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:43.133 [2024-09-28 01:29:38.978703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:43.133 [2024-09-28 01:29:38.978711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:43.133 [2024-09-28 01:29:38.978718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:43.134 [2024-09-28 01:29:38.978725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:43.134 [2024-09-28 01:29:38.978732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:43.134 [2024-09-28 01:29:38.978739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:43.134 [2024-09-28 01:29:38.978747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:43.134 [2024-09-28 01:29:38.978754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:43.134 [2024-09-28 01:29:38.978761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:43.134 [2024-09-28 01:29:38.978768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 
wr_cnt: 0 state: free 00:16:43.134 [2024-09-28 01:29:38.978776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:43.134 [2024-09-28 01:29:38.978783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:43.134 [2024-09-28 01:29:38.978791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:43.134 [2024-09-28 01:29:38.978798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:43.134 [2024-09-28 01:29:38.978805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:43.134 [2024-09-28 01:29:38.978812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:43.134 [2024-09-28 01:29:38.978819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:43.134 [2024-09-28 01:29:38.978827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:43.134 [2024-09-28 01:29:38.978835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:43.134 [2024-09-28 01:29:38.978842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:43.134 [2024-09-28 01:29:38.978849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:43.134 [2024-09-28 01:29:38.978857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:43.134 [2024-09-28 01:29:38.978864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:43.134 [2024-09-28 01:29:38.978871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:43.134 [2024-09-28 01:29:38.978878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:43.134 [2024-09-28 01:29:38.978885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:43.134 [2024-09-28 01:29:38.978892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:43.134 [2024-09-28 01:29:38.978901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:43.134 [2024-09-28 01:29:38.978909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:43.134 [2024-09-28 01:29:38.978916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:43.134 [2024-09-28 01:29:38.978924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:43.134 [2024-09-28 01:29:38.978931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:43.134 [2024-09-28 01:29:38.978939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:43.134 [2024-09-28 01:29:38.978953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:43.134 [2024-09-28 01:29:38.978968] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl0] 00:16:43.134 [2024-09-28 01:29:38.978977] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d94572d0-4efc-4bf8-b84f-0a6faae4c084 00:16:43.134 [2024-09-28 01:29:38.978985] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:43.134 [2024-09-28 01:29:38.978992] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:43.134 [2024-09-28 01:29:38.979003] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:43.134 [2024-09-28 01:29:38.979013] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:43.134 [2024-09-28 01:29:38.979020] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:43.134 [2024-09-28 01:29:38.979028] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:43.134 [2024-09-28 01:29:38.979035] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:43.134 [2024-09-28 01:29:38.979042] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:43.134 [2024-09-28 01:29:38.979048] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:43.134 [2024-09-28 01:29:38.979055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.134 [2024-09-28 01:29:38.979062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:43.134 [2024-09-28 01:29:38.979070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.876 ms 00:16:43.134 [2024-09-28 01:29:38.979078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.134 [2024-09-28 01:29:38.991141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.134 [2024-09-28 01:29:38.991174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:43.134 [2024-09-28 01:29:38.991184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.047 ms 00:16:43.134 [2024-09-28 01:29:38.991203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.134 [2024-09-28 01:29:38.991563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.134 [2024-09-28 01:29:38.991580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:43.134 [2024-09-28 01:29:38.991588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.329 ms 00:16:43.134 [2024-09-28 01:29:38.991596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.134 [2024-09-28 01:29:39.021392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:43.134 [2024-09-28 01:29:39.021425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:43.134 [2024-09-28 01:29:39.021435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:43.134 [2024-09-28 01:29:39.021442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.134 [2024-09-28 01:29:39.021510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:43.134 [2024-09-28 01:29:39.021519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:43.134 [2024-09-28 01:29:39.021526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:43.134 [2024-09-28 01:29:39.021533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.134 [2024-09-28 01:29:39.021571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:43.134 
[2024-09-28 01:29:39.021584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:43.134 [2024-09-28 01:29:39.021591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:43.134 [2024-09-28 01:29:39.021599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.134 [2024-09-28 01:29:39.021616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:43.134 [2024-09-28 01:29:39.021624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:43.134 [2024-09-28 01:29:39.021631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:43.134 [2024-09-28 01:29:39.021638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.393 [2024-09-28 01:29:39.097811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:43.393 [2024-09-28 01:29:39.098015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:43.393 [2024-09-28 01:29:39.098032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:43.393 [2024-09-28 01:29:39.098040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.393 [2024-09-28 01:29:39.160527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:43.393 [2024-09-28 01:29:39.160725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:43.393 [2024-09-28 01:29:39.160741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:43.393 [2024-09-28 01:29:39.160748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.393 [2024-09-28 01:29:39.160805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:43.393 [2024-09-28 01:29:39.160815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:43.393 [2024-09-28 01:29:39.160827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:43.393 [2024-09-28 01:29:39.160835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.393 [2024-09-28 01:29:39.160863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:43.393 [2024-09-28 01:29:39.160871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:43.393 [2024-09-28 01:29:39.160878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:43.393 [2024-09-28 01:29:39.160885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.393 [2024-09-28 01:29:39.160976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:43.393 [2024-09-28 01:29:39.160986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:43.393 [2024-09-28 01:29:39.160994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:43.393 [2024-09-28 01:29:39.161004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.393 [2024-09-28 01:29:39.161033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:43.393 [2024-09-28 01:29:39.161041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:43.393 [2024-09-28 01:29:39.161049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:43.393 [2024-09-28 01:29:39.161056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.393 [2024-09-28 01:29:39.161089] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:43.393 [2024-09-28 01:29:39.161098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:43.393 [2024-09-28 01:29:39.161105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:43.393 [2024-09-28 01:29:39.161115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.393 [2024-09-28 01:29:39.161154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:43.393 [2024-09-28 01:29:39.161163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:43.393 [2024-09-28 01:29:39.161171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:43.393 [2024-09-28 01:29:39.161178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.393 [2024-09-28 01:29:39.161332] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 323.519 ms, result 0 00:16:44.328 00:16:44.328 00:16:44.328 01:29:39 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:16:44.328 01:29:39 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:16:44.715 01:29:40 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:44.715 [2024-09-28 01:29:40.551484] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:16:44.715 [2024-09-28 01:29:40.551732] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74344 ] 00:16:44.973 [2024-09-28 01:29:40.701228] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:44.973 [2024-09-28 01:29:40.875732] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:45.231 [2024-09-28 01:29:41.123790] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:45.231 [2024-09-28 01:29:41.123853] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:45.491 [2024-09-28 01:29:41.278107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.491 [2024-09-28 01:29:41.278336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:45.491 [2024-09-28 01:29:41.278358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:45.491 [2024-09-28 01:29:41.278367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.491 [2024-09-28 01:29:41.281051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.491 [2024-09-28 01:29:41.281084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:45.491 [2024-09-28 01:29:41.281094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.658 ms 00:16:45.491 [2024-09-28 01:29:41.281104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.491 [2024-09-28 01:29:41.281181] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:45.491 [2024-09-28 01:29:41.281858] mngt/ftl_mngt_bdev.c: 
236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:45.491 [2024-09-28 01:29:41.281877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.491 [2024-09-28 01:29:41.281887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:45.491 [2024-09-28 01:29:41.281896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.703 ms 00:16:45.491 [2024-09-28 01:29:41.281903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.491 [2024-09-28 01:29:41.282941] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:16:45.491 [2024-09-28 01:29:41.295101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.491 [2024-09-28 01:29:41.295134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:16:45.491 [2024-09-28 01:29:41.295145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.161 ms 00:16:45.491 [2024-09-28 01:29:41.295153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.491 [2024-09-28 01:29:41.295248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.491 [2024-09-28 01:29:41.295260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:16:45.491 [2024-09-28 01:29:41.295271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:16:45.491 [2024-09-28 01:29:41.295278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.491 [2024-09-28 01:29:41.299775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.491 [2024-09-28 01:29:41.299804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:45.491 [2024-09-28 01:29:41.299813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.457 ms 00:16:45.491 [2024-09-28 01:29:41.299821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.491 [2024-09-28 01:29:41.299907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.491 [2024-09-28 01:29:41.299919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:45.491 [2024-09-28 01:29:41.299927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:16:45.491 [2024-09-28 01:29:41.299934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.491 [2024-09-28 01:29:41.299957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.491 [2024-09-28 01:29:41.299966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:45.491 [2024-09-28 01:29:41.299974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:45.491 [2024-09-28 01:29:41.299981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.491 [2024-09-28 01:29:41.300000] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:45.491 [2024-09-28 01:29:41.303142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.491 [2024-09-28 01:29:41.303168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:45.491 [2024-09-28 01:29:41.303177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.147 ms 00:16:45.491 [2024-09-28 01:29:41.303184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.491 [2024-09-28 
01:29:41.303230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.491 [2024-09-28 01:29:41.303243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:45.491 [2024-09-28 01:29:41.303251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:16:45.491 [2024-09-28 01:29:41.303258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.491 [2024-09-28 01:29:41.303276] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:16:45.491 [2024-09-28 01:29:41.303292] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:16:45.491 [2024-09-28 01:29:41.303326] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:16:45.491 [2024-09-28 01:29:41.303341] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:16:45.491 [2024-09-28 01:29:41.303444] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:45.491 [2024-09-28 01:29:41.303491] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:45.491 [2024-09-28 01:29:41.303501] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:16:45.491 [2024-09-28 01:29:41.303510] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:45.491 [2024-09-28 01:29:41.303519] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:45.491 [2024-09-28 01:29:41.303527] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:45.491 [2024-09-28 01:29:41.303534] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:45.491 [2024-09-28 01:29:41.303541] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:45.491 [2024-09-28 01:29:41.303548] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:45.491 [2024-09-28 01:29:41.303556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.491 [2024-09-28 01:29:41.303566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:45.491 [2024-09-28 01:29:41.303573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.282 ms 00:16:45.491 [2024-09-28 01:29:41.303580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.491 [2024-09-28 01:29:41.303667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.491 [2024-09-28 01:29:41.303675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:45.491 [2024-09-28 01:29:41.303682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:16:45.491 [2024-09-28 01:29:41.303689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.491 [2024-09-28 01:29:41.303785] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:45.491 [2024-09-28 01:29:41.303794] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:45.491 [2024-09-28 01:29:41.303804] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:45.491 [2024-09-28 01:29:41.303812] ftl_layout.c: 133:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.12 MiB 00:16:45.491 [2024-09-28 01:29:41.303819] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:45.491 [2024-09-28 01:29:41.303826] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:45.491 [2024-09-28 01:29:41.303833] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:45.491 [2024-09-28 01:29:41.303839] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:45.491 [2024-09-28 01:29:41.303846] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:45.491 [2024-09-28 01:29:41.303853] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:45.491 [2024-09-28 01:29:41.303859] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:45.491 [2024-09-28 01:29:41.303873] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:45.491 [2024-09-28 01:29:41.303880] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:45.491 [2024-09-28 01:29:41.303887] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:45.491 [2024-09-28 01:29:41.303894] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:16:45.491 [2024-09-28 01:29:41.303900] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:45.491 [2024-09-28 01:29:41.303907] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:45.491 [2024-09-28 01:29:41.303913] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:16:45.491 [2024-09-28 01:29:41.303919] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:45.491 [2024-09-28 01:29:41.303926] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:45.491 [2024-09-28 01:29:41.303933] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:45.491 [2024-09-28 01:29:41.303940] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:45.491 [2024-09-28 01:29:41.303947] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:45.491 [2024-09-28 01:29:41.303953] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:45.491 [2024-09-28 01:29:41.303961] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:45.491 [2024-09-28 01:29:41.303967] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:45.491 [2024-09-28 01:29:41.303974] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:45.491 [2024-09-28 01:29:41.303980] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:45.491 [2024-09-28 01:29:41.303987] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:45.491 [2024-09-28 01:29:41.303994] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:16:45.491 [2024-09-28 01:29:41.304000] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:45.491 [2024-09-28 01:29:41.304007] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:45.491 [2024-09-28 01:29:41.304013] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:16:45.491 [2024-09-28 01:29:41.304019] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:45.491 [2024-09-28 01:29:41.304026] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:45.491 [2024-09-28 01:29:41.304032] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:16:45.491 [2024-09-28 01:29:41.304039] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:45.491 [2024-09-28 01:29:41.304046] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:45.491 [2024-09-28 01:29:41.304053] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:16:45.491 [2024-09-28 01:29:41.304059] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:45.491 [2024-09-28 01:29:41.304065] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:45.491 [2024-09-28 01:29:41.304072] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:16:45.491 [2024-09-28 01:29:41.304078] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:45.491 [2024-09-28 01:29:41.304086] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:45.491 [2024-09-28 01:29:41.304094] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:45.491 [2024-09-28 01:29:41.304101] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:45.491 [2024-09-28 01:29:41.304108] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:45.491 [2024-09-28 01:29:41.304115] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:45.491 [2024-09-28 01:29:41.304122] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:45.491 [2024-09-28 01:29:41.304128] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:45.491 [2024-09-28 01:29:41.304135] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:45.491 [2024-09-28 01:29:41.304142] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:45.491 [2024-09-28 01:29:41.304148] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:45.491 [2024-09-28 01:29:41.304156] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:45.491 [2024-09-28 01:29:41.304167] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:45.491 [2024-09-28 01:29:41.304176] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:45.491 [2024-09-28 01:29:41.304184] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:16:45.491 [2024-09-28 01:29:41.304201] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:16:45.491 [2024-09-28 01:29:41.304209] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:16:45.491 [2024-09-28 01:29:41.304216] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:16:45.491 [2024-09-28 01:29:41.304223] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:16:45.491 [2024-09-28 01:29:41.304230] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:16:45.491 [2024-09-28 
01:29:41.304237] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:16:45.491 [2024-09-28 01:29:41.304244] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:16:45.491 [2024-09-28 01:29:41.304251] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:16:45.491 [2024-09-28 01:29:41.304258] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:16:45.491 [2024-09-28 01:29:41.304265] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:16:45.491 [2024-09-28 01:29:41.304272] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:16:45.492 [2024-09-28 01:29:41.304280] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:16:45.492 [2024-09-28 01:29:41.304287] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:45.492 [2024-09-28 01:29:41.304295] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:45.492 [2024-09-28 01:29:41.304303] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:45.492 [2024-09-28 01:29:41.304311] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:45.492 [2024-09-28 01:29:41.304318] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:45.492 [2024-09-28 01:29:41.304326] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:45.492 [2024-09-28 01:29:41.304333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.492 [2024-09-28 01:29:41.304343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:45.492 [2024-09-28 01:29:41.304351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.616 ms 00:16:45.492 [2024-09-28 01:29:41.304358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.492 [2024-09-28 01:29:41.347796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.492 [2024-09-28 01:29:41.347840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:45.492 [2024-09-28 01:29:41.347852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.376 ms 00:16:45.492 [2024-09-28 01:29:41.347860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.492 [2024-09-28 01:29:41.347992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.492 [2024-09-28 01:29:41.348004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:45.492 [2024-09-28 01:29:41.348013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:16:45.492 [2024-09-28 01:29:41.348020] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.492 [2024-09-28 01:29:41.377727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.492 [2024-09-28 01:29:41.377757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:45.492 [2024-09-28 01:29:41.377767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.687 ms 00:16:45.492 [2024-09-28 01:29:41.377774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.492 [2024-09-28 01:29:41.377851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.492 [2024-09-28 01:29:41.377862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:45.492 [2024-09-28 01:29:41.377870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:45.492 [2024-09-28 01:29:41.377877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.492 [2024-09-28 01:29:41.378173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.492 [2024-09-28 01:29:41.378203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:45.492 [2024-09-28 01:29:41.378213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.273 ms 00:16:45.492 [2024-09-28 01:29:41.378220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.492 [2024-09-28 01:29:41.378343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.492 [2024-09-28 01:29:41.378352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:45.492 [2024-09-28 01:29:41.378359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:16:45.492 [2024-09-28 01:29:41.378366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.492 [2024-09-28 01:29:41.390815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.492 [2024-09-28 01:29:41.390947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:45.492 [2024-09-28 01:29:41.390962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.429 ms 00:16:45.492 [2024-09-28 01:29:41.390970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.492 [2024-09-28 01:29:41.403310] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:16:45.492 [2024-09-28 01:29:41.403343] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:16:45.492 [2024-09-28 01:29:41.403353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.492 [2024-09-28 01:29:41.403361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:16:45.492 [2024-09-28 01:29:41.403369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.286 ms 00:16:45.492 [2024-09-28 01:29:41.403376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.750 [2024-09-28 01:29:41.427729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.750 [2024-09-28 01:29:41.427762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:16:45.750 [2024-09-28 01:29:41.427778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.286 ms 00:16:45.750 [2024-09-28 01:29:41.427786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.750 [2024-09-28 
01:29:41.439147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.750 [2024-09-28 01:29:41.439176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:16:45.750 [2024-09-28 01:29:41.439185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.308 ms 00:16:45.750 [2024-09-28 01:29:41.439204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.750 [2024-09-28 01:29:41.450523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.750 [2024-09-28 01:29:41.450551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:16:45.750 [2024-09-28 01:29:41.450561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.257 ms 00:16:45.750 [2024-09-28 01:29:41.450568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.750 [2024-09-28 01:29:41.451164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.750 [2024-09-28 01:29:41.451187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:45.750 [2024-09-28 01:29:41.451221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.510 ms 00:16:45.750 [2024-09-28 01:29:41.451229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.750 [2024-09-28 01:29:41.504740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.750 [2024-09-28 01:29:41.504795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:16:45.750 [2024-09-28 01:29:41.504808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.487 ms 00:16:45.750 [2024-09-28 01:29:41.504816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.750 [2024-09-28 01:29:41.515065] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:45.750 [2024-09-28 01:29:41.528555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.750 [2024-09-28 01:29:41.528710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:45.750 [2024-09-28 01:29:41.528728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.621 ms 00:16:45.750 [2024-09-28 01:29:41.528735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.750 [2024-09-28 01:29:41.528828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.750 [2024-09-28 01:29:41.528839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:16:45.750 [2024-09-28 01:29:41.528848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:16:45.750 [2024-09-28 01:29:41.528855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.750 [2024-09-28 01:29:41.528903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.750 [2024-09-28 01:29:41.528914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:45.750 [2024-09-28 01:29:41.528923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:16:45.750 [2024-09-28 01:29:41.528930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.750 [2024-09-28 01:29:41.528949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.750 [2024-09-28 01:29:41.528957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:45.750 [2024-09-28 01:29:41.528964] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:45.750 [2024-09-28 01:29:41.528971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.750 [2024-09-28 01:29:41.529000] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:16:45.750 [2024-09-28 01:29:41.529010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.750 [2024-09-28 01:29:41.529019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:16:45.750 [2024-09-28 01:29:41.529027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:16:45.750 [2024-09-28 01:29:41.529033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.750 [2024-09-28 01:29:41.552002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.751 [2024-09-28 01:29:41.552125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:45.751 [2024-09-28 01:29:41.552142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.951 ms 00:16:45.751 [2024-09-28 01:29:41.552150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.751 [2024-09-28 01:29:41.552263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.751 [2024-09-28 01:29:41.552274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:45.751 [2024-09-28 01:29:41.552283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:16:45.751 [2024-09-28 01:29:41.552290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.751 [2024-09-28 01:29:41.553075] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:45.751 [2024-09-28 01:29:41.556153] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 274.681 ms, result 0 00:16:45.751 [2024-09-28 01:29:41.556939] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:45.751 [2024-09-28 01:29:41.569707] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:45.751  Copying: 4096/4096 [kB] (average 40 MBps)[2024-09-28 01:29:41.672478] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:45.751 [2024-09-28 01:29:41.681542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:45.751 [2024-09-28 01:29:41.681577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:45.751 [2024-09-28 01:29:41.681590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:45.751 [2024-09-28 01:29:41.681598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:45.751 [2024-09-28 01:29:41.681619] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:46.011 [2024-09-28 01:29:41.684180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.011 [2024-09-28 01:29:41.684214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:46.011 [2024-09-28 01:29:41.684224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.549 ms 00:16:46.011 [2024-09-28 01:29:41.684232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:16:46.011 [2024-09-28 01:29:41.685677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.011 [2024-09-28 01:29:41.685711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:46.011 [2024-09-28 01:29:41.685720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.423 ms 00:16:46.011 [2024-09-28 01:29:41.685728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.011 [2024-09-28 01:29:41.689703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.011 [2024-09-28 01:29:41.689727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:46.011 [2024-09-28 01:29:41.689736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.961 ms 00:16:46.011 [2024-09-28 01:29:41.689744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.011 [2024-09-28 01:29:41.696663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.011 [2024-09-28 01:29:41.696687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:16:46.011 [2024-09-28 01:29:41.696701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.894 ms 00:16:46.011 [2024-09-28 01:29:41.696709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.011 [2024-09-28 01:29:41.719303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.011 [2024-09-28 01:29:41.719332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:46.011 [2024-09-28 01:29:41.719343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.542 ms 00:16:46.011 [2024-09-28 01:29:41.719351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.011 [2024-09-28 01:29:41.733325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.011 [2024-09-28 01:29:41.733354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:46.011 [2024-09-28 01:29:41.733366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.942 ms 00:16:46.011 [2024-09-28 01:29:41.733374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.011 [2024-09-28 01:29:41.733490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.011 [2024-09-28 01:29:41.733500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:46.011 [2024-09-28 01:29:41.733508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:16:46.011 [2024-09-28 01:29:41.733515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.011 [2024-09-28 01:29:41.756595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.011 [2024-09-28 01:29:41.756635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:16:46.011 [2024-09-28 01:29:41.756645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.061 ms 00:16:46.011 [2024-09-28 01:29:41.756652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.011 [2024-09-28 01:29:41.778544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.011 [2024-09-28 01:29:41.778663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:16:46.011 [2024-09-28 01:29:41.778679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.860 ms 00:16:46.011 [2024-09-28 01:29:41.778686] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.011 [2024-09-28 01:29:41.800702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.011 [2024-09-28 01:29:41.800731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:46.011 [2024-09-28 01:29:41.800741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.986 ms 00:16:46.011 [2024-09-28 01:29:41.800748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.011 [2024-09-28 01:29:41.822684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.011 [2024-09-28 01:29:41.822711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:46.011 [2024-09-28 01:29:41.822721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.871 ms 00:16:46.011 [2024-09-28 01:29:41.822729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.011 [2024-09-28 01:29:41.822760] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:46.011 [2024-09-28 01:29:41.822774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:46.011 [2024-09-28 01:29:41.822783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:46.011 [2024-09-28 01:29:41.822791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:46.011 [2024-09-28 01:29:41.822798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:46.011 [2024-09-28 01:29:41.822806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:46.011 [2024-09-28 01:29:41.822813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:46.011 [2024-09-28 01:29:41.822820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:46.011 [2024-09-28 01:29:41.822827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:46.011 [2024-09-28 01:29:41.822835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:46.011 [2024-09-28 01:29:41.822842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:46.011 [2024-09-28 01:29:41.822849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:46.011 [2024-09-28 01:29:41.822857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:46.011 [2024-09-28 01:29:41.822865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:46.011 [2024-09-28 01:29:41.822872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:46.011 [2024-09-28 01:29:41.822880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:46.011 [2024-09-28 01:29:41.822887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:46.011 [2024-09-28 01:29:41.822894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:46.011 [2024-09-28 01:29:41.822902] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:46.011 [2024-09-28 01:29:41.822909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:46.011 [2024-09-28 01:29:41.822916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:46.011 [2024-09-28 01:29:41.822923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:46.011 [2024-09-28 01:29:41.822930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:46.011 [2024-09-28 01:29:41.822937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:46.011 [2024-09-28 01:29:41.822945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:46.011 [2024-09-28 01:29:41.822952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:46.011 [2024-09-28 01:29:41.822959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:46.012 [2024-09-28 01:29:41.822967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:46.012 [2024-09-28 01:29:41.822975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:46.012 [2024-09-28 01:29:41.822982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:46.012 [2024-09-28 01:29:41.822991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:46.012 [2024-09-28 01:29:41.822998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:46.012 [2024-09-28 01:29:41.823006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:46.012 [2024-09-28 01:29:41.823013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:46.012 [2024-09-28 01:29:41.823020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:46.012 [2024-09-28 01:29:41.823028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:46.012 [2024-09-28 01:29:41.823035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:46.012 [2024-09-28 01:29:41.823042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:46.012 [2024-09-28 01:29:41.823050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:46.012 [2024-09-28 01:29:41.823057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:46.012 [2024-09-28 01:29:41.823064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:46.012 [2024-09-28 01:29:41.823071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:46.012 [2024-09-28 01:29:41.823079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:46.012 
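The dump above comes from ftl_dev_dump_bands (ftl_debug.c); each line reports one band's valid blocks out of its capacity (0 / 261120 here), its write count, and its state — in this run every band is free with wr_cnt 0. A minimal shell sketch to condense the hundred-plus band lines into a per-state tally, again assuming the log was saved as ftl_trim.log (hypothetical name):

  # Count bands by reported state ('free' is the only state seen in this run); illustrative only.
  grep -oE 'state: [a-z]+' ftl_trim.log | sort | uniq -c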
[2024-09-28 01:29:41.823086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:46.012 [2024-09-28 01:29:41.823094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:46.012 [2024-09-28 01:29:41.823101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:46.012 [2024-09-28 01:29:41.823108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:46.012 [2024-09-28 01:29:41.823116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:46.012 [2024-09-28 01:29:41.823123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:46.012 [2024-09-28 01:29:41.823130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:46.012 [2024-09-28 01:29:41.823138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:46.012 [2024-09-28 01:29:41.823145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:46.012 [2024-09-28 01:29:41.823153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:46.012 [2024-09-28 01:29:41.823160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:46.012 [2024-09-28 01:29:41.823167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:46.012 [2024-09-28 01:29:41.823174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:46.012 [2024-09-28 01:29:41.823182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:46.012 [2024-09-28 01:29:41.823190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:46.012 [2024-09-28 01:29:41.823218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:46.012 [2024-09-28 01:29:41.823225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:46.012 [2024-09-28 01:29:41.823233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:46.012 [2024-09-28 01:29:41.823240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:46.012 [2024-09-28 01:29:41.823250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:46.012 [2024-09-28 01:29:41.823257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:46.012 [2024-09-28 01:29:41.823265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:46.012 [2024-09-28 01:29:41.823273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:46.012 [2024-09-28 01:29:41.823280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:46.012 [2024-09-28 01:29:41.823288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 
state: free 00:16:46.012 [2024-09-28 01:29:41.823296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:46.012 [2024-09-28 01:29:41.823303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:46.012 [2024-09-28 01:29:41.823311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:46.012 [2024-09-28 01:29:41.823318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:46.012 [2024-09-28 01:29:41.823326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:46.012 [2024-09-28 01:29:41.823333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:46.012 [2024-09-28 01:29:41.823340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:46.012 [2024-09-28 01:29:41.823348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:46.012 [2024-09-28 01:29:41.823355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:46.012 [2024-09-28 01:29:41.823363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:46.012 [2024-09-28 01:29:41.823370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:46.012 [2024-09-28 01:29:41.823378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:46.012 [2024-09-28 01:29:41.823385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:46.012 [2024-09-28 01:29:41.823392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:46.012 [2024-09-28 01:29:41.823400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:46.012 [2024-09-28 01:29:41.823408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:46.012 [2024-09-28 01:29:41.823415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:46.012 [2024-09-28 01:29:41.823423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:46.012 [2024-09-28 01:29:41.823431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:46.012 [2024-09-28 01:29:41.823438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:46.012 [2024-09-28 01:29:41.823445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:46.012 [2024-09-28 01:29:41.823452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:46.012 [2024-09-28 01:29:41.823460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:46.012 [2024-09-28 01:29:41.823467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:46.012 [2024-09-28 01:29:41.823475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 
0 / 261120 wr_cnt: 0 state: free 00:16:46.012 [2024-09-28 01:29:41.823482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:46.012 [2024-09-28 01:29:41.823490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:46.012 [2024-09-28 01:29:41.823497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:46.012 [2024-09-28 01:29:41.823504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:46.012 [2024-09-28 01:29:41.823519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:46.012 [2024-09-28 01:29:41.823527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:46.012 [2024-09-28 01:29:41.823534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:46.012 [2024-09-28 01:29:41.823548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:46.012 [2024-09-28 01:29:41.823564] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:46.012 [2024-09-28 01:29:41.823572] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d94572d0-4efc-4bf8-b84f-0a6faae4c084 00:16:46.012 [2024-09-28 01:29:41.823580] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:46.012 [2024-09-28 01:29:41.823587] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:46.012 [2024-09-28 01:29:41.823596] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:46.012 [2024-09-28 01:29:41.823604] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:46.012 [2024-09-28 01:29:41.823611] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:46.012 [2024-09-28 01:29:41.823618] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:46.012 [2024-09-28 01:29:41.823626] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:46.012 [2024-09-28 01:29:41.823632] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:46.012 [2024-09-28 01:29:41.823638] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:46.012 [2024-09-28 01:29:41.823646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.012 [2024-09-28 01:29:41.823653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:46.012 [2024-09-28 01:29:41.823661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.886 ms 00:16:46.012 [2024-09-28 01:29:41.823667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.012 [2024-09-28 01:29:41.835606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.012 [2024-09-28 01:29:41.835637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:46.012 [2024-09-28 01:29:41.835647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.922 ms 00:16:46.012 [2024-09-28 01:29:41.835654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.012 [2024-09-28 01:29:41.835988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.012 [2024-09-28 01:29:41.836002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize 
P2L checkpointing 00:16:46.013 [2024-09-28 01:29:41.836010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.305 ms 00:16:46.013 [2024-09-28 01:29:41.836017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.013 [2024-09-28 01:29:41.865797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:46.013 [2024-09-28 01:29:41.865830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:46.013 [2024-09-28 01:29:41.865840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:46.013 [2024-09-28 01:29:41.865848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.013 [2024-09-28 01:29:41.865912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:46.013 [2024-09-28 01:29:41.865920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:46.013 [2024-09-28 01:29:41.865928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:46.013 [2024-09-28 01:29:41.865935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.013 [2024-09-28 01:29:41.865974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:46.013 [2024-09-28 01:29:41.865985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:46.013 [2024-09-28 01:29:41.865993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:46.013 [2024-09-28 01:29:41.866001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.013 [2024-09-28 01:29:41.866018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:46.013 [2024-09-28 01:29:41.866025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:46.013 [2024-09-28 01:29:41.866032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:46.013 [2024-09-28 01:29:41.866039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.013 [2024-09-28 01:29:41.940276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:46.272 [2024-09-28 01:29:41.940459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:46.272 [2024-09-28 01:29:41.940476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:46.272 [2024-09-28 01:29:41.940484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.272 [2024-09-28 01:29:42.002251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:46.272 [2024-09-28 01:29:42.002295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:46.272 [2024-09-28 01:29:42.002305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:46.272 [2024-09-28 01:29:42.002313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.272 [2024-09-28 01:29:42.002363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:46.272 [2024-09-28 01:29:42.002372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:46.272 [2024-09-28 01:29:42.002385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:46.272 [2024-09-28 01:29:42.002392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.272 [2024-09-28 01:29:42.002420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:46.272 [2024-09-28 
01:29:42.002428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:46.272 [2024-09-28 01:29:42.002436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:46.272 [2024-09-28 01:29:42.002443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.272 [2024-09-28 01:29:42.002529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:46.272 [2024-09-28 01:29:42.002538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:46.272 [2024-09-28 01:29:42.002546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:46.272 [2024-09-28 01:29:42.002556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.272 [2024-09-28 01:29:42.002586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:46.272 [2024-09-28 01:29:42.002596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:46.272 [2024-09-28 01:29:42.002603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:46.272 [2024-09-28 01:29:42.002611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.272 [2024-09-28 01:29:42.002643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:46.272 [2024-09-28 01:29:42.002651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:46.272 [2024-09-28 01:29:42.002658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:46.272 [2024-09-28 01:29:42.002668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.272 [2024-09-28 01:29:42.002710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:46.272 [2024-09-28 01:29:42.002719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:46.272 [2024-09-28 01:29:42.002726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:46.272 [2024-09-28 01:29:42.002734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.272 [2024-09-28 01:29:42.002864] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 321.308 ms, result 0 00:16:46.839 00:16:46.839 00:16:47.098 01:29:42 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=74379 00:16:47.098 01:29:42 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 74379 00:16:47.098 01:29:42 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 74379 ']' 00:16:47.098 01:29:42 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:47.098 01:29:42 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:47.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:47.098 01:29:42 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:47.098 01:29:42 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:16:47.098 01:29:42 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:47.098 01:29:42 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:16:47.098 [2024-09-28 01:29:42.864890] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:16:47.098 [2024-09-28 01:29:42.865013] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74379 ] 00:16:47.099 [2024-09-28 01:29:43.016612] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:47.357 [2024-09-28 01:29:43.191831] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:47.924 01:29:43 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:47.924 01:29:43 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:16:47.924 01:29:43 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:16:48.182 [2024-09-28 01:29:43.990572] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:48.182 [2024-09-28 01:29:43.990630] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:48.442 [2024-09-28 01:29:44.160970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.442 [2024-09-28 01:29:44.161166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:48.442 [2024-09-28 01:29:44.161189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:48.442 [2024-09-28 01:29:44.161215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.442 [2024-09-28 01:29:44.163785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.442 [2024-09-28 01:29:44.163818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:48.442 [2024-09-28 01:29:44.163832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.548 ms 00:16:48.442 [2024-09-28 01:29:44.163839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.442 [2024-09-28 01:29:44.163939] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:48.442 [2024-09-28 01:29:44.164712] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:48.442 [2024-09-28 01:29:44.164741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.442 [2024-09-28 01:29:44.164749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:48.442 [2024-09-28 01:29:44.164759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.811 ms 00:16:48.442 [2024-09-28 01:29:44.164766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.442 [2024-09-28 01:29:44.165806] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:16:48.442 [2024-09-28 01:29:44.177987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.442 [2024-09-28 01:29:44.178022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:16:48.442 [2024-09-28 01:29:44.178034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.184 ms 00:16:48.442 [2024-09-28 01:29:44.178044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.442 [2024-09-28 01:29:44.178129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.442 [2024-09-28 01:29:44.178144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:16:48.442 [2024-09-28 01:29:44.178152] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:16:48.442 [2024-09-28 01:29:44.178161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.442 [2024-09-28 01:29:44.182687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.442 [2024-09-28 01:29:44.182721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:48.442 [2024-09-28 01:29:44.182731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.463 ms 00:16:48.442 [2024-09-28 01:29:44.182739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.442 [2024-09-28 01:29:44.182837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.442 [2024-09-28 01:29:44.182849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:48.442 [2024-09-28 01:29:44.182858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:16:48.442 [2024-09-28 01:29:44.182867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.442 [2024-09-28 01:29:44.182890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.442 [2024-09-28 01:29:44.182901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:48.442 [2024-09-28 01:29:44.182909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:16:48.442 [2024-09-28 01:29:44.182917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.442 [2024-09-28 01:29:44.182940] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:48.442 [2024-09-28 01:29:44.186208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.442 [2024-09-28 01:29:44.186233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:48.442 [2024-09-28 01:29:44.186244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.271 ms 00:16:48.442 [2024-09-28 01:29:44.186253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.443 [2024-09-28 01:29:44.186290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.443 [2024-09-28 01:29:44.186298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:48.443 [2024-09-28 01:29:44.186307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:16:48.443 [2024-09-28 01:29:44.186314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.443 [2024-09-28 01:29:44.186336] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:16:48.443 [2024-09-28 01:29:44.186353] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:16:48.443 [2024-09-28 01:29:44.186392] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:16:48.443 [2024-09-28 01:29:44.186409] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:16:48.443 [2024-09-28 01:29:44.186513] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:48.443 [2024-09-28 01:29:44.186523] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:48.443 [2024-09-28 01:29:44.186535] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:16:48.443 [2024-09-28 01:29:44.186545] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:48.443 [2024-09-28 01:29:44.186555] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:48.443 [2024-09-28 01:29:44.186563] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:48.443 [2024-09-28 01:29:44.186571] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:48.443 [2024-09-28 01:29:44.186578] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:48.443 [2024-09-28 01:29:44.186588] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:48.443 [2024-09-28 01:29:44.186597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.443 [2024-09-28 01:29:44.186605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:48.443 [2024-09-28 01:29:44.186613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.265 ms 00:16:48.443 [2024-09-28 01:29:44.186621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.443 [2024-09-28 01:29:44.186708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.443 [2024-09-28 01:29:44.186717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:48.443 [2024-09-28 01:29:44.186725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:16:48.443 [2024-09-28 01:29:44.186733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.443 [2024-09-28 01:29:44.186843] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:48.443 [2024-09-28 01:29:44.186856] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:48.443 [2024-09-28 01:29:44.186864] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:48.443 [2024-09-28 01:29:44.186873] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:48.443 [2024-09-28 01:29:44.186881] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:48.443 [2024-09-28 01:29:44.186889] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:48.443 [2024-09-28 01:29:44.186896] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:48.443 [2024-09-28 01:29:44.186908] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:48.443 [2024-09-28 01:29:44.186915] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:48.443 [2024-09-28 01:29:44.186923] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:48.443 [2024-09-28 01:29:44.186930] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:48.443 [2024-09-28 01:29:44.186938] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:48.443 [2024-09-28 01:29:44.186944] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:48.443 [2024-09-28 01:29:44.186953] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:48.443 [2024-09-28 01:29:44.186959] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:16:48.443 [2024-09-28 01:29:44.186967] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:48.443 
[2024-09-28 01:29:44.186974] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:48.443 [2024-09-28 01:29:44.186982] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:16:48.443 [2024-09-28 01:29:44.186993] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:48.443 [2024-09-28 01:29:44.187001] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:48.443 [2024-09-28 01:29:44.187008] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:48.443 [2024-09-28 01:29:44.187016] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:48.443 [2024-09-28 01:29:44.187022] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:48.443 [2024-09-28 01:29:44.187031] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:48.443 [2024-09-28 01:29:44.187038] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:48.443 [2024-09-28 01:29:44.187046] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:48.443 [2024-09-28 01:29:44.187053] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:48.443 [2024-09-28 01:29:44.187061] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:48.443 [2024-09-28 01:29:44.187068] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:48.443 [2024-09-28 01:29:44.187079] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:16:48.443 [2024-09-28 01:29:44.187085] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:48.443 [2024-09-28 01:29:44.187094] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:48.443 [2024-09-28 01:29:44.187101] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:16:48.443 [2024-09-28 01:29:44.187108] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:48.443 [2024-09-28 01:29:44.187115] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:48.443 [2024-09-28 01:29:44.187123] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:16:48.443 [2024-09-28 01:29:44.187130] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:48.443 [2024-09-28 01:29:44.187138] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:48.443 [2024-09-28 01:29:44.187144] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:16:48.443 [2024-09-28 01:29:44.187153] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:48.443 [2024-09-28 01:29:44.187160] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:48.443 [2024-09-28 01:29:44.187168] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:16:48.443 [2024-09-28 01:29:44.187175] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:48.443 [2024-09-28 01:29:44.187183] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:48.443 [2024-09-28 01:29:44.187414] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:48.443 [2024-09-28 01:29:44.187453] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:48.443 [2024-09-28 01:29:44.187474] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:48.443 [2024-09-28 01:29:44.187495] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:16:48.443 [2024-09-28 01:29:44.187546] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:48.443 [2024-09-28 01:29:44.187570] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:48.443 [2024-09-28 01:29:44.187590] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:48.443 [2024-09-28 01:29:44.187610] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:48.443 [2024-09-28 01:29:44.187663] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:48.443 [2024-09-28 01:29:44.187689] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:48.443 [2024-09-28 01:29:44.187720] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:48.443 [2024-09-28 01:29:44.187753] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:48.443 [2024-09-28 01:29:44.187826] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:16:48.443 [2024-09-28 01:29:44.187857] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:16:48.443 [2024-09-28 01:29:44.187927] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:16:48.443 [2024-09-28 01:29:44.187980] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:16:48.443 [2024-09-28 01:29:44.188009] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:16:48.443 [2024-09-28 01:29:44.188038] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:16:48.443 [2024-09-28 01:29:44.188098] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:16:48.443 [2024-09-28 01:29:44.188294] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:16:48.443 [2024-09-28 01:29:44.188347] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:16:48.443 [2024-09-28 01:29:44.188378] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:16:48.443 [2024-09-28 01:29:44.188439] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:16:48.443 [2024-09-28 01:29:44.188469] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:16:48.443 [2024-09-28 01:29:44.188519] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:16:48.443 [2024-09-28 01:29:44.188543] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:48.443 [2024-09-28 
01:29:44.188551] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:48.443 [2024-09-28 01:29:44.188564] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:48.443 [2024-09-28 01:29:44.188572] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:48.444 [2024-09-28 01:29:44.188580] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:48.444 [2024-09-28 01:29:44.188587] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:48.444 [2024-09-28 01:29:44.188597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.444 [2024-09-28 01:29:44.188604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:48.444 [2024-09-28 01:29:44.188614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.820 ms 00:16:48.444 [2024-09-28 01:29:44.188621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.444 [2024-09-28 01:29:44.213799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.444 [2024-09-28 01:29:44.213831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:48.444 [2024-09-28 01:29:44.213843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.097 ms 00:16:48.444 [2024-09-28 01:29:44.213850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.444 [2024-09-28 01:29:44.213963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.444 [2024-09-28 01:29:44.213972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:48.444 [2024-09-28 01:29:44.213982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:16:48.444 [2024-09-28 01:29:44.213990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.444 [2024-09-28 01:29:44.251727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.444 [2024-09-28 01:29:44.251763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:48.444 [2024-09-28 01:29:44.251779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.713 ms 00:16:48.444 [2024-09-28 01:29:44.251787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.444 [2024-09-28 01:29:44.251858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.444 [2024-09-28 01:29:44.251869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:48.444 [2024-09-28 01:29:44.251879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:48.444 [2024-09-28 01:29:44.251888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.444 [2024-09-28 01:29:44.252214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.444 [2024-09-28 01:29:44.252229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:48.444 [2024-09-28 01:29:44.252239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.304 ms 00:16:48.444 [2024-09-28 01:29:44.252246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:16:48.444 [2024-09-28 01:29:44.252398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.444 [2024-09-28 01:29:44.252408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:48.444 [2024-09-28 01:29:44.252417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.114 ms 00:16:48.444 [2024-09-28 01:29:44.252427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.444 [2024-09-28 01:29:44.266497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.444 [2024-09-28 01:29:44.266529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:48.444 [2024-09-28 01:29:44.266542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.046 ms 00:16:48.444 [2024-09-28 01:29:44.266553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.444 [2024-09-28 01:29:44.279153] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:16:48.444 [2024-09-28 01:29:44.279183] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:16:48.444 [2024-09-28 01:29:44.279207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.444 [2024-09-28 01:29:44.279216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:16:48.444 [2024-09-28 01:29:44.279225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.549 ms 00:16:48.444 [2024-09-28 01:29:44.279232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.444 [2024-09-28 01:29:44.303185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.444 [2024-09-28 01:29:44.303225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:16:48.444 [2024-09-28 01:29:44.303237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.872 ms 00:16:48.444 [2024-09-28 01:29:44.303250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.444 [2024-09-28 01:29:44.314687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.444 [2024-09-28 01:29:44.314798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:16:48.444 [2024-09-28 01:29:44.314818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.370 ms 00:16:48.444 [2024-09-28 01:29:44.314825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.444 [2024-09-28 01:29:44.326085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.444 [2024-09-28 01:29:44.326181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:16:48.444 [2024-09-28 01:29:44.326212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.203 ms 00:16:48.444 [2024-09-28 01:29:44.326219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.444 [2024-09-28 01:29:44.326823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.444 [2024-09-28 01:29:44.326835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:48.444 [2024-09-28 01:29:44.326845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.520 ms 00:16:48.444 [2024-09-28 01:29:44.326854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.703 [2024-09-28 
01:29:44.380777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.703 [2024-09-28 01:29:44.380832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:16:48.703 [2024-09-28 01:29:44.380848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.899 ms 00:16:48.703 [2024-09-28 01:29:44.380858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.703 [2024-09-28 01:29:44.391213] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:48.703 [2024-09-28 01:29:44.404523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.703 [2024-09-28 01:29:44.404701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:48.703 [2024-09-28 01:29:44.404718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.570 ms 00:16:48.703 [2024-09-28 01:29:44.404727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.703 [2024-09-28 01:29:44.404813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.703 [2024-09-28 01:29:44.404825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:16:48.703 [2024-09-28 01:29:44.404834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:16:48.703 [2024-09-28 01:29:44.404843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.703 [2024-09-28 01:29:44.404890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.703 [2024-09-28 01:29:44.404900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:48.703 [2024-09-28 01:29:44.404908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:16:48.703 [2024-09-28 01:29:44.404917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.703 [2024-09-28 01:29:44.404939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.703 [2024-09-28 01:29:44.404948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:48.703 [2024-09-28 01:29:44.404958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:48.703 [2024-09-28 01:29:44.404969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.703 [2024-09-28 01:29:44.404998] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:16:48.703 [2024-09-28 01:29:44.405012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.703 [2024-09-28 01:29:44.405019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:16:48.703 [2024-09-28 01:29:44.405029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:16:48.703 [2024-09-28 01:29:44.405035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.703 [2024-09-28 01:29:44.427714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.703 [2024-09-28 01:29:44.427825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:48.703 [2024-09-28 01:29:44.427845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.654 ms 00:16:48.703 [2024-09-28 01:29:44.427855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.703 [2024-09-28 01:29:44.427939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.703 [2024-09-28 01:29:44.427949] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:48.703 [2024-09-28 01:29:44.427959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:16:48.703 [2024-09-28 01:29:44.427966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.703 [2024-09-28 01:29:44.428709] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:48.703 [2024-09-28 01:29:44.431661] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 267.471 ms, result 0 00:16:48.703 [2024-09-28 01:29:44.432508] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:48.703 Some configs were skipped because the RPC state that can call them passed over. 00:16:48.703 01:29:44 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:16:48.962 [2024-09-28 01:29:44.662802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.962 [2024-09-28 01:29:44.662950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:16:48.962 [2024-09-28 01:29:44.663010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.249 ms 00:16:48.962 [2024-09-28 01:29:44.663036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.962 [2024-09-28 01:29:44.663130] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.576 ms, result 0 00:16:48.962 true 00:16:48.962 01:29:44 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:16:48.962 [2024-09-28 01:29:44.862948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.962 [2024-09-28 01:29:44.863103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:16:48.962 [2024-09-28 01:29:44.863157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.138 ms 00:16:48.962 [2024-09-28 01:29:44.863179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.962 [2024-09-28 01:29:44.863249] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.442 ms, result 0 00:16:48.962 true 00:16:48.962 01:29:44 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 74379 00:16:48.962 01:29:44 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 74379 ']' 00:16:48.962 01:29:44 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 74379 00:16:48.962 01:29:44 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname 00:16:48.962 01:29:44 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:48.962 01:29:44 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74379 00:16:49.220 01:29:44 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:49.220 killing process with pid 74379 00:16:49.220 01:29:44 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:49.220 01:29:44 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74379' 00:16:49.220 01:29:44 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 74379 00:16:49.220 01:29:44 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 74379 00:16:49.789 [2024-09-28 01:29:45.553495] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.789 [2024-09-28 01:29:45.553543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:49.789 [2024-09-28 01:29:45.553554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:49.789 [2024-09-28 01:29:45.553561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.789 [2024-09-28 01:29:45.553578] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:49.789 [2024-09-28 01:29:45.555693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.789 [2024-09-28 01:29:45.555718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:49.789 [2024-09-28 01:29:45.555729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.101 ms 00:16:49.789 [2024-09-28 01:29:45.555735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.789 [2024-09-28 01:29:45.555953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.789 [2024-09-28 01:29:45.555960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:49.789 [2024-09-28 01:29:45.555970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.199 ms 00:16:49.789 [2024-09-28 01:29:45.555976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.789 [2024-09-28 01:29:45.559035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.789 [2024-09-28 01:29:45.559057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:49.789 [2024-09-28 01:29:45.559066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.043 ms 00:16:49.789 [2024-09-28 01:29:45.559072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.789 [2024-09-28 01:29:45.564312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.789 [2024-09-28 01:29:45.564451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:16:49.789 [2024-09-28 01:29:45.564468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.212 ms 00:16:49.789 [2024-09-28 01:29:45.564475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.789 [2024-09-28 01:29:45.572035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.789 [2024-09-28 01:29:45.572128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:49.789 [2024-09-28 01:29:45.572143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.504 ms 00:16:49.789 [2024-09-28 01:29:45.572149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.789 [2024-09-28 01:29:45.583820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.789 [2024-09-28 01:29:45.584137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:49.789 [2024-09-28 01:29:45.584384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.623 ms 00:16:49.789 [2024-09-28 01:29:45.584570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.789 [2024-09-28 01:29:45.585167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.789 [2024-09-28 01:29:45.585435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:49.789 [2024-09-28 01:29:45.585607] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.287 ms 00:16:49.789 [2024-09-28 01:29:45.585793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.789 [2024-09-28 01:29:45.598201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.789 [2024-09-28 01:29:45.598303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:16:49.789 [2024-09-28 01:29:45.598354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.203 ms 00:16:49.789 [2024-09-28 01:29:45.598375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.789 [2024-09-28 01:29:45.607685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.789 [2024-09-28 01:29:45.607783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:16:49.789 [2024-09-28 01:29:45.607840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.202 ms 00:16:49.789 [2024-09-28 01:29:45.607862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.789 [2024-09-28 01:29:45.616669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.789 [2024-09-28 01:29:45.616759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:49.789 [2024-09-28 01:29:45.616844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.762 ms 00:16:49.789 [2024-09-28 01:29:45.616865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.789 [2024-09-28 01:29:45.625804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.789 [2024-09-28 01:29:45.625893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:49.789 [2024-09-28 01:29:45.625945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.855 ms 00:16:49.789 [2024-09-28 01:29:45.625966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.789 [2024-09-28 01:29:45.626008] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:49.789 [2024-09-28 01:29:45.626338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:49.789 [2024-09-28 01:29:45.626450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:49.789 [2024-09-28 01:29:45.626510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:49.789 [2024-09-28 01:29:45.626543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:49.789 [2024-09-28 01:29:45.626603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:49.789 [2024-09-28 01:29:45.626638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:49.789 [2024-09-28 01:29:45.626667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:49.789 [2024-09-28 01:29:45.626761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:49.789 [2024-09-28 01:29:45.626817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:49.789 [2024-09-28 01:29:45.626849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:49.789 [2024-09-28 01:29:45.626878] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:49.789 [2024-09-28 01:29:45.626980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:49.789 [2024-09-28 01:29:45.627009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:49.789 [2024-09-28 01:29:45.627041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:49.789 [2024-09-28 01:29:45.627100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:49.789 [2024-09-28 01:29:45.627131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:49.789 [2024-09-28 01:29:45.627159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:49.789 [2024-09-28 01:29:45.627220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:49.789 [2024-09-28 01:29:45.627254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:49.789 [2024-09-28 01:29:45.627284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:49.789 [2024-09-28 01:29:45.627313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:49.789 [2024-09-28 01:29:45.627344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:49.789 [2024-09-28 01:29:45.627372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:49.789 [2024-09-28 01:29:45.627402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:49.789 [2024-09-28 01:29:45.627431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:49.789 [2024-09-28 01:29:45.627461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:49.789 [2024-09-28 01:29:45.627489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:49.789 [2024-09-28 01:29:45.627519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:49.789 [2024-09-28 01:29:45.627585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:49.789 [2024-09-28 01:29:45.627597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:49.789 [2024-09-28 01:29:45.627604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:49.789 [2024-09-28 01:29:45.627613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:49.789 [2024-09-28 01:29:45.627621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:49.789 [2024-09-28 01:29:45.627630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:49.789 [2024-09-28 01:29:45.627637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:49.789 
[2024-09-28 01:29:45.627646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:49.789 [2024-09-28 01:29:45.627654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:49.789 [2024-09-28 01:29:45.627665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:49.789 [2024-09-28 01:29:45.627673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:49.789 [2024-09-28 01:29:45.627685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:49.789 [2024-09-28 01:29:45.627693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:49.789 [2024-09-28 01:29:45.627702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:49.789 [2024-09-28 01:29:45.627709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:49.789 [2024-09-28 01:29:45.627718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:49.789 [2024-09-28 01:29:45.627726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:49.790 [2024-09-28 01:29:45.627735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:49.790 [2024-09-28 01:29:45.627742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:49.790 [2024-09-28 01:29:45.627751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:49.790 [2024-09-28 01:29:45.627759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:49.790 [2024-09-28 01:29:45.627767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:49.790 [2024-09-28 01:29:45.627775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:49.790 [2024-09-28 01:29:45.627784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:49.790 [2024-09-28 01:29:45.627791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:49.790 [2024-09-28 01:29:45.627802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:49.790 [2024-09-28 01:29:45.627809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:49.790 [2024-09-28 01:29:45.627819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:49.790 [2024-09-28 01:29:45.627826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:49.790 [2024-09-28 01:29:45.627835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:49.790 [2024-09-28 01:29:45.627843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:49.790 [2024-09-28 01:29:45.627852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 
state: free 00:16:49.790 [2024-09-28 01:29:45.627859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:49.790 [2024-09-28 01:29:45.627868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:49.790 [2024-09-28 01:29:45.627875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:49.790 [2024-09-28 01:29:45.627884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:49.790 [2024-09-28 01:29:45.627891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:49.790 [2024-09-28 01:29:45.627900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:49.790 [2024-09-28 01:29:45.627907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:49.790 [2024-09-28 01:29:45.627918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:49.790 [2024-09-28 01:29:45.627925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:49.790 [2024-09-28 01:29:45.627936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:49.790 [2024-09-28 01:29:45.627943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:49.790 [2024-09-28 01:29:45.627953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:49.790 [2024-09-28 01:29:45.627961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:49.790 [2024-09-28 01:29:45.627970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:49.790 [2024-09-28 01:29:45.627977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:49.790 [2024-09-28 01:29:45.627986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:49.790 [2024-09-28 01:29:45.627994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:49.790 [2024-09-28 01:29:45.628002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:49.790 [2024-09-28 01:29:45.628010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:49.790 [2024-09-28 01:29:45.628018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:49.790 [2024-09-28 01:29:45.628026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:49.790 [2024-09-28 01:29:45.628035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:49.790 [2024-09-28 01:29:45.628042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:49.790 [2024-09-28 01:29:45.628051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:49.790 [2024-09-28 01:29:45.628058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 
0 / 261120 wr_cnt: 0 state: free 00:16:49.790 [2024-09-28 01:29:45.628069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:49.790 [2024-09-28 01:29:45.628076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:49.790 [2024-09-28 01:29:45.628085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:49.790 [2024-09-28 01:29:45.628093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:49.790 [2024-09-28 01:29:45.628102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:49.790 [2024-09-28 01:29:45.628109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:49.790 [2024-09-28 01:29:45.628118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:49.790 [2024-09-28 01:29:45.628125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:49.790 [2024-09-28 01:29:45.628135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:49.790 [2024-09-28 01:29:45.628143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:49.790 [2024-09-28 01:29:45.628152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:49.790 [2024-09-28 01:29:45.628160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:49.790 [2024-09-28 01:29:45.628169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:49.790 [2024-09-28 01:29:45.628176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:49.790 [2024-09-28 01:29:45.628186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:49.790 [2024-09-28 01:29:45.628210] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:49.790 [2024-09-28 01:29:45.628222] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d94572d0-4efc-4bf8-b84f-0a6faae4c084 00:16:49.790 [2024-09-28 01:29:45.628230] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:49.790 [2024-09-28 01:29:45.628239] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:49.790 [2024-09-28 01:29:45.628246] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:49.790 [2024-09-28 01:29:45.628256] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:49.790 [2024-09-28 01:29:45.628268] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:49.790 [2024-09-28 01:29:45.628277] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:49.790 [2024-09-28 01:29:45.628286] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:49.790 [2024-09-28 01:29:45.628294] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:49.790 [2024-09-28 01:29:45.628300] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:49.790 [2024-09-28 01:29:45.628311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:16:49.790 [2024-09-28 01:29:45.628319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:49.790 [2024-09-28 01:29:45.628329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.304 ms 00:16:49.790 [2024-09-28 01:29:45.628336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.790 [2024-09-28 01:29:45.640946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.790 [2024-09-28 01:29:45.641039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:49.790 [2024-09-28 01:29:45.641099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.547 ms 00:16:49.790 [2024-09-28 01:29:45.641122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.790 [2024-09-28 01:29:45.641522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.790 [2024-09-28 01:29:45.641592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:49.790 [2024-09-28 01:29:45.641643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.338 ms 00:16:49.790 [2024-09-28 01:29:45.641665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.790 [2024-09-28 01:29:45.680613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:49.790 [2024-09-28 01:29:45.680719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:49.790 [2024-09-28 01:29:45.680773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:49.790 [2024-09-28 01:29:45.680807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.790 [2024-09-28 01:29:45.680925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:49.790 [2024-09-28 01:29:45.680987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:49.790 [2024-09-28 01:29:45.681087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:49.790 [2024-09-28 01:29:45.681110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.790 [2024-09-28 01:29:45.681166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:49.790 [2024-09-28 01:29:45.681190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:49.790 [2024-09-28 01:29:45.681320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:49.790 [2024-09-28 01:29:45.681343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.790 [2024-09-28 01:29:45.681380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:49.790 [2024-09-28 01:29:45.681400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:49.790 [2024-09-28 01:29:45.681421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:49.790 [2024-09-28 01:29:45.681472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.049 [2024-09-28 01:29:45.756592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:50.049 [2024-09-28 01:29:45.756749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:50.049 [2024-09-28 01:29:45.756814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:50.049 [2024-09-28 01:29:45.756840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.049 [2024-09-28 
01:29:45.819474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:50.050 [2024-09-28 01:29:45.819663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:50.050 [2024-09-28 01:29:45.819737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:50.050 [2024-09-28 01:29:45.819760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.050 [2024-09-28 01:29:45.819865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:50.050 [2024-09-28 01:29:45.819890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:50.050 [2024-09-28 01:29:45.819951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:50.050 [2024-09-28 01:29:45.819974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.050 [2024-09-28 01:29:45.820017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:50.050 [2024-09-28 01:29:45.820040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:50.050 [2024-09-28 01:29:45.820060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:50.050 [2024-09-28 01:29:45.820106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.050 [2024-09-28 01:29:45.820231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:50.050 [2024-09-28 01:29:45.820342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:50.050 [2024-09-28 01:29:45.820415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:50.050 [2024-09-28 01:29:45.820438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.050 [2024-09-28 01:29:45.820489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:50.050 [2024-09-28 01:29:45.820511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:50.050 [2024-09-28 01:29:45.820594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:50.050 [2024-09-28 01:29:45.820616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.050 [2024-09-28 01:29:45.820666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:50.050 [2024-09-28 01:29:45.820688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:50.050 [2024-09-28 01:29:45.820710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:50.050 [2024-09-28 01:29:45.820768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.050 [2024-09-28 01:29:45.820837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:50.050 [2024-09-28 01:29:45.820864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:50.050 [2024-09-28 01:29:45.820885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:50.050 [2024-09-28 01:29:45.820937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.050 [2024-09-28 01:29:45.821083] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 267.564 ms, result 0 00:16:50.985 01:29:46 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:50.985 [2024-09-28 01:29:46.617330] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:16:50.985 [2024-09-28 01:29:46.617632] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74426 ] 00:16:50.985 [2024-09-28 01:29:46.761787] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:50.985 [2024-09-28 01:29:46.906103] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:51.243 [2024-09-28 01:29:47.112598] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:51.243 [2024-09-28 01:29:47.112646] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:51.503 [2024-09-28 01:29:47.260481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.503 [2024-09-28 01:29:47.260524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:51.503 [2024-09-28 01:29:47.260536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:51.503 [2024-09-28 01:29:47.260542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.503 [2024-09-28 01:29:47.262644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.503 [2024-09-28 01:29:47.262675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:51.503 [2024-09-28 01:29:47.262684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.089 ms 00:16:51.503 [2024-09-28 01:29:47.262692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.503 [2024-09-28 01:29:47.262751] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:51.503 [2024-09-28 01:29:47.263318] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:51.503 [2024-09-28 01:29:47.263335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.503 [2024-09-28 01:29:47.263343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:51.503 [2024-09-28 01:29:47.263350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.590 ms 00:16:51.503 [2024-09-28 01:29:47.263355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.503 [2024-09-28 01:29:47.264319] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:16:51.503 [2024-09-28 01:29:47.273813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.503 [2024-09-28 01:29:47.273839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:16:51.503 [2024-09-28 01:29:47.273847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.495 ms 00:16:51.503 [2024-09-28 01:29:47.273854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.503 [2024-09-28 01:29:47.273955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.503 [2024-09-28 01:29:47.273964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:16:51.503 [2024-09-28 01:29:47.273973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:16:51.503 [2024-09-28 
01:29:47.273979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.503 [2024-09-28 01:29:47.278302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.503 [2024-09-28 01:29:47.278327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:51.503 [2024-09-28 01:29:47.278335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.293 ms 00:16:51.503 [2024-09-28 01:29:47.278340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.503 [2024-09-28 01:29:47.278417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.503 [2024-09-28 01:29:47.278427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:51.503 [2024-09-28 01:29:47.278433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:16:51.503 [2024-09-28 01:29:47.278439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.503 [2024-09-28 01:29:47.278459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.503 [2024-09-28 01:29:47.278465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:51.503 [2024-09-28 01:29:47.278471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:16:51.503 [2024-09-28 01:29:47.278476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.503 [2024-09-28 01:29:47.278495] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:51.503 [2024-09-28 01:29:47.281148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.503 [2024-09-28 01:29:47.281171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:51.503 [2024-09-28 01:29:47.281178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.657 ms 00:16:51.503 [2024-09-28 01:29:47.281185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.503 [2024-09-28 01:29:47.281231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.503 [2024-09-28 01:29:47.281241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:51.503 [2024-09-28 01:29:47.281247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:16:51.503 [2024-09-28 01:29:47.281253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.503 [2024-09-28 01:29:47.281266] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:16:51.503 [2024-09-28 01:29:47.281279] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:16:51.503 [2024-09-28 01:29:47.281306] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:16:51.503 [2024-09-28 01:29:47.281318] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:16:51.504 [2024-09-28 01:29:47.281406] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:51.504 [2024-09-28 01:29:47.281414] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:51.504 [2024-09-28 01:29:47.281422] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
00:16:51.504 [2024-09-28 01:29:47.281430] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:51.504 [2024-09-28 01:29:47.281437] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:51.504 [2024-09-28 01:29:47.281443] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:51.504 [2024-09-28 01:29:47.281448] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:51.504 [2024-09-28 01:29:47.281457] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:51.504 [2024-09-28 01:29:47.281463] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:51.504 [2024-09-28 01:29:47.281469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.504 [2024-09-28 01:29:47.281476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:51.504 [2024-09-28 01:29:47.281483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.204 ms 00:16:51.504 [2024-09-28 01:29:47.281488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.504 [2024-09-28 01:29:47.281555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.504 [2024-09-28 01:29:47.281561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:51.504 [2024-09-28 01:29:47.281567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:16:51.504 [2024-09-28 01:29:47.281572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.504 [2024-09-28 01:29:47.281645] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:51.504 [2024-09-28 01:29:47.281652] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:51.504 [2024-09-28 01:29:47.281660] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:51.504 [2024-09-28 01:29:47.281666] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:51.504 [2024-09-28 01:29:47.281672] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:51.504 [2024-09-28 01:29:47.281677] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:51.504 [2024-09-28 01:29:47.281682] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:51.504 [2024-09-28 01:29:47.281687] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:51.504 [2024-09-28 01:29:47.281692] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:51.504 [2024-09-28 01:29:47.281697] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:51.504 [2024-09-28 01:29:47.281703] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:51.504 [2024-09-28 01:29:47.281712] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:51.504 [2024-09-28 01:29:47.281717] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:51.504 [2024-09-28 01:29:47.281722] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:51.504 [2024-09-28 01:29:47.281727] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:16:51.504 [2024-09-28 01:29:47.281732] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:51.504 [2024-09-28 01:29:47.281737] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:16:51.504 [2024-09-28 01:29:47.281742] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:16:51.504 [2024-09-28 01:29:47.281749] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:51.504 [2024-09-28 01:29:47.281754] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:51.504 [2024-09-28 01:29:47.281759] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:51.504 [2024-09-28 01:29:47.281764] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:51.504 [2024-09-28 01:29:47.281769] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:51.504 [2024-09-28 01:29:47.281775] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:51.504 [2024-09-28 01:29:47.281780] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:51.504 [2024-09-28 01:29:47.281785] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:51.504 [2024-09-28 01:29:47.281790] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:51.504 [2024-09-28 01:29:47.281794] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:51.504 [2024-09-28 01:29:47.281799] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:51.504 [2024-09-28 01:29:47.281804] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:16:51.504 [2024-09-28 01:29:47.281809] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:51.504 [2024-09-28 01:29:47.281814] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:51.504 [2024-09-28 01:29:47.281819] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:16:51.504 [2024-09-28 01:29:47.281824] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:51.504 [2024-09-28 01:29:47.281829] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:51.504 [2024-09-28 01:29:47.281834] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:16:51.504 [2024-09-28 01:29:47.281839] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:51.504 [2024-09-28 01:29:47.281843] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:51.504 [2024-09-28 01:29:47.281848] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:16:51.504 [2024-09-28 01:29:47.281853] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:51.504 [2024-09-28 01:29:47.281858] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:51.504 [2024-09-28 01:29:47.281863] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:16:51.504 [2024-09-28 01:29:47.281868] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:51.504 [2024-09-28 01:29:47.281873] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:51.504 [2024-09-28 01:29:47.281878] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:51.504 [2024-09-28 01:29:47.281884] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:51.504 [2024-09-28 01:29:47.281889] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:51.504 [2024-09-28 01:29:47.281895] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:51.504 [2024-09-28 01:29:47.281901] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:51.504 [2024-09-28 01:29:47.281906] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:51.504 [2024-09-28 01:29:47.281912] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:51.504 [2024-09-28 01:29:47.281917] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:51.504 [2024-09-28 01:29:47.281922] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:51.504 [2024-09-28 01:29:47.281928] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:51.504 [2024-09-28 01:29:47.281938] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:51.504 [2024-09-28 01:29:47.281945] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:51.504 [2024-09-28 01:29:47.281950] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:16:51.504 [2024-09-28 01:29:47.281956] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:16:51.504 [2024-09-28 01:29:47.281961] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:16:51.504 [2024-09-28 01:29:47.281966] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:16:51.504 [2024-09-28 01:29:47.281972] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:16:51.504 [2024-09-28 01:29:47.281977] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:16:51.504 [2024-09-28 01:29:47.281983] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:16:51.504 [2024-09-28 01:29:47.281988] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:16:51.504 [2024-09-28 01:29:47.281994] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:16:51.504 [2024-09-28 01:29:47.281999] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:16:51.504 [2024-09-28 01:29:47.282004] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:16:51.504 [2024-09-28 01:29:47.282009] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:16:51.504 [2024-09-28 01:29:47.282015] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:16:51.504 [2024-09-28 01:29:47.282021] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:51.504 [2024-09-28 01:29:47.282027] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:51.504 [2024-09-28 01:29:47.282033] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:51.504 [2024-09-28 01:29:47.282038] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:51.504 [2024-09-28 01:29:47.282044] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:51.504 [2024-09-28 01:29:47.282050] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:51.504 [2024-09-28 01:29:47.282056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.504 [2024-09-28 01:29:47.282064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:51.504 [2024-09-28 01:29:47.282070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.463 ms 00:16:51.504 [2024-09-28 01:29:47.282075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.504 [2024-09-28 01:29:47.316718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.504 [2024-09-28 01:29:47.316771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:51.505 [2024-09-28 01:29:47.316794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.603 ms 00:16:51.505 [2024-09-28 01:29:47.316806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.505 [2024-09-28 01:29:47.316978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.505 [2024-09-28 01:29:47.316992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:51.505 [2024-09-28 01:29:47.317004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:16:51.505 [2024-09-28 01:29:47.317013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.505 [2024-09-28 01:29:47.341289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.505 [2024-09-28 01:29:47.341319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:51.505 [2024-09-28 01:29:47.341328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.250 ms 00:16:51.505 [2024-09-28 01:29:47.341334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.505 [2024-09-28 01:29:47.341393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.505 [2024-09-28 01:29:47.341401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:51.505 [2024-09-28 01:29:47.341407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:16:51.505 [2024-09-28 01:29:47.341414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.505 [2024-09-28 01:29:47.341692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.505 [2024-09-28 01:29:47.341705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:51.505 [2024-09-28 01:29:47.341711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.263 ms 00:16:51.505 [2024-09-28 01:29:47.341718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.505 [2024-09-28 01:29:47.341826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:16:51.505 [2024-09-28 01:29:47.341833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:51.505 [2024-09-28 01:29:47.341840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:16:51.505 [2024-09-28 01:29:47.341845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.505 [2024-09-28 01:29:47.351947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.505 [2024-09-28 01:29:47.351976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:51.505 [2024-09-28 01:29:47.351984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.086 ms 00:16:51.505 [2024-09-28 01:29:47.351990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.505 [2024-09-28 01:29:47.362107] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:16:51.505 [2024-09-28 01:29:47.362244] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:16:51.505 [2024-09-28 01:29:47.362257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.505 [2024-09-28 01:29:47.362263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:16:51.505 [2024-09-28 01:29:47.362270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.170 ms 00:16:51.505 [2024-09-28 01:29:47.362275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.505 [2024-09-28 01:29:47.381160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.505 [2024-09-28 01:29:47.381217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:16:51.505 [2024-09-28 01:29:47.381233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.615 ms 00:16:51.505 [2024-09-28 01:29:47.381250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.505 [2024-09-28 01:29:47.390177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.505 [2024-09-28 01:29:47.390214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:16:51.505 [2024-09-28 01:29:47.390223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.861 ms 00:16:51.505 [2024-09-28 01:29:47.390229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.505 [2024-09-28 01:29:47.399072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.505 [2024-09-28 01:29:47.399095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:16:51.505 [2024-09-28 01:29:47.399102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.799 ms 00:16:51.505 [2024-09-28 01:29:47.399107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.505 [2024-09-28 01:29:47.399580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.505 [2024-09-28 01:29:47.399599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:51.505 [2024-09-28 01:29:47.399606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.411 ms 00:16:51.505 [2024-09-28 01:29:47.399612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.764 [2024-09-28 01:29:47.443507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.764 [2024-09-28 01:29:47.443551] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:16:51.764 [2024-09-28 01:29:47.443562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.875 ms 00:16:51.764 [2024-09-28 01:29:47.443569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.764 [2024-09-28 01:29:47.451665] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:51.764 [2024-09-28 01:29:47.463448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.764 [2024-09-28 01:29:47.463483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:51.764 [2024-09-28 01:29:47.463494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.785 ms 00:16:51.764 [2024-09-28 01:29:47.463500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.764 [2024-09-28 01:29:47.463588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.764 [2024-09-28 01:29:47.463597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:16:51.764 [2024-09-28 01:29:47.463604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:16:51.764 [2024-09-28 01:29:47.463610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.764 [2024-09-28 01:29:47.463652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.764 [2024-09-28 01:29:47.463661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:51.764 [2024-09-28 01:29:47.463667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:16:51.764 [2024-09-28 01:29:47.463673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.764 [2024-09-28 01:29:47.463689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.764 [2024-09-28 01:29:47.463696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:51.764 [2024-09-28 01:29:47.463702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:51.764 [2024-09-28 01:29:47.463708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.764 [2024-09-28 01:29:47.463731] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:16:51.764 [2024-09-28 01:29:47.463739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.764 [2024-09-28 01:29:47.463747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:16:51.764 [2024-09-28 01:29:47.463754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:16:51.764 [2024-09-28 01:29:47.463759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.764 [2024-09-28 01:29:47.481698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.764 [2024-09-28 01:29:47.481729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:51.764 [2024-09-28 01:29:47.481738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.924 ms 00:16:51.764 [2024-09-28 01:29:47.481745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.764 [2024-09-28 01:29:47.481819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.764 [2024-09-28 01:29:47.481828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:51.764 [2024-09-28 01:29:47.481835] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:16:51.765 [2024-09-28 01:29:47.481841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.765 [2024-09-28 01:29:47.482822] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:51.765 [2024-09-28 01:29:47.485242] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 222.098 ms, result 0 00:16:51.765 [2024-09-28 01:29:47.486000] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:51.765 [2024-09-28 01:29:47.496896] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:57.699  Copying: 49/256 [MB] (49 MBps) Copying: 91/256 [MB] (42 MBps) Copying: 136/256 [MB] (45 MBps) Copying: 180/256 [MB] (43 MBps) Copying: 229/256 [MB] (49 MBps) Copying: 256/256 [MB] (average 45 MBps)[2024-09-28 01:29:53.572232] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:57.699 [2024-09-28 01:29:53.584088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.700 [2024-09-28 01:29:53.584228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:57.700 [2024-09-28 01:29:53.584247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:57.700 [2024-09-28 01:29:53.584256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.700 [2024-09-28 01:29:53.584281] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:57.700 [2024-09-28 01:29:53.586846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.700 [2024-09-28 01:29:53.586873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:57.700 [2024-09-28 01:29:53.586883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.553 ms 00:16:57.700 [2024-09-28 01:29:53.586891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.700 [2024-09-28 01:29:53.587155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.700 [2024-09-28 01:29:53.587171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:57.700 [2024-09-28 01:29:53.587179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.241 ms 00:16:57.700 [2024-09-28 01:29:53.587187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.700 [2024-09-28 01:29:53.590870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.700 [2024-09-28 01:29:53.590889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:57.700 [2024-09-28 01:29:53.590899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.659 ms 00:16:57.700 [2024-09-28 01:29:53.590908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.700 [2024-09-28 01:29:53.598893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.700 [2024-09-28 01:29:53.598920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:16:57.700 [2024-09-28 01:29:53.598934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.968 ms 00:16:57.700 [2024-09-28 01:29:53.598943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:16:57.700 [2024-09-28 01:29:53.626537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.700 [2024-09-28 01:29:53.626574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:57.700 [2024-09-28 01:29:53.626586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.535 ms 00:16:57.700 [2024-09-28 01:29:53.626593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.959 [2024-09-28 01:29:53.641666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.959 [2024-09-28 01:29:53.641698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:57.959 [2024-09-28 01:29:53.641709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.033 ms 00:16:57.959 [2024-09-28 01:29:53.641717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.959 [2024-09-28 01:29:53.641852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.959 [2024-09-28 01:29:53.641862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:57.959 [2024-09-28 01:29:53.641871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:16:57.959 [2024-09-28 01:29:53.641879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.959 [2024-09-28 01:29:53.664939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.959 [2024-09-28 01:29:53.664971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:16:57.959 [2024-09-28 01:29:53.664981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.041 ms 00:16:57.959 [2024-09-28 01:29:53.664989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.959 [2024-09-28 01:29:53.687683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.959 [2024-09-28 01:29:53.687712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:16:57.959 [2024-09-28 01:29:53.687721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.662 ms 00:16:57.959 [2024-09-28 01:29:53.687728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.959 [2024-09-28 01:29:53.709990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.959 [2024-09-28 01:29:53.710019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:57.959 [2024-09-28 01:29:53.710029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.229 ms 00:16:57.959 [2024-09-28 01:29:53.710036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.959 [2024-09-28 01:29:53.732036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.959 [2024-09-28 01:29:53.732065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:57.959 [2024-09-28 01:29:53.732074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.943 ms 00:16:57.959 [2024-09-28 01:29:53.732081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.959 [2024-09-28 01:29:53.732112] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:57.959 [2024-09-28 01:29:53.732126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:57.959 [2024-09-28 01:29:53.732136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 
261120 wr_cnt: 0 state: free 00:16:57.959 [2024-09-28 01:29:53.732144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:57.959 [2024-09-28 01:29:53.732152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:57.959 [2024-09-28 01:29:53.732159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:57.959 [2024-09-28 01:29:53.732167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:57.959 [2024-09-28 01:29:53.732174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:57.959 [2024-09-28 01:29:53.732181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:57.959 [2024-09-28 01:29:53.732189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:57.959 [2024-09-28 01:29:53.732210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:57.959 [2024-09-28 01:29:53.732218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:57.959 [2024-09-28 01:29:53.732226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:57.959 [2024-09-28 01:29:53.732244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:57.959 [2024-09-28 01:29:53.732252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:57.959 [2024-09-28 01:29:53.732259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:57.959 [2024-09-28 01:29:53.732267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:57.959 [2024-09-28 01:29:53.732274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:57.959 [2024-09-28 01:29:53.732282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:57.959 [2024-09-28 01:29:53.732289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:57.959 [2024-09-28 01:29:53.732296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:57.959 [2024-09-28 01:29:53.732304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:57.959 [2024-09-28 01:29:53.732311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:57.959 [2024-09-28 01:29:53.732318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:57.959 [2024-09-28 01:29:53.732326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:57.959 [2024-09-28 01:29:53.732333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:57.959 [2024-09-28 01:29:53.732341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:57.959 [2024-09-28 01:29:53.732349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:57.959 [2024-09-28 01:29:53.732356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:57.959 [2024-09-28 01:29:53.732364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:57.959 [2024-09-28 01:29:53.732371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:57.959 [2024-09-28 01:29:53.732379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:57.959 [2024-09-28 01:29:53.732387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:57.959 [2024-09-28 01:29:53.732394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:57.959 [2024-09-28 01:29:53.732401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:57.959 [2024-09-28 01:29:53.732409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:57.959 [2024-09-28 01:29:53.732416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:57.959 [2024-09-28 01:29:53.732423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:57.959 [2024-09-28 01:29:53.732430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:57.959 [2024-09-28 01:29:53.732438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:57.959 [2024-09-28 01:29:53.732445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:57.959 [2024-09-28 01:29:53.732452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:57.959 [2024-09-28 01:29:53.732459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:57.959 [2024-09-28 01:29:53.732466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:57.959 [2024-09-28 01:29:53.732474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:57.959 [2024-09-28 01:29:53.732481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:57.959 [2024-09-28 01:29:53.732488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:57.959 [2024-09-28 01:29:53.732495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:57.959 [2024-09-28 01:29:53.732502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:57.959 [2024-09-28 01:29:53.732509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:57.959 [2024-09-28 01:29:53.732516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:57.959 [2024-09-28 01:29:53.732524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:57.959 [2024-09-28 01:29:53.732531] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:57.959 [2024-09-28 01:29:53.732538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:57.959 [2024-09-28 01:29:53.732545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:57.959 [2024-09-28 01:29:53.732553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:57.959 [2024-09-28 01:29:53.732560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:57.959 [2024-09-28 01:29:53.732567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:57.959 [2024-09-28 01:29:53.732574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:57.959 [2024-09-28 01:29:53.732581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:57.959 [2024-09-28 01:29:53.732588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:57.959 [2024-09-28 01:29:53.732595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:57.959 [2024-09-28 01:29:53.732602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:57.959 [2024-09-28 01:29:53.732610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:57.959 [2024-09-28 01:29:53.732617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:57.959 [2024-09-28 01:29:53.732625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:57.959 [2024-09-28 01:29:53.732632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:57.959 [2024-09-28 01:29:53.732639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:57.959 [2024-09-28 01:29:53.732646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:57.959 [2024-09-28 01:29:53.732653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:57.959 [2024-09-28 01:29:53.732661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:57.959 [2024-09-28 01:29:53.732668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:57.959 [2024-09-28 01:29:53.732676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:57.959 [2024-09-28 01:29:53.732683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:57.959 [2024-09-28 01:29:53.732690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:57.959 [2024-09-28 01:29:53.732697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:57.959 [2024-09-28 01:29:53.732704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:57.959 [2024-09-28 01:29:53.732712] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:57.959 [2024-09-28 01:29:53.732719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:57.959 [2024-09-28 01:29:53.732726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:57.960 [2024-09-28 01:29:53.732733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:57.960 [2024-09-28 01:29:53.732740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:57.960 [2024-09-28 01:29:53.732747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:57.960 [2024-09-28 01:29:53.732754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:57.960 [2024-09-28 01:29:53.732761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:57.960 [2024-09-28 01:29:53.732768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:57.960 [2024-09-28 01:29:53.732776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:57.960 [2024-09-28 01:29:53.732783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:57.960 [2024-09-28 01:29:53.732797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:57.960 [2024-09-28 01:29:53.732804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:57.960 [2024-09-28 01:29:53.732812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:57.960 [2024-09-28 01:29:53.732820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:57.960 [2024-09-28 01:29:53.732827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:57.960 [2024-09-28 01:29:53.732835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:57.960 [2024-09-28 01:29:53.732842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:57.960 [2024-09-28 01:29:53.732852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:57.960 [2024-09-28 01:29:53.732860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:57.960 [2024-09-28 01:29:53.732868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:57.960 [2024-09-28 01:29:53.732875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:57.960 [2024-09-28 01:29:53.732883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:57.960 [2024-09-28 01:29:53.732897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:57.960 [2024-09-28 01:29:53.732913] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:57.960 [2024-09-28 01:29:53.732921] ftl_debug.c: 
212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d94572d0-4efc-4bf8-b84f-0a6faae4c084 00:16:57.960 [2024-09-28 01:29:53.732929] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:57.960 [2024-09-28 01:29:53.732936] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:57.960 [2024-09-28 01:29:53.732943] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:57.960 [2024-09-28 01:29:53.732952] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:57.960 [2024-09-28 01:29:53.732960] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:57.960 [2024-09-28 01:29:53.732967] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:57.960 [2024-09-28 01:29:53.732974] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:57.960 [2024-09-28 01:29:53.732980] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:57.960 [2024-09-28 01:29:53.732987] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:57.960 [2024-09-28 01:29:53.732994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.960 [2024-09-28 01:29:53.733002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:57.960 [2024-09-28 01:29:53.733009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.883 ms 00:16:57.960 [2024-09-28 01:29:53.733017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.960 [2024-09-28 01:29:53.745190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.960 [2024-09-28 01:29:53.745227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:57.960 [2024-09-28 01:29:53.745237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.156 ms 00:16:57.960 [2024-09-28 01:29:53.745245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.960 [2024-09-28 01:29:53.745592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.960 [2024-09-28 01:29:53.745607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:57.960 [2024-09-28 01:29:53.745615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.315 ms 00:16:57.960 [2024-09-28 01:29:53.745622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.960 [2024-09-28 01:29:53.775743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:57.960 [2024-09-28 01:29:53.775866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:57.960 [2024-09-28 01:29:53.775881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:57.960 [2024-09-28 01:29:53.775889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.960 [2024-09-28 01:29:53.775961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:57.960 [2024-09-28 01:29:53.775971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:57.960 [2024-09-28 01:29:53.775978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:57.960 [2024-09-28 01:29:53.775986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.960 [2024-09-28 01:29:53.776024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:57.960 [2024-09-28 01:29:53.776038] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:57.960 [2024-09-28 01:29:53.776045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:57.960 [2024-09-28 01:29:53.776052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.960 [2024-09-28 01:29:53.776068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:57.960 [2024-09-28 01:29:53.776075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:57.960 [2024-09-28 01:29:53.776082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:57.960 [2024-09-28 01:29:53.776089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.960 [2024-09-28 01:29:53.851831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:57.960 [2024-09-28 01:29:53.851874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:57.960 [2024-09-28 01:29:53.851886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:57.960 [2024-09-28 01:29:53.851894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.218 [2024-09-28 01:29:53.914812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:58.218 [2024-09-28 01:29:53.914952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:58.218 [2024-09-28 01:29:53.914967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:58.218 [2024-09-28 01:29:53.914975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.218 [2024-09-28 01:29:53.915031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:58.218 [2024-09-28 01:29:53.915041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:58.218 [2024-09-28 01:29:53.915053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:58.218 [2024-09-28 01:29:53.915060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.218 [2024-09-28 01:29:53.915088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:58.218 [2024-09-28 01:29:53.915096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:58.218 [2024-09-28 01:29:53.915104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:58.218 [2024-09-28 01:29:53.915111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.218 [2024-09-28 01:29:53.915227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:58.218 [2024-09-28 01:29:53.915238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:58.218 [2024-09-28 01:29:53.915246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:58.218 [2024-09-28 01:29:53.915256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.218 [2024-09-28 01:29:53.915286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:58.218 [2024-09-28 01:29:53.915296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:58.218 [2024-09-28 01:29:53.915304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:58.218 [2024-09-28 01:29:53.915311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.218 [2024-09-28 01:29:53.915345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:16:58.218 [2024-09-28 01:29:53.915353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:58.218 [2024-09-28 01:29:53.915360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:58.218 [2024-09-28 01:29:53.915370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.218 [2024-09-28 01:29:53.915409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:58.218 [2024-09-28 01:29:53.915418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:58.218 [2024-09-28 01:29:53.915425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:58.218 [2024-09-28 01:29:53.915432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.218 [2024-09-28 01:29:53.915562] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 331.472 ms, result 0 00:16:58.786 00:16:58.786 00:16:58.786 01:29:54 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:16:59.352 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:16:59.352 01:29:55 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:16:59.352 01:29:55 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:16:59.352 01:29:55 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:16:59.352 01:29:55 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:59.352 01:29:55 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:16:59.352 01:29:55 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:16:59.610 Process with pid 74379 is not found 00:16:59.610 01:29:55 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 74379 00:16:59.610 01:29:55 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 74379 ']' 00:16:59.610 01:29:55 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 74379 00:16:59.610 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (74379) - No such process 00:16:59.610 01:29:55 ftl.ftl_trim -- common/autotest_common.sh@977 -- # echo 'Process with pid 74379 is not found' 00:16:59.610 00:16:59.610 real 0m51.449s 00:16:59.610 user 1m13.345s 00:16:59.610 sys 0m5.116s 00:16:59.610 ************************************ 00:16:59.610 END TEST ftl_trim 00:16:59.610 ************************************ 00:16:59.610 01:29:55 ftl.ftl_trim -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:59.610 01:29:55 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:16:59.610 01:29:55 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:16:59.610 01:29:55 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:16:59.610 01:29:55 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:59.610 01:29:55 ftl -- common/autotest_common.sh@10 -- # set +x 00:16:59.610 ************************************ 00:16:59.610 START TEST ftl_restore 00:16:59.610 ************************************ 00:16:59.610 01:29:55 ftl.ftl_restore -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:16:59.610 * Looking for test storage... 
00:16:59.610 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:16:59.610 01:29:55 ftl.ftl_restore -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:59.610 01:29:55 ftl.ftl_restore -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:59.610 01:29:55 ftl.ftl_restore -- common/autotest_common.sh@1681 -- # lcov --version 00:16:59.610 01:29:55 ftl.ftl_restore -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:59.610 01:29:55 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:59.610 01:29:55 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:59.610 01:29:55 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:59.610 01:29:55 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:16:59.610 01:29:55 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:16:59.610 01:29:55 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:16:59.610 01:29:55 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:16:59.610 01:29:55 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:16:59.610 01:29:55 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:16:59.610 01:29:55 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:16:59.610 01:29:55 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:59.610 01:29:55 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:16:59.610 01:29:55 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:16:59.610 01:29:55 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:59.610 01:29:55 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:59.610 01:29:55 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:16:59.610 01:29:55 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:16:59.610 01:29:55 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:59.610 01:29:55 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:16:59.610 01:29:55 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:16:59.610 01:29:55 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:16:59.611 01:29:55 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:16:59.611 01:29:55 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:59.611 01:29:55 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:16:59.611 01:29:55 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:16:59.611 01:29:55 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:59.611 01:29:55 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:59.611 01:29:55 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:16:59.611 01:29:55 ftl.ftl_restore -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:59.611 01:29:55 ftl.ftl_restore -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:59.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:59.611 --rc genhtml_branch_coverage=1 00:16:59.611 --rc genhtml_function_coverage=1 00:16:59.611 --rc genhtml_legend=1 00:16:59.611 --rc geninfo_all_blocks=1 00:16:59.611 --rc geninfo_unexecuted_blocks=1 00:16:59.611 00:16:59.611 ' 00:16:59.611 01:29:55 ftl.ftl_restore -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:59.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:59.611 --rc genhtml_branch_coverage=1 00:16:59.611 --rc genhtml_function_coverage=1 
00:16:59.611 --rc genhtml_legend=1 00:16:59.611 --rc geninfo_all_blocks=1 00:16:59.611 --rc geninfo_unexecuted_blocks=1 00:16:59.611 00:16:59.611 ' 00:16:59.611 01:29:55 ftl.ftl_restore -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:59.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:59.611 --rc genhtml_branch_coverage=1 00:16:59.611 --rc genhtml_function_coverage=1 00:16:59.611 --rc genhtml_legend=1 00:16:59.611 --rc geninfo_all_blocks=1 00:16:59.611 --rc geninfo_unexecuted_blocks=1 00:16:59.611 00:16:59.611 ' 00:16:59.611 01:29:55 ftl.ftl_restore -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:59.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:59.611 --rc genhtml_branch_coverage=1 00:16:59.611 --rc genhtml_function_coverage=1 00:16:59.611 --rc genhtml_legend=1 00:16:59.611 --rc geninfo_all_blocks=1 00:16:59.611 --rc geninfo_unexecuted_blocks=1 00:16:59.611 00:16:59.611 ' 00:16:59.611 01:29:55 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:16:59.611 01:29:55 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:16:59.611 01:29:55 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:16:59.611 01:29:55 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:16:59.611 01:29:55 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:16:59.611 01:29:55 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:59.611 01:29:55 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:59.611 01:29:55 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:16:59.611 01:29:55 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:16:59.611 01:29:55 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:59.611 01:29:55 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:59.611 01:29:55 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:16:59.611 01:29:55 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:16:59.611 01:29:55 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:59.611 01:29:55 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:59.611 01:29:55 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:16:59.611 01:29:55 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:16:59.611 01:29:55 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:59.611 01:29:55 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:59.611 01:29:55 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:16:59.611 01:29:55 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:16:59.611 01:29:55 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:59.611 01:29:55 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:59.611 01:29:55 ftl.ftl_restore -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:59.611 01:29:55 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:59.611 01:29:55 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:16:59.611 01:29:55 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:16:59.611 01:29:55 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:59.611 01:29:55 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:59.611 01:29:55 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:59.611 01:29:55 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:16:59.611 01:29:55 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.Hn90G6XsN5 00:16:59.611 01:29:55 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:16:59.611 01:29:55 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:16:59.611 01:29:55 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:16:59.611 01:29:55 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:16:59.611 01:29:55 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:16:59.611 01:29:55 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:16:59.611 01:29:55 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:16:59.611 01:29:55 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:16:59.611 01:29:55 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=74585 00:16:59.611 01:29:55 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 74585 00:16:59.611 01:29:55 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:59.611 01:29:55 ftl.ftl_restore -- common/autotest_common.sh@831 -- # '[' -z 74585 ']' 00:16:59.611 01:29:55 ftl.ftl_restore -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:59.611 01:29:55 ftl.ftl_restore -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:59.611 01:29:55 ftl.ftl_restore -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:59.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:59.611 01:29:55 ftl.ftl_restore -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:59.611 01:29:55 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:16:59.869 [2024-09-28 01:29:55.587672] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
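restore.sh has now registered its restore_kill trap, recorded the new target's pid (74585), and blocked in waitforlisten until the RPC socket answers; only then do the SPDK/DPDK startup notices below appear. A rough sketch of that wait loop, assuming the default /var/tmp/spdk.sock socket and the standard rpc_get_methods RPC (the real helper in autotest_common.sh is more elaborate):

# Poll the RPC socket until the target responds or dies; the 100 x 100 ms
# retry bound is an illustrative assumption, not the autotest value.
wait_for_rpc() {
    local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
    for ((i = 0; i < 100; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1   # target exited early
        scripts/rpc.py -s "$sock" rpc_get_methods &>/dev/null && return 0
        sleep 0.1
    done
    return 1
}

build/bin/spdk_tgt &
wait_for_rpc $!
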
00:16:59.869 [2024-09-28 01:29:55.587789] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74585 ] 00:16:59.869 [2024-09-28 01:29:55.736224] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:00.127 [2024-09-28 01:29:55.911607] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:00.692 01:29:56 ftl.ftl_restore -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:00.692 01:29:56 ftl.ftl_restore -- common/autotest_common.sh@864 -- # return 0 00:17:00.692 01:29:56 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:17:00.693 01:29:56 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:17:00.693 01:29:56 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:17:00.693 01:29:56 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:17:00.693 01:29:56 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:17:00.693 01:29:56 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:17:00.951 01:29:56 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:17:00.951 01:29:56 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:17:00.951 01:29:56 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:17:00.951 01:29:56 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:17:00.951 01:29:56 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:00.951 01:29:56 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:17:00.951 01:29:56 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:17:00.951 01:29:56 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:17:01.209 01:29:56 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:01.209 { 00:17:01.209 "name": "nvme0n1", 00:17:01.209 "aliases": [ 00:17:01.209 "1fb16240-bad0-46dd-96b4-0636c03b5269" 00:17:01.209 ], 00:17:01.209 "product_name": "NVMe disk", 00:17:01.209 "block_size": 4096, 00:17:01.209 "num_blocks": 1310720, 00:17:01.209 "uuid": "1fb16240-bad0-46dd-96b4-0636c03b5269", 00:17:01.209 "numa_id": -1, 00:17:01.209 "assigned_rate_limits": { 00:17:01.209 "rw_ios_per_sec": 0, 00:17:01.209 "rw_mbytes_per_sec": 0, 00:17:01.209 "r_mbytes_per_sec": 0, 00:17:01.209 "w_mbytes_per_sec": 0 00:17:01.209 }, 00:17:01.209 "claimed": true, 00:17:01.209 "claim_type": "read_many_write_one", 00:17:01.209 "zoned": false, 00:17:01.209 "supported_io_types": { 00:17:01.209 "read": true, 00:17:01.209 "write": true, 00:17:01.209 "unmap": true, 00:17:01.209 "flush": true, 00:17:01.209 "reset": true, 00:17:01.209 "nvme_admin": true, 00:17:01.209 "nvme_io": true, 00:17:01.209 "nvme_io_md": false, 00:17:01.209 "write_zeroes": true, 00:17:01.209 "zcopy": false, 00:17:01.209 "get_zone_info": false, 00:17:01.209 "zone_management": false, 00:17:01.209 "zone_append": false, 00:17:01.209 "compare": true, 00:17:01.209 "compare_and_write": false, 00:17:01.209 "abort": true, 00:17:01.209 "seek_hole": false, 00:17:01.209 "seek_data": false, 00:17:01.209 "copy": true, 00:17:01.209 "nvme_iov_md": false 00:17:01.209 }, 00:17:01.209 "driver_specific": { 00:17:01.209 "nvme": [ 
00:17:01.209 { 00:17:01.209 "pci_address": "0000:00:11.0", 00:17:01.209 "trid": { 00:17:01.209 "trtype": "PCIe", 00:17:01.209 "traddr": "0000:00:11.0" 00:17:01.209 }, 00:17:01.209 "ctrlr_data": { 00:17:01.209 "cntlid": 0, 00:17:01.209 "vendor_id": "0x1b36", 00:17:01.209 "model_number": "QEMU NVMe Ctrl", 00:17:01.209 "serial_number": "12341", 00:17:01.209 "firmware_revision": "8.0.0", 00:17:01.209 "subnqn": "nqn.2019-08.org.qemu:12341", 00:17:01.209 "oacs": { 00:17:01.209 "security": 0, 00:17:01.209 "format": 1, 00:17:01.209 "firmware": 0, 00:17:01.209 "ns_manage": 1 00:17:01.209 }, 00:17:01.209 "multi_ctrlr": false, 00:17:01.209 "ana_reporting": false 00:17:01.209 }, 00:17:01.209 "vs": { 00:17:01.209 "nvme_version": "1.4" 00:17:01.209 }, 00:17:01.209 "ns_data": { 00:17:01.209 "id": 1, 00:17:01.209 "can_share": false 00:17:01.209 } 00:17:01.209 } 00:17:01.209 ], 00:17:01.209 "mp_policy": "active_passive" 00:17:01.209 } 00:17:01.209 } 00:17:01.209 ]' 00:17:01.209 01:29:56 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:01.209 01:29:56 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:17:01.209 01:29:56 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:01.209 01:29:57 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=1310720 00:17:01.209 01:29:57 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:17:01.209 01:29:57 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 5120 00:17:01.209 01:29:57 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:17:01.209 01:29:57 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:17:01.209 01:29:57 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:17:01.209 01:29:57 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:01.209 01:29:57 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:17:01.468 01:29:57 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=3534499c-7045-4bf4-ac75-be921f7584a0 00:17:01.468 01:29:57 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:17:01.468 01:29:57 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3534499c-7045-4bf4-ac75-be921f7584a0 00:17:01.727 01:29:57 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:17:01.727 01:29:57 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=37470b13-3894-4f0c-a98c-4acd423e003c 00:17:01.727 01:29:57 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 37470b13-3894-4f0c-a98c-4acd423e003c 00:17:01.985 01:29:57 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=d31af0a8-048e-4d4d-909b-590c650977fb 00:17:01.985 01:29:57 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:17:01.985 01:29:57 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 d31af0a8-048e-4d4d-909b-590c650977fb 00:17:01.985 01:29:57 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:17:01.985 01:29:57 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:17:01.985 01:29:57 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=d31af0a8-048e-4d4d-909b-590c650977fb 00:17:01.985 01:29:57 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:17:01.985 01:29:57 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 
d31af0a8-048e-4d4d-909b-590c650977fb 00:17:01.985 01:29:57 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=d31af0a8-048e-4d4d-909b-590c650977fb 00:17:01.985 01:29:57 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:01.985 01:29:57 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:17:01.985 01:29:57 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:17:01.985 01:29:57 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d31af0a8-048e-4d4d-909b-590c650977fb 00:17:02.243 01:29:58 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:02.243 { 00:17:02.243 "name": "d31af0a8-048e-4d4d-909b-590c650977fb", 00:17:02.243 "aliases": [ 00:17:02.243 "lvs/nvme0n1p0" 00:17:02.243 ], 00:17:02.243 "product_name": "Logical Volume", 00:17:02.243 "block_size": 4096, 00:17:02.243 "num_blocks": 26476544, 00:17:02.243 "uuid": "d31af0a8-048e-4d4d-909b-590c650977fb", 00:17:02.243 "assigned_rate_limits": { 00:17:02.243 "rw_ios_per_sec": 0, 00:17:02.243 "rw_mbytes_per_sec": 0, 00:17:02.243 "r_mbytes_per_sec": 0, 00:17:02.243 "w_mbytes_per_sec": 0 00:17:02.243 }, 00:17:02.243 "claimed": false, 00:17:02.243 "zoned": false, 00:17:02.243 "supported_io_types": { 00:17:02.243 "read": true, 00:17:02.243 "write": true, 00:17:02.243 "unmap": true, 00:17:02.243 "flush": false, 00:17:02.243 "reset": true, 00:17:02.243 "nvme_admin": false, 00:17:02.243 "nvme_io": false, 00:17:02.243 "nvme_io_md": false, 00:17:02.243 "write_zeroes": true, 00:17:02.243 "zcopy": false, 00:17:02.243 "get_zone_info": false, 00:17:02.243 "zone_management": false, 00:17:02.243 "zone_append": false, 00:17:02.243 "compare": false, 00:17:02.243 "compare_and_write": false, 00:17:02.243 "abort": false, 00:17:02.243 "seek_hole": true, 00:17:02.243 "seek_data": true, 00:17:02.243 "copy": false, 00:17:02.243 "nvme_iov_md": false 00:17:02.243 }, 00:17:02.243 "driver_specific": { 00:17:02.243 "lvol": { 00:17:02.243 "lvol_store_uuid": "37470b13-3894-4f0c-a98c-4acd423e003c", 00:17:02.243 "base_bdev": "nvme0n1", 00:17:02.243 "thin_provision": true, 00:17:02.243 "num_allocated_clusters": 0, 00:17:02.243 "snapshot": false, 00:17:02.243 "clone": false, 00:17:02.243 "esnap_clone": false 00:17:02.243 } 00:17:02.243 } 00:17:02.243 } 00:17:02.243 ]' 00:17:02.243 01:29:58 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:02.243 01:29:58 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:17:02.243 01:29:58 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:02.243 01:29:58 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:17:02.243 01:29:58 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:17:02.243 01:29:58 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:17:02.243 01:29:58 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:17:02.243 01:29:58 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:17:02.243 01:29:58 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:17:02.501 01:29:58 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:17:02.501 01:29:58 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:17:02.501 01:29:58 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size d31af0a8-048e-4d4d-909b-590c650977fb 00:17:02.501 01:29:58 
ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=d31af0a8-048e-4d4d-909b-590c650977fb 00:17:02.501 01:29:58 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:02.501 01:29:58 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:17:02.501 01:29:58 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:17:02.501 01:29:58 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d31af0a8-048e-4d4d-909b-590c650977fb 00:17:02.760 01:29:58 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:02.760 { 00:17:02.760 "name": "d31af0a8-048e-4d4d-909b-590c650977fb", 00:17:02.760 "aliases": [ 00:17:02.760 "lvs/nvme0n1p0" 00:17:02.760 ], 00:17:02.760 "product_name": "Logical Volume", 00:17:02.760 "block_size": 4096, 00:17:02.760 "num_blocks": 26476544, 00:17:02.760 "uuid": "d31af0a8-048e-4d4d-909b-590c650977fb", 00:17:02.760 "assigned_rate_limits": { 00:17:02.760 "rw_ios_per_sec": 0, 00:17:02.760 "rw_mbytes_per_sec": 0, 00:17:02.760 "r_mbytes_per_sec": 0, 00:17:02.760 "w_mbytes_per_sec": 0 00:17:02.760 }, 00:17:02.760 "claimed": false, 00:17:02.760 "zoned": false, 00:17:02.760 "supported_io_types": { 00:17:02.760 "read": true, 00:17:02.760 "write": true, 00:17:02.760 "unmap": true, 00:17:02.760 "flush": false, 00:17:02.760 "reset": true, 00:17:02.760 "nvme_admin": false, 00:17:02.760 "nvme_io": false, 00:17:02.760 "nvme_io_md": false, 00:17:02.760 "write_zeroes": true, 00:17:02.760 "zcopy": false, 00:17:02.760 "get_zone_info": false, 00:17:02.760 "zone_management": false, 00:17:02.760 "zone_append": false, 00:17:02.760 "compare": false, 00:17:02.760 "compare_and_write": false, 00:17:02.760 "abort": false, 00:17:02.760 "seek_hole": true, 00:17:02.760 "seek_data": true, 00:17:02.760 "copy": false, 00:17:02.760 "nvme_iov_md": false 00:17:02.760 }, 00:17:02.760 "driver_specific": { 00:17:02.760 "lvol": { 00:17:02.760 "lvol_store_uuid": "37470b13-3894-4f0c-a98c-4acd423e003c", 00:17:02.760 "base_bdev": "nvme0n1", 00:17:02.760 "thin_provision": true, 00:17:02.760 "num_allocated_clusters": 0, 00:17:02.760 "snapshot": false, 00:17:02.760 "clone": false, 00:17:02.760 "esnap_clone": false 00:17:02.760 } 00:17:02.760 } 00:17:02.760 } 00:17:02.760 ]' 00:17:02.760 01:29:58 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:02.760 01:29:58 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:17:02.760 01:29:58 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:02.760 01:29:58 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:17:02.760 01:29:58 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:17:02.760 01:29:58 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:17:02.760 01:29:58 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:17:02.760 01:29:58 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:17:03.018 01:29:58 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:17:03.018 01:29:58 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size d31af0a8-048e-4d4d-909b-590c650977fb 00:17:03.018 01:29:58 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=d31af0a8-048e-4d4d-909b-590c650977fb 00:17:03.018 01:29:58 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:03.018 01:29:58 ftl.ftl_restore -- 
common/autotest_common.sh@1380 -- # local bs 00:17:03.018 01:29:58 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:17:03.018 01:29:58 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d31af0a8-048e-4d4d-909b-590c650977fb 00:17:03.277 01:29:59 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:03.277 { 00:17:03.277 "name": "d31af0a8-048e-4d4d-909b-590c650977fb", 00:17:03.277 "aliases": [ 00:17:03.277 "lvs/nvme0n1p0" 00:17:03.277 ], 00:17:03.277 "product_name": "Logical Volume", 00:17:03.277 "block_size": 4096, 00:17:03.277 "num_blocks": 26476544, 00:17:03.277 "uuid": "d31af0a8-048e-4d4d-909b-590c650977fb", 00:17:03.277 "assigned_rate_limits": { 00:17:03.277 "rw_ios_per_sec": 0, 00:17:03.277 "rw_mbytes_per_sec": 0, 00:17:03.277 "r_mbytes_per_sec": 0, 00:17:03.277 "w_mbytes_per_sec": 0 00:17:03.277 }, 00:17:03.277 "claimed": false, 00:17:03.277 "zoned": false, 00:17:03.277 "supported_io_types": { 00:17:03.277 "read": true, 00:17:03.277 "write": true, 00:17:03.277 "unmap": true, 00:17:03.277 "flush": false, 00:17:03.277 "reset": true, 00:17:03.277 "nvme_admin": false, 00:17:03.277 "nvme_io": false, 00:17:03.277 "nvme_io_md": false, 00:17:03.277 "write_zeroes": true, 00:17:03.277 "zcopy": false, 00:17:03.277 "get_zone_info": false, 00:17:03.277 "zone_management": false, 00:17:03.277 "zone_append": false, 00:17:03.277 "compare": false, 00:17:03.277 "compare_and_write": false, 00:17:03.277 "abort": false, 00:17:03.277 "seek_hole": true, 00:17:03.277 "seek_data": true, 00:17:03.277 "copy": false, 00:17:03.277 "nvme_iov_md": false 00:17:03.277 }, 00:17:03.277 "driver_specific": { 00:17:03.277 "lvol": { 00:17:03.277 "lvol_store_uuid": "37470b13-3894-4f0c-a98c-4acd423e003c", 00:17:03.277 "base_bdev": "nvme0n1", 00:17:03.277 "thin_provision": true, 00:17:03.277 "num_allocated_clusters": 0, 00:17:03.277 "snapshot": false, 00:17:03.277 "clone": false, 00:17:03.277 "esnap_clone": false 00:17:03.277 } 00:17:03.277 } 00:17:03.277 } 00:17:03.277 ]' 00:17:03.277 01:29:59 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:03.277 01:29:59 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:17:03.277 01:29:59 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:03.277 01:29:59 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:17:03.277 01:29:59 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:17:03.277 01:29:59 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:17:03.277 01:29:59 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:17:03.277 01:29:59 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d d31af0a8-048e-4d4d-909b-590c650977fb --l2p_dram_limit 10' 00:17:03.277 01:29:59 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:17:03.277 01:29:59 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:17:03.277 01:29:59 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:17:03.277 01:29:59 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:17:03.277 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:17:03.277 01:29:59 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d d31af0a8-048e-4d4d-909b-590c650977fb --l2p_dram_limit 10 -c nvc0n1p0 00:17:03.536 
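Everything from the controller attach down to this bdev_ftl_create call is the standard recipe for an FTL bdev with a separate NV cache: a thin-provisioned lvol on the base namespace becomes the data device, and a split of the cache namespace becomes the write buffer. (The "integer expression expected" complaint at restore.sh line 54 comes from testing an empty getopts value with -eq and appears benign here; the run continues.) Condensed to the bare RPC sequence, with the sizes and UUIDs exactly as they appear in this log:

# Condensed from the trace above: 103424 MiB thin lvol, one 5171 MiB
# cache split, 10 MiB L2P DRAM limit. UUIDs are the ones from this run.
rpc=scripts/rpc.py
$rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0   # base device
$rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0    # NV cache device
$rpc bdev_lvol_create_lvstore nvme0n1 lvs
lvol=$($rpc bdev_lvol_create nvme0n1p0 103424 -t -u 37470b13-3894-4f0c-a98c-4acd423e003c)
$rpc bdev_split_create nvc0n1 -s 5171 1                             # yields nvc0n1p0
$rpc -t 240 bdev_ftl_create -b ftl0 -d "$lvol" --l2p_dram_limit 10 -c nvc0n1p0
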
[2024-09-28 01:29:59.288243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.536 [2024-09-28 01:29:59.288283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:03.536 [2024-09-28 01:29:59.288296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:03.536 [2024-09-28 01:29:59.288303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.536 [2024-09-28 01:29:59.288347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.536 [2024-09-28 01:29:59.288355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:03.536 [2024-09-28 01:29:59.288363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:17:03.536 [2024-09-28 01:29:59.288369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.536 [2024-09-28 01:29:59.288389] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:03.536 [2024-09-28 01:29:59.288958] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:03.536 [2024-09-28 01:29:59.288975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.536 [2024-09-28 01:29:59.288981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:03.536 [2024-09-28 01:29:59.288989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.591 ms 00:17:03.536 [2024-09-28 01:29:59.288996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.536 [2024-09-28 01:29:59.289119] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 80515e57-70d8-4c65-ba65-e59d4cf5e2ab 00:17:03.536 [2024-09-28 01:29:59.290090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.536 [2024-09-28 01:29:59.290113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:17:03.536 [2024-09-28 01:29:59.290120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:17:03.536 [2024-09-28 01:29:59.290127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.536 [2024-09-28 01:29:59.294800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.536 [2024-09-28 01:29:59.294829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:03.536 [2024-09-28 01:29:59.294836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.637 ms 00:17:03.536 [2024-09-28 01:29:59.294843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.536 [2024-09-28 01:29:59.294908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.536 [2024-09-28 01:29:59.294917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:03.536 [2024-09-28 01:29:59.294924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:17:03.536 [2024-09-28 01:29:59.294935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.536 [2024-09-28 01:29:59.294965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.536 [2024-09-28 01:29:59.294973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:03.536 [2024-09-28 01:29:59.294979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:03.536 [2024-09-28 01:29:59.294986] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.536 [2024-09-28 01:29:59.295003] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:03.536 [2024-09-28 01:29:59.297848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.536 [2024-09-28 01:29:59.297871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:03.536 [2024-09-28 01:29:59.297882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.848 ms 00:17:03.536 [2024-09-28 01:29:59.297888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.536 [2024-09-28 01:29:59.297914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.536 [2024-09-28 01:29:59.297921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:03.536 [2024-09-28 01:29:59.297928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:03.536 [2024-09-28 01:29:59.297936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.536 [2024-09-28 01:29:59.297963] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:17:03.536 [2024-09-28 01:29:59.298066] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:03.536 [2024-09-28 01:29:59.298078] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:03.536 [2024-09-28 01:29:59.298086] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:03.536 [2024-09-28 01:29:59.298097] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:03.536 [2024-09-28 01:29:59.298104] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:03.536 [2024-09-28 01:29:59.298111] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:17:03.536 [2024-09-28 01:29:59.298117] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:03.536 [2024-09-28 01:29:59.298124] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:03.536 [2024-09-28 01:29:59.298129] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:03.536 [2024-09-28 01:29:59.298137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.536 [2024-09-28 01:29:59.298147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:03.536 [2024-09-28 01:29:59.298154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.175 ms 00:17:03.536 [2024-09-28 01:29:59.298160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.536 [2024-09-28 01:29:59.298238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.536 [2024-09-28 01:29:59.298247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:03.536 [2024-09-28 01:29:59.298254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:17:03.536 [2024-09-28 01:29:59.298260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.536 [2024-09-28 01:29:59.298334] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:03.536 [2024-09-28 01:29:59.298342] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region sb 00:17:03.536 [2024-09-28 01:29:59.298349] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:03.536 [2024-09-28 01:29:59.298355] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:03.536 [2024-09-28 01:29:59.298362] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:03.536 [2024-09-28 01:29:59.298368] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:03.537 [2024-09-28 01:29:59.298374] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:17:03.537 [2024-09-28 01:29:59.298379] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:03.537 [2024-09-28 01:29:59.298386] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:17:03.537 [2024-09-28 01:29:59.298391] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:03.537 [2024-09-28 01:29:59.298397] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:03.537 [2024-09-28 01:29:59.298402] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:17:03.537 [2024-09-28 01:29:59.298408] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:03.537 [2024-09-28 01:29:59.298413] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:03.537 [2024-09-28 01:29:59.298420] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:17:03.537 [2024-09-28 01:29:59.298425] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:03.537 [2024-09-28 01:29:59.298432] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:03.537 [2024-09-28 01:29:59.298437] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:17:03.537 [2024-09-28 01:29:59.298443] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:03.537 [2024-09-28 01:29:59.298449] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:03.537 [2024-09-28 01:29:59.298456] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:17:03.537 [2024-09-28 01:29:59.298461] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:03.537 [2024-09-28 01:29:59.298467] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:03.537 [2024-09-28 01:29:59.298472] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:17:03.537 [2024-09-28 01:29:59.298478] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:03.537 [2024-09-28 01:29:59.298483] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:03.537 [2024-09-28 01:29:59.298489] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:17:03.537 [2024-09-28 01:29:59.298494] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:03.537 [2024-09-28 01:29:59.298501] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:03.537 [2024-09-28 01:29:59.298506] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:17:03.537 [2024-09-28 01:29:59.298512] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:03.537 [2024-09-28 01:29:59.298517] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:03.537 [2024-09-28 01:29:59.298525] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:17:03.537 [2024-09-28 01:29:59.298529] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:03.537 [2024-09-28 01:29:59.298536] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:03.537 [2024-09-28 01:29:59.298542] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:17:03.537 [2024-09-28 01:29:59.298548] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:03.537 [2024-09-28 01:29:59.298553] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:03.537 [2024-09-28 01:29:59.298559] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:17:03.537 [2024-09-28 01:29:59.298564] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:03.537 [2024-09-28 01:29:59.298571] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:03.537 [2024-09-28 01:29:59.298577] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:17:03.537 [2024-09-28 01:29:59.298583] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:03.537 [2024-09-28 01:29:59.298587] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:03.537 [2024-09-28 01:29:59.298595] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:03.537 [2024-09-28 01:29:59.298601] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:03.537 [2024-09-28 01:29:59.298609] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:03.537 [2024-09-28 01:29:59.298615] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:03.537 [2024-09-28 01:29:59.298623] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:03.537 [2024-09-28 01:29:59.298628] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:03.537 [2024-09-28 01:29:59.298635] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:03.537 [2024-09-28 01:29:59.298639] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:03.537 [2024-09-28 01:29:59.298646] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:03.537 [2024-09-28 01:29:59.298654] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:03.537 [2024-09-28 01:29:59.298662] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:03.537 [2024-09-28 01:29:59.298668] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:17:03.537 [2024-09-28 01:29:59.298675] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:17:03.537 [2024-09-28 01:29:59.298680] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:17:03.537 [2024-09-28 01:29:59.298686] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:17:03.537 [2024-09-28 01:29:59.298692] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:17:03.537 [2024-09-28 01:29:59.298763] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 
blk_offs:0x6120 blk_sz:0x800 00:17:03.537 [2024-09-28 01:29:59.298768] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:17:03.537 [2024-09-28 01:29:59.298775] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:17:03.537 [2024-09-28 01:29:59.298781] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:17:03.537 [2024-09-28 01:29:59.298789] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:17:03.537 [2024-09-28 01:29:59.298795] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:17:03.537 [2024-09-28 01:29:59.298801] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:17:03.537 [2024-09-28 01:29:59.298808] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:17:03.537 [2024-09-28 01:29:59.298815] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:17:03.537 [2024-09-28 01:29:59.298820] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:03.537 [2024-09-28 01:29:59.298829] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:03.537 [2024-09-28 01:29:59.298835] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:03.537 [2024-09-28 01:29:59.298842] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:03.537 [2024-09-28 01:29:59.298848] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:03.537 [2024-09-28 01:29:59.298854] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:03.537 [2024-09-28 01:29:59.298860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.537 [2024-09-28 01:29:59.298867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:03.537 [2024-09-28 01:29:59.298873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.580 ms 00:17:03.537 [2024-09-28 01:29:59.298880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.537 [2024-09-28 01:29:59.298922] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
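The layout dump just printed is internally consistent, and the arithmetic is worth spelling out: 20971520 L2P entries at the reported 4-byte address size is exactly the 80.00 MiB l2p region, each 2048-page P2L checkpoint at the 4 KiB block size is exactly one 8.00 MiB p2l region, and the 103424 MiB base capacity divided into 4 KiB blocks matches the lvol's 26476544 num_blocks from the bdev dumps above. A quick shell check:

# All inputs below are taken from the layout dump / bdev JSON in this log.
echo $(( 20971520 * 4 / 1024 / 1024 ))   # L2P entries x 4 B        -> 80 (MiB)
echo $(( 2048 * 4096 / 1024 / 1024 ))    # ckpt pages x 4 KiB block -> 8 (MiB)
echo $(( 103424 * 1024 / 4 ))            # base MiB / 4 KiB blocks  -> 26476544
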
00:17:03.537 [2024-09-28 01:29:59.298932] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:17:05.439 [2024-09-28 01:30:01.230059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.439 [2024-09-28 01:30:01.230120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:17:05.439 [2024-09-28 01:30:01.230134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1931.128 ms 00:17:05.439 [2024-09-28 01:30:01.230145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.439 [2024-09-28 01:30:01.255178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.439 [2024-09-28 01:30:01.255231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:05.439 [2024-09-28 01:30:01.255243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.813 ms 00:17:05.439 [2024-09-28 01:30:01.255253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.439 [2024-09-28 01:30:01.255373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.439 [2024-09-28 01:30:01.255385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:05.439 [2024-09-28 01:30:01.255394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:17:05.439 [2024-09-28 01:30:01.255408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.439 [2024-09-28 01:30:01.292707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.439 [2024-09-28 01:30:01.292751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:05.439 [2024-09-28 01:30:01.292766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.248 ms 00:17:05.439 [2024-09-28 01:30:01.292777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.439 [2024-09-28 01:30:01.292822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.439 [2024-09-28 01:30:01.292834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:05.440 [2024-09-28 01:30:01.292842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:05.440 [2024-09-28 01:30:01.292857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.440 [2024-09-28 01:30:01.293189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.440 [2024-09-28 01:30:01.293229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:05.440 [2024-09-28 01:30:01.293239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.281 ms 00:17:05.440 [2024-09-28 01:30:01.293250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.440 [2024-09-28 01:30:01.293367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.440 [2024-09-28 01:30:01.293377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:05.440 [2024-09-28 01:30:01.293385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:17:05.440 [2024-09-28 01:30:01.293396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.440 [2024-09-28 01:30:01.308269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.440 [2024-09-28 01:30:01.308440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:05.440 [2024-09-28 
01:30:01.308460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.856 ms 00:17:05.440 [2024-09-28 01:30:01.308474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.440 [2024-09-28 01:30:01.319735] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:17:05.440 [2024-09-28 01:30:01.322427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.440 [2024-09-28 01:30:01.322455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:05.440 [2024-09-28 01:30:01.322469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.864 ms 00:17:05.440 [2024-09-28 01:30:01.322476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.736 [2024-09-28 01:30:01.375449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.736 [2024-09-28 01:30:01.375495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:17:05.736 [2024-09-28 01:30:01.375512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 52.945 ms 00:17:05.736 [2024-09-28 01:30:01.375520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.736 [2024-09-28 01:30:01.375695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.736 [2024-09-28 01:30:01.375706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:05.736 [2024-09-28 01:30:01.375718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.136 ms 00:17:05.736 [2024-09-28 01:30:01.375725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.736 [2024-09-28 01:30:01.398579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.736 [2024-09-28 01:30:01.398710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:17:05.736 [2024-09-28 01:30:01.398730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.809 ms 00:17:05.736 [2024-09-28 01:30:01.398738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.736 [2024-09-28 01:30:01.420336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.736 [2024-09-28 01:30:01.420366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:17:05.736 [2024-09-28 01:30:01.420379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.563 ms 00:17:05.736 [2024-09-28 01:30:01.420386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.736 [2024-09-28 01:30:01.420948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.736 [2024-09-28 01:30:01.420963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:05.736 [2024-09-28 01:30:01.420974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.528 ms 00:17:05.736 [2024-09-28 01:30:01.420982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.736 [2024-09-28 01:30:01.484748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.736 [2024-09-28 01:30:01.484804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:17:05.736 [2024-09-28 01:30:01.484821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 63.731 ms 00:17:05.736 [2024-09-28 01:30:01.484832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.736 [2024-09-28 
01:30:01.509005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.736 [2024-09-28 01:30:01.509176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:17:05.736 [2024-09-28 01:30:01.509218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.100 ms 00:17:05.736 [2024-09-28 01:30:01.509228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.736 [2024-09-28 01:30:01.531911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.736 [2024-09-28 01:30:01.532032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:17:05.736 [2024-09-28 01:30:01.532052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.645 ms 00:17:05.736 [2024-09-28 01:30:01.532060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.736 [2024-09-28 01:30:01.554847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.736 [2024-09-28 01:30:01.554956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:05.736 [2024-09-28 01:30:01.554974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.752 ms 00:17:05.736 [2024-09-28 01:30:01.554982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.736 [2024-09-28 01:30:01.555017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.736 [2024-09-28 01:30:01.555026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:05.736 [2024-09-28 01:30:01.555041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:05.736 [2024-09-28 01:30:01.555049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.736 [2024-09-28 01:30:01.555124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.736 [2024-09-28 01:30:01.555134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:05.736 [2024-09-28 01:30:01.555143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:17:05.736 [2024-09-28 01:30:01.555151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.736 [2024-09-28 01:30:01.556138] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2267.481 ms, result 0 00:17:05.736 { 00:17:05.736 "name": "ftl0", 00:17:05.736 "uuid": "80515e57-70d8-4c65-ba65-e59d4cf5e2ab" 00:17:05.736 } 00:17:05.736 01:30:01 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:17:05.736 01:30:01 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:17:06.002 01:30:01 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:17:06.002 01:30:01 ftl.ftl_restore -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:17:06.267 [2024-09-28 01:30:01.967605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.267 [2024-09-28 01:30:01.967650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:06.267 [2024-09-28 01:30:01.967663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:06.267 [2024-09-28 01:30:01.967673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.267 [2024-09-28 01:30:01.967696] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 
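Before tearing the device down, the test wraps the save_subsystem_config output in a {"subsystems": [...]} envelope, so the exact bdev stack can be replayed in the restore phase, and then unloads ftl0, which drives the orderly shutdown traced below (persist L2P, NV cache and band metadata, then set the clean state). The equivalent standalone commands, with the output path an assumption based on the ftl.json cleaned up earlier in this log:

# Snapshot the bdev subsystem config for the later restore, then unload.
{
    echo '{"subsystems": ['
    scripts/rpc.py save_subsystem_config -n bdev
    echo ']}'
} > test/ftl/config/ftl.json
scripts/rpc.py bdev_ftl_unload -b ftl0    # flushes metadata, marks FTL clean
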
00:17:06.267 [2024-09-28 01:30:01.970307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.267 [2024-09-28 01:30:01.970335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:06.267 [2024-09-28 01:30:01.970355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.593 ms 00:17:06.267 [2024-09-28 01:30:01.970363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.267 [2024-09-28 01:30:01.970621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.267 [2024-09-28 01:30:01.970636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:06.267 [2024-09-28 01:30:01.970645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.233 ms 00:17:06.267 [2024-09-28 01:30:01.970653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.267 [2024-09-28 01:30:01.973886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.267 [2024-09-28 01:30:01.974014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:06.267 [2024-09-28 01:30:01.974031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.216 ms 00:17:06.267 [2024-09-28 01:30:01.974041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.267 [2024-09-28 01:30:01.980284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.267 [2024-09-28 01:30:01.980385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:06.267 [2024-09-28 01:30:01.980402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.219 ms 00:17:06.267 [2024-09-28 01:30:01.980410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.267 [2024-09-28 01:30:02.003724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.267 [2024-09-28 01:30:02.003755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:06.267 [2024-09-28 01:30:02.003767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.245 ms 00:17:06.267 [2024-09-28 01:30:02.003776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.267 [2024-09-28 01:30:02.017786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.267 [2024-09-28 01:30:02.017818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:06.267 [2024-09-28 01:30:02.017831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.971 ms 00:17:06.267 [2024-09-28 01:30:02.017839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.267 [2024-09-28 01:30:02.017982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.267 [2024-09-28 01:30:02.017995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:06.267 [2024-09-28 01:30:02.018005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:17:06.267 [2024-09-28 01:30:02.018012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.267 [2024-09-28 01:30:02.040611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.267 [2024-09-28 01:30:02.040649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:06.267 [2024-09-28 01:30:02.040661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.580 ms 00:17:06.267 [2024-09-28 01:30:02.040668] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.267 [2024-09-28 01:30:02.062977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.267 [2024-09-28 01:30:02.063004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:06.267 [2024-09-28 01:30:02.063016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.274 ms 00:17:06.267 [2024-09-28 01:30:02.063023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.267 [2024-09-28 01:30:02.085103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.267 [2024-09-28 01:30:02.085132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:06.267 [2024-09-28 01:30:02.085144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.043 ms 00:17:06.267 [2024-09-28 01:30:02.085151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.267 [2024-09-28 01:30:02.107314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.267 [2024-09-28 01:30:02.107343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:06.267 [2024-09-28 01:30:02.107354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.078 ms 00:17:06.267 [2024-09-28 01:30:02.107361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.267 [2024-09-28 01:30:02.107396] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:06.267 [2024-09-28 01:30:02.107410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:06.267 [2024-09-28 01:30:02.107421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:06.267 [2024-09-28 01:30:02.107429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:06.267 [2024-09-28 01:30:02.107438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:06.267 [2024-09-28 01:30:02.107445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:06.267 [2024-09-28 01:30:02.107454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:06.267 [2024-09-28 01:30:02.107461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:06.267 [2024-09-28 01:30:02.107472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:06.267 [2024-09-28 01:30:02.107480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:06.267 [2024-09-28 01:30:02.107489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:06.267 [2024-09-28 01:30:02.107497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:06.267 [2024-09-28 01:30:02.107506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:06.267 [2024-09-28 01:30:02.107513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:06.267 [2024-09-28 01:30:02.107522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:06.267 [2024-09-28 
01:30:02.107529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:06.267 [2024-09-28 01:30:02.107538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:06.267 [2024-09-28 01:30:02.107545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:06.267 [2024-09-28 01:30:02.107554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:06.267 [2024-09-28 01:30:02.107562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:06.267 [2024-09-28 01:30:02.107571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:06.267 [2024-09-28 01:30:02.107578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:06.267 [2024-09-28 01:30:02.107589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:06.267 [2024-09-28 01:30:02.107596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:06.267 [2024-09-28 01:30:02.107606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:06.267 [2024-09-28 01:30:02.107613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:06.267 [2024-09-28 01:30:02.107623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:06.267 [2024-09-28 01:30:02.107632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:06.267 [2024-09-28 01:30:02.107641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:06.267 [2024-09-28 01:30:02.107648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:06.267 [2024-09-28 01:30:02.107657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:06.267 [2024-09-28 01:30:02.107664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:06.267 [2024-09-28 01:30:02.107674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:06.267 [2024-09-28 01:30:02.107681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:06.267 [2024-09-28 01:30:02.107691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:06.267 [2024-09-28 01:30:02.107698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:06.267 [2024-09-28 01:30:02.107707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:06.267 [2024-09-28 01:30:02.107714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:06.267 [2024-09-28 01:30:02.107723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:06.267 [2024-09-28 01:30:02.107730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 
00:17:06.267 [2024-09-28 01:30:02.107741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:06.267 [2024-09-28 01:30:02.107749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:06.267 [2024-09-28 01:30:02.107758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:06.267 [2024-09-28 01:30:02.107766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:06.267 [2024-09-28 01:30:02.107775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:06.268 [2024-09-28 01:30:02.107782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:06.268 [2024-09-28 01:30:02.107791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:06.268 [2024-09-28 01:30:02.107798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:06.268 [2024-09-28 01:30:02.107809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:06.268 [2024-09-28 01:30:02.107816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:06.268 [2024-09-28 01:30:02.107825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:06.268 [2024-09-28 01:30:02.107832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:06.268 [2024-09-28 01:30:02.107841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:06.268 [2024-09-28 01:30:02.107849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:06.268 [2024-09-28 01:30:02.107857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:06.268 [2024-09-28 01:30:02.107865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:06.268 [2024-09-28 01:30:02.107876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:06.268 [2024-09-28 01:30:02.107883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:06.268 [2024-09-28 01:30:02.107893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:06.268 [2024-09-28 01:30:02.107900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:06.268 [2024-09-28 01:30:02.107909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:06.268 [2024-09-28 01:30:02.107916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:06.268 [2024-09-28 01:30:02.107932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:06.268 [2024-09-28 01:30:02.107939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:06.268 [2024-09-28 01:30:02.107948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 
wr_cnt: 0 state: free 00:17:06.268 [2024-09-28 01:30:02.107956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:06.268 [2024-09-28 01:30:02.107965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:06.268 [2024-09-28 01:30:02.107972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:06.268 [2024-09-28 01:30:02.107981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:06.268 [2024-09-28 01:30:02.107988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:06.268 [2024-09-28 01:30:02.107997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:06.268 [2024-09-28 01:30:02.108005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:06.268 [2024-09-28 01:30:02.108016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:06.268 [2024-09-28 01:30:02.108024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:06.268 [2024-09-28 01:30:02.108033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:06.268 [2024-09-28 01:30:02.108040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:06.268 [2024-09-28 01:30:02.108049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:06.268 [2024-09-28 01:30:02.108056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:06.268 [2024-09-28 01:30:02.108065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:06.268 [2024-09-28 01:30:02.108073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:06.268 [2024-09-28 01:30:02.108082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:06.268 [2024-09-28 01:30:02.108089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:06.268 [2024-09-28 01:30:02.108098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:06.268 [2024-09-28 01:30:02.108105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:06.268 [2024-09-28 01:30:02.108114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:06.268 [2024-09-28 01:30:02.108122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:06.268 [2024-09-28 01:30:02.108131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:06.268 [2024-09-28 01:30:02.108138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:06.268 [2024-09-28 01:30:02.108149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:06.268 [2024-09-28 01:30:02.108156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:06.268 [2024-09-28 01:30:02.108165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:06.268 [2024-09-28 01:30:02.108173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:06.268 [2024-09-28 01:30:02.108182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:06.268 [2024-09-28 01:30:02.108189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:06.268 [2024-09-28 01:30:02.108214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:06.268 [2024-09-28 01:30:02.108221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:06.268 [2024-09-28 01:30:02.108232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:06.268 [2024-09-28 01:30:02.108240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:06.268 [2024-09-28 01:30:02.108250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:06.268 [2024-09-28 01:30:02.108257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:06.268 [2024-09-28 01:30:02.108268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:06.268 [2024-09-28 01:30:02.108283] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:06.268 [2024-09-28 01:30:02.108294] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 80515e57-70d8-4c65-ba65-e59d4cf5e2ab 00:17:06.268 [2024-09-28 01:30:02.108302] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:06.268 [2024-09-28 01:30:02.108327] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:06.268 [2024-09-28 01:30:02.108335] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:06.268 [2024-09-28 01:30:02.108344] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:06.268 [2024-09-28 01:30:02.108350] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:06.268 [2024-09-28 01:30:02.108359] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:06.268 [2024-09-28 01:30:02.108368] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:06.268 [2024-09-28 01:30:02.108376] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:06.268 [2024-09-28 01:30:02.108382] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:06.268 [2024-09-28 01:30:02.108391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.268 [2024-09-28 01:30:02.108399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:06.268 [2024-09-28 01:30:02.108409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.997 ms 00:17:06.268 [2024-09-28 01:30:02.108422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.268 [2024-09-28 01:30:02.120497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.268 [2024-09-28 01:30:02.120526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 
00:17:06.268 [2024-09-28 01:30:02.120537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.044 ms 00:17:06.268 [2024-09-28 01:30:02.120544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.268 [2024-09-28 01:30:02.120912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.268 [2024-09-28 01:30:02.120927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:06.268 [2024-09-28 01:30:02.120937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.330 ms 00:17:06.268 [2024-09-28 01:30:02.120945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.268 [2024-09-28 01:30:02.157808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:06.268 [2024-09-28 01:30:02.157839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:06.268 [2024-09-28 01:30:02.157851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:06.268 [2024-09-28 01:30:02.157861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.268 [2024-09-28 01:30:02.157914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:06.268 [2024-09-28 01:30:02.157922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:06.268 [2024-09-28 01:30:02.157932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:06.268 [2024-09-28 01:30:02.157939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.268 [2024-09-28 01:30:02.157999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:06.268 [2024-09-28 01:30:02.158008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:06.268 [2024-09-28 01:30:02.158018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:06.268 [2024-09-28 01:30:02.158025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.268 [2024-09-28 01:30:02.158047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:06.268 [2024-09-28 01:30:02.158055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:06.268 [2024-09-28 01:30:02.158063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:06.268 [2024-09-28 01:30:02.158071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.526 [2024-09-28 01:30:02.232978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:06.526 [2024-09-28 01:30:02.233022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:06.526 [2024-09-28 01:30:02.233036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:06.526 [2024-09-28 01:30:02.233043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.526 [2024-09-28 01:30:02.294549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:06.526 [2024-09-28 01:30:02.294593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:06.526 [2024-09-28 01:30:02.294606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:06.526 [2024-09-28 01:30:02.294614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.526 [2024-09-28 01:30:02.294697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:06.526 [2024-09-28 01:30:02.294706] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:06.526 [2024-09-28 01:30:02.294716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:06.526 [2024-09-28 01:30:02.294723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.526 [2024-09-28 01:30:02.294770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:06.526 [2024-09-28 01:30:02.294781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:06.526 [2024-09-28 01:30:02.294791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:06.526 [2024-09-28 01:30:02.294798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.526 [2024-09-28 01:30:02.294887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:06.526 [2024-09-28 01:30:02.294896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:06.526 [2024-09-28 01:30:02.294906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:06.526 [2024-09-28 01:30:02.294913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.526 [2024-09-28 01:30:02.294944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:06.526 [2024-09-28 01:30:02.294953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:06.526 [2024-09-28 01:30:02.294965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:06.526 [2024-09-28 01:30:02.294973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.526 [2024-09-28 01:30:02.295008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:06.526 [2024-09-28 01:30:02.295017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:06.526 [2024-09-28 01:30:02.295026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:06.526 [2024-09-28 01:30:02.295034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.526 [2024-09-28 01:30:02.295077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:06.526 [2024-09-28 01:30:02.295088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:06.526 [2024-09-28 01:30:02.295097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:06.526 [2024-09-28 01:30:02.295105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.526 [2024-09-28 01:30:02.295251] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 327.590 ms, result 0 00:17:06.526 true 00:17:06.526 01:30:02 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 74585 00:17:06.526 01:30:02 ftl.ftl_restore -- common/autotest_common.sh@950 -- # '[' -z 74585 ']' 00:17:06.526 01:30:02 ftl.ftl_restore -- common/autotest_common.sh@954 -- # kill -0 74585 00:17:06.526 01:30:02 ftl.ftl_restore -- common/autotest_common.sh@955 -- # uname 00:17:06.526 01:30:02 ftl.ftl_restore -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:06.526 01:30:02 ftl.ftl_restore -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74585 00:17:06.526 killing process with pid 74585 00:17:06.526 01:30:02 ftl.ftl_restore -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:06.526 01:30:02 ftl.ftl_restore -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo 
']' 00:17:06.526 01:30:02 ftl.ftl_restore -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74585' 00:17:06.526 01:30:02 ftl.ftl_restore -- common/autotest_common.sh@969 -- # kill 74585 00:17:06.526 01:30:02 ftl.ftl_restore -- common/autotest_common.sh@974 -- # wait 74585 00:17:13.084 01:30:08 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:17:17.267 262144+0 records in 00:17:17.267 262144+0 records out 00:17:17.267 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.76138 s, 285 MB/s 00:17:17.267 01:30:12 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:17:18.642 01:30:14 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:18.642 [2024-09-28 01:30:14.423777] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:17:18.642 [2024-09-28 01:30:14.423991] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74799 ] 00:17:18.642 [2024-09-28 01:30:14.569238] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:18.901 [2024-09-28 01:30:14.745637] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:19.160 [2024-09-28 01:30:14.994515] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:19.160 [2024-09-28 01:30:14.994575] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:19.421 [2024-09-28 01:30:15.147820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.421 [2024-09-28 01:30:15.147867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:19.421 [2024-09-28 01:30:15.147880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:19.421 [2024-09-28 01:30:15.147891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.421 [2024-09-28 01:30:15.147932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.421 [2024-09-28 01:30:15.147941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:19.421 [2024-09-28 01:30:15.147949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:17:19.421 [2024-09-28 01:30:15.147956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.421 [2024-09-28 01:30:15.147972] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:19.421 [2024-09-28 01:30:15.148700] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:19.421 [2024-09-28 01:30:15.148720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.421 [2024-09-28 01:30:15.148728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:19.421 [2024-09-28 01:30:15.148736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.752 ms 00:17:19.421 [2024-09-28 01:30:15.148743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.421 [2024-09-28 01:30:15.149803] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: 
*NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:19.421 [2024-09-28 01:30:15.161921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.421 [2024-09-28 01:30:15.161956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:19.421 [2024-09-28 01:30:15.161967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.119 ms 00:17:19.421 [2024-09-28 01:30:15.161976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.421 [2024-09-28 01:30:15.162029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.421 [2024-09-28 01:30:15.162039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:19.421 [2024-09-28 01:30:15.162048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:17:19.421 [2024-09-28 01:30:15.162055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.421 [2024-09-28 01:30:15.166995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.421 [2024-09-28 01:30:15.167026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:19.421 [2024-09-28 01:30:15.167035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.879 ms 00:17:19.421 [2024-09-28 01:30:15.167043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.421 [2024-09-28 01:30:15.167114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.421 [2024-09-28 01:30:15.167124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:19.421 [2024-09-28 01:30:15.167132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:17:19.421 [2024-09-28 01:30:15.167139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.421 [2024-09-28 01:30:15.167178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.421 [2024-09-28 01:30:15.167187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:19.421 [2024-09-28 01:30:15.167212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:19.421 [2024-09-28 01:30:15.167219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.421 [2024-09-28 01:30:15.167240] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:19.421 [2024-09-28 01:30:15.170480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.421 [2024-09-28 01:30:15.170506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:19.421 [2024-09-28 01:30:15.170515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.245 ms 00:17:19.421 [2024-09-28 01:30:15.170522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.421 [2024-09-28 01:30:15.170550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.421 [2024-09-28 01:30:15.170558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:19.421 [2024-09-28 01:30:15.170565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:19.421 [2024-09-28 01:30:15.170575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.421 [2024-09-28 01:30:15.170594] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:19.421 [2024-09-28 01:30:15.170612] upgrade/ftl_sb_v5.c: 
278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:19.421 [2024-09-28 01:30:15.170645] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:19.421 [2024-09-28 01:30:15.170661] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:19.421 [2024-09-28 01:30:15.170761] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:19.421 [2024-09-28 01:30:15.170771] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:19.421 [2024-09-28 01:30:15.170784] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:19.421 [2024-09-28 01:30:15.170793] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:19.421 [2024-09-28 01:30:15.170801] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:19.421 [2024-09-28 01:30:15.170809] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:17:19.421 [2024-09-28 01:30:15.170816] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:19.421 [2024-09-28 01:30:15.170823] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:19.421 [2024-09-28 01:30:15.170830] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:19.421 [2024-09-28 01:30:15.170838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.421 [2024-09-28 01:30:15.170845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:19.421 [2024-09-28 01:30:15.170852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.245 ms 00:17:19.421 [2024-09-28 01:30:15.170859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.421 [2024-09-28 01:30:15.170943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.421 [2024-09-28 01:30:15.170951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:19.421 [2024-09-28 01:30:15.170958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:17:19.421 [2024-09-28 01:30:15.170964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.421 [2024-09-28 01:30:15.171065] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:19.421 [2024-09-28 01:30:15.171075] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:19.421 [2024-09-28 01:30:15.171083] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:19.421 [2024-09-28 01:30:15.171090] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:19.421 [2024-09-28 01:30:15.171097] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:19.421 [2024-09-28 01:30:15.171104] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:19.421 [2024-09-28 01:30:15.171111] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:17:19.421 [2024-09-28 01:30:15.171118] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:19.421 [2024-09-28 01:30:15.171125] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:17:19.421 [2024-09-28 
01:30:15.171132] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:19.421 [2024-09-28 01:30:15.171139] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:19.421 [2024-09-28 01:30:15.171145] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:17:19.421 [2024-09-28 01:30:15.171152] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:19.421 [2024-09-28 01:30:15.171163] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:19.421 [2024-09-28 01:30:15.171170] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:17:19.421 [2024-09-28 01:30:15.171177] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:19.421 [2024-09-28 01:30:15.171183] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:19.421 [2024-09-28 01:30:15.171190] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:17:19.421 [2024-09-28 01:30:15.171379] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:19.421 [2024-09-28 01:30:15.171400] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:19.421 [2024-09-28 01:30:15.171419] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:17:19.421 [2024-09-28 01:30:15.171481] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:19.421 [2024-09-28 01:30:15.171504] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:19.421 [2024-09-28 01:30:15.171522] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:17:19.421 [2024-09-28 01:30:15.171539] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:19.421 [2024-09-28 01:30:15.171558] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:19.421 [2024-09-28 01:30:15.171622] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:17:19.421 [2024-09-28 01:30:15.171641] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:19.421 [2024-09-28 01:30:15.171658] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:19.421 [2024-09-28 01:30:15.171676] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:17:19.421 [2024-09-28 01:30:15.171724] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:19.421 [2024-09-28 01:30:15.171747] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:19.421 [2024-09-28 01:30:15.171837] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:17:19.421 [2024-09-28 01:30:15.171860] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:19.421 [2024-09-28 01:30:15.171910] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:19.421 [2024-09-28 01:30:15.171931] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:17:19.422 [2024-09-28 01:30:15.171951] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:19.422 [2024-09-28 01:30:15.171996] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:19.422 [2024-09-28 01:30:15.172043] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:17:19.422 [2024-09-28 01:30:15.172104] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:19.422 [2024-09-28 01:30:15.172125] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
trim_log_mirror 00:17:19.422 [2024-09-28 01:30:15.172143] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:17:19.422 [2024-09-28 01:30:15.172255] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:19.422 [2024-09-28 01:30:15.172266] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:19.422 [2024-09-28 01:30:15.172279] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:19.422 [2024-09-28 01:30:15.172286] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:19.422 [2024-09-28 01:30:15.172294] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:19.422 [2024-09-28 01:30:15.172302] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:19.422 [2024-09-28 01:30:15.172309] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:19.422 [2024-09-28 01:30:15.172315] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:19.422 [2024-09-28 01:30:15.172322] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:19.422 [2024-09-28 01:30:15.172329] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:19.422 [2024-09-28 01:30:15.172335] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:19.422 [2024-09-28 01:30:15.172343] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:19.422 [2024-09-28 01:30:15.172352] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:19.422 [2024-09-28 01:30:15.172361] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:17:19.422 [2024-09-28 01:30:15.172367] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:17:19.422 [2024-09-28 01:30:15.172375] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:17:19.422 [2024-09-28 01:30:15.172382] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:17:19.422 [2024-09-28 01:30:15.172389] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:17:19.422 [2024-09-28 01:30:15.172396] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:17:19.422 [2024-09-28 01:30:15.172403] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:17:19.422 [2024-09-28 01:30:15.172409] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:17:19.422 [2024-09-28 01:30:15.172416] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:17:19.422 [2024-09-28 01:30:15.172423] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:17:19.422 [2024-09-28 01:30:15.172430] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:17:19.422 [2024-09-28 01:30:15.172437] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:17:19.422 [2024-09-28 01:30:15.172444] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:17:19.422 [2024-09-28 01:30:15.172451] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:17:19.422 [2024-09-28 01:30:15.172458] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:19.422 [2024-09-28 01:30:15.172465] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:19.422 [2024-09-28 01:30:15.172474] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:19.422 [2024-09-28 01:30:15.172481] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:19.422 [2024-09-28 01:30:15.172489] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:19.422 [2024-09-28 01:30:15.172495] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:19.422 [2024-09-28 01:30:15.172503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.422 [2024-09-28 01:30:15.172510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:19.422 [2024-09-28 01:30:15.172518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.506 ms 00:17:19.422 [2024-09-28 01:30:15.172525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.422 [2024-09-28 01:30:15.214067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.422 [2024-09-28 01:30:15.214109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:19.422 [2024-09-28 01:30:15.214121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.469 ms 00:17:19.422 [2024-09-28 01:30:15.214132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.422 [2024-09-28 01:30:15.214236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.422 [2024-09-28 01:30:15.214247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:19.422 [2024-09-28 01:30:15.214255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:17:19.422 [2024-09-28 01:30:15.214262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.422 [2024-09-28 01:30:15.244285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.422 [2024-09-28 01:30:15.244330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:19.422 [2024-09-28 01:30:15.244341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.965 ms 00:17:19.422 [2024-09-28 01:30:15.244348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.422 [2024-09-28 01:30:15.244379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.422 [2024-09-28 
01:30:15.244387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:19.422 [2024-09-28 01:30:15.244395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:17:19.422 [2024-09-28 01:30:15.244402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.422 [2024-09-28 01:30:15.244743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.422 [2024-09-28 01:30:15.244758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:19.422 [2024-09-28 01:30:15.244770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.284 ms 00:17:19.422 [2024-09-28 01:30:15.244777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.422 [2024-09-28 01:30:15.244916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.422 [2024-09-28 01:30:15.244926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:19.422 [2024-09-28 01:30:15.244934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:17:19.422 [2024-09-28 01:30:15.244941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.422 [2024-09-28 01:30:15.257144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.422 [2024-09-28 01:30:15.257172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:19.422 [2024-09-28 01:30:15.257181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.184 ms 00:17:19.422 [2024-09-28 01:30:15.257188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.422 [2024-09-28 01:30:15.269379] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:17:19.422 [2024-09-28 01:30:15.269502] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:19.422 [2024-09-28 01:30:15.269516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.422 [2024-09-28 01:30:15.269524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:19.422 [2024-09-28 01:30:15.269533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.218 ms 00:17:19.422 [2024-09-28 01:30:15.269540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.422 [2024-09-28 01:30:15.293363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.422 [2024-09-28 01:30:15.293394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:19.422 [2024-09-28 01:30:15.293405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.792 ms 00:17:19.422 [2024-09-28 01:30:15.293412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.422 [2024-09-28 01:30:15.304724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.422 [2024-09-28 01:30:15.304843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:19.422 [2024-09-28 01:30:15.304859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.276 ms 00:17:19.422 [2024-09-28 01:30:15.304866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.422 [2024-09-28 01:30:15.315826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.422 [2024-09-28 01:30:15.315928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore trim metadata 00:17:19.422 [2024-09-28 01:30:15.315942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.933 ms 00:17:19.422 [2024-09-28 01:30:15.315949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.422 [2024-09-28 01:30:15.316547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.422 [2024-09-28 01:30:15.316561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:19.422 [2024-09-28 01:30:15.316570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.522 ms 00:17:19.422 [2024-09-28 01:30:15.316578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.681 [2024-09-28 01:30:15.370133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.681 [2024-09-28 01:30:15.370176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:19.681 [2024-09-28 01:30:15.370189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.537 ms 00:17:19.681 [2024-09-28 01:30:15.370211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.681 [2024-09-28 01:30:15.380505] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:17:19.681 [2024-09-28 01:30:15.382706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.681 [2024-09-28 01:30:15.382733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:19.681 [2024-09-28 01:30:15.382744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.449 ms 00:17:19.681 [2024-09-28 01:30:15.382756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.681 [2024-09-28 01:30:15.382840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.681 [2024-09-28 01:30:15.382851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:19.681 [2024-09-28 01:30:15.382861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:19.681 [2024-09-28 01:30:15.382869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.681 [2024-09-28 01:30:15.382932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.681 [2024-09-28 01:30:15.382946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:19.681 [2024-09-28 01:30:15.382954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:17:19.681 [2024-09-28 01:30:15.382961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.681 [2024-09-28 01:30:15.382983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.681 [2024-09-28 01:30:15.382991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:19.681 [2024-09-28 01:30:15.382998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:19.681 [2024-09-28 01:30:15.383006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.681 [2024-09-28 01:30:15.383034] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:19.681 [2024-09-28 01:30:15.383044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.681 [2024-09-28 01:30:15.383051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:19.681 [2024-09-28 01:30:15.383061] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:19.681 [2024-09-28 01:30:15.383068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.681 [2024-09-28 01:30:15.406018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.681 [2024-09-28 01:30:15.406048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:19.681 [2024-09-28 01:30:15.406059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.933 ms 00:17:19.681 [2024-09-28 01:30:15.406067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.681 [2024-09-28 01:30:15.406135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.681 [2024-09-28 01:30:15.406145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:19.681 [2024-09-28 01:30:15.406153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:17:19.681 [2024-09-28 01:30:15.406160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.682 [2024-09-28 01:30:15.407366] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 259.117 ms, result 0 00:17:41.332  Copying: 46/1024 [MB] (46 MBps) Copying: 92/1024 [MB] (46 MBps) Copying: 138/1024 [MB] (45 MBps) Copying: 182/1024 [MB] (44 MBps) Copying: 229/1024 [MB] (46 MBps) Copying: 279/1024 [MB] (50 MBps) Copying: 329/1024 [MB] (50 MBps) Copying: 377/1024 [MB] (47 MBps) Copying: 421/1024 [MB] (44 MBps) Copying: 466/1024 [MB] (44 MBps) Copying: 512/1024 [MB] (46 MBps) Copying: 563/1024 [MB] (50 MBps) Copying: 611/1024 [MB] (47 MBps) Copying: 658/1024 [MB] (47 MBps) Copying: 705/1024 [MB] (47 MBps) Copying: 754/1024 [MB] (48 MBps) Copying: 801/1024 [MB] (46 MBps) Copying: 848/1024 [MB] (46 MBps) Copying: 895/1024 [MB] (47 MBps) Copying: 942/1024 [MB] (46 MBps) Copying: 989/1024 [MB] (47 MBps) Copying: 1024/1024 [MB] (average 47 MBps)[2024-09-28 01:30:37.166829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.332 [2024-09-28 01:30:37.166875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:41.332 [2024-09-28 01:30:37.166888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:41.332 [2024-09-28 01:30:37.166900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.332 [2024-09-28 01:30:37.166921] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:41.332 [2024-09-28 01:30:37.169589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.332 [2024-09-28 01:30:37.169631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:41.332 [2024-09-28 01:30:37.169646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.654 ms 00:17:41.332 [2024-09-28 01:30:37.169660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.332 [2024-09-28 01:30:37.170976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.332 [2024-09-28 01:30:37.171013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:41.332 [2024-09-28 01:30:37.171027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.285 ms 00:17:41.332 [2024-09-28 01:30:37.171039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.332 [2024-09-28 01:30:37.184381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:17:41.332 [2024-09-28 01:30:37.184417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:41.332 [2024-09-28 01:30:37.184432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.315 ms 00:17:41.332 [2024-09-28 01:30:37.184445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.332 [2024-09-28 01:30:37.190704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.332 [2024-09-28 01:30:37.190740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:41.332 [2024-09-28 01:30:37.190755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.164 ms 00:17:41.332 [2024-09-28 01:30:37.190767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.332 [2024-09-28 01:30:37.214365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.332 [2024-09-28 01:30:37.214401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:41.332 [2024-09-28 01:30:37.214417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.530 ms 00:17:41.332 [2024-09-28 01:30:37.214428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.332 [2024-09-28 01:30:37.228579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.332 [2024-09-28 01:30:37.228620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:41.332 [2024-09-28 01:30:37.228637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.079 ms 00:17:41.332 [2024-09-28 01:30:37.228649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.332 [2024-09-28 01:30:37.228838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.332 [2024-09-28 01:30:37.228861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:41.332 [2024-09-28 01:30:37.228874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.117 ms 00:17:41.332 [2024-09-28 01:30:37.228887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.332 [2024-09-28 01:30:37.252376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.332 [2024-09-28 01:30:37.252412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:41.332 [2024-09-28 01:30:37.252427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.470 ms 00:17:41.332 [2024-09-28 01:30:37.252439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.592 [2024-09-28 01:30:37.275275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.592 [2024-09-28 01:30:37.275309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:41.592 [2024-09-28 01:30:37.275323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.772 ms 00:17:41.592 [2024-09-28 01:30:37.275335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.592 [2024-09-28 01:30:37.297964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.592 [2024-09-28 01:30:37.297999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:41.592 [2024-09-28 01:30:37.298013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.563 ms 00:17:41.592 [2024-09-28 01:30:37.298024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.592 
[2024-09-28 01:30:37.320557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.592 [2024-09-28 01:30:37.320590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:41.592 [2024-09-28 01:30:37.320605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.435 ms 00:17:41.592 [2024-09-28 01:30:37.320616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.592 [2024-09-28 01:30:37.320680] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:41.592 [2024-09-28 01:30:37.320702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:41.592 [2024-09-28 01:30:37.320718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:41.592 [2024-09-28 01:30:37.320732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:41.592 [2024-09-28 01:30:37.320745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:41.592 [2024-09-28 01:30:37.320765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:41.592 [2024-09-28 01:30:37.320781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:41.592 [2024-09-28 01:30:37.320794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:41.592 [2024-09-28 01:30:37.320815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:41.592 [2024-09-28 01:30:37.320828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:41.592 [2024-09-28 01:30:37.320841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:41.592 [2024-09-28 01:30:37.320852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:41.592 [2024-09-28 01:30:37.320865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:41.592 [2024-09-28 01:30:37.320876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:41.592 [2024-09-28 01:30:37.320887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:41.593 [2024-09-28 01:30:37.320896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:41.593 [2024-09-28 01:30:37.320907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:41.593 [2024-09-28 01:30:37.320919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:41.593 [2024-09-28 01:30:37.320931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:41.593 [2024-09-28 01:30:37.320942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:41.593 [2024-09-28 01:30:37.320952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:41.593 [2024-09-28 01:30:37.320963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 
00:17:41.593 [2024-09-28 01:30:37.320976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:41.593 [2024-09-28 01:30:37.320989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:41.593 [2024-09-28 01:30:37.321000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:41.593 [2024-09-28 01:30:37.321013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:41.593 [2024-09-28 01:30:37.321024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:41.593 [2024-09-28 01:30:37.321037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:41.593 [2024-09-28 01:30:37.321049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:41.593 [2024-09-28 01:30:37.321064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:41.593 [2024-09-28 01:30:37.321082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:41.593 [2024-09-28 01:30:37.321094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:41.593 [2024-09-28 01:30:37.321107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:41.593 [2024-09-28 01:30:37.321120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:41.593 [2024-09-28 01:30:37.321132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:41.593 [2024-09-28 01:30:37.321145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:41.593 [2024-09-28 01:30:37.321158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:41.593 [2024-09-28 01:30:37.321172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:41.593 [2024-09-28 01:30:37.321185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:41.593 [2024-09-28 01:30:37.321218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:41.593 [2024-09-28 01:30:37.321232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:41.593 [2024-09-28 01:30:37.321245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:41.593 [2024-09-28 01:30:37.321258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:41.593 [2024-09-28 01:30:37.321271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:41.593 [2024-09-28 01:30:37.321283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:41.593 [2024-09-28 01:30:37.321296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:41.593 [2024-09-28 01:30:37.321309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 
wr_cnt: 0 state: free 00:17:41.593 [2024-09-28 01:30:37.321322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:41.593 [2024-09-28 01:30:37.321340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:41.593 [2024-09-28 01:30:37.321352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:41.593 [2024-09-28 01:30:37.321365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:41.593 [2024-09-28 01:30:37.321378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:41.593 [2024-09-28 01:30:37.321390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:41.593 [2024-09-28 01:30:37.321403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:41.593 [2024-09-28 01:30:37.321415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:41.593 [2024-09-28 01:30:37.321429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:41.593 [2024-09-28 01:30:37.321443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:41.593 [2024-09-28 01:30:37.321456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:41.593 [2024-09-28 01:30:37.321468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:41.593 [2024-09-28 01:30:37.321485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:41.593 [2024-09-28 01:30:37.321498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:41.593 [2024-09-28 01:30:37.321511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:41.593 [2024-09-28 01:30:37.321522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:41.593 [2024-09-28 01:30:37.321532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:41.593 [2024-09-28 01:30:37.321542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:41.593 [2024-09-28 01:30:37.321553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:41.593 [2024-09-28 01:30:37.321563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:41.593 [2024-09-28 01:30:37.321573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:41.593 [2024-09-28 01:30:37.321584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:41.593 [2024-09-28 01:30:37.321595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:41.593 [2024-09-28 01:30:37.321607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:41.593 [2024-09-28 01:30:37.321619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:41.593 [2024-09-28 01:30:37.321631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:41.593 [2024-09-28 01:30:37.321644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:41.593 [2024-09-28 01:30:37.321657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:41.593 [2024-09-28 01:30:37.321670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:41.593 [2024-09-28 01:30:37.321687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:41.593 [2024-09-28 01:30:37.321700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:41.593 [2024-09-28 01:30:37.321712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:41.593 [2024-09-28 01:30:37.321725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:41.593 [2024-09-28 01:30:37.321737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:41.593 [2024-09-28 01:30:37.321749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:41.593 [2024-09-28 01:30:37.321762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:41.593 [2024-09-28 01:30:37.321775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:41.593 [2024-09-28 01:30:37.321788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:41.593 [2024-09-28 01:30:37.321805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:41.593 [2024-09-28 01:30:37.321817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:41.593 [2024-09-28 01:30:37.321829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:41.593 [2024-09-28 01:30:37.321840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:41.593 [2024-09-28 01:30:37.321851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:41.593 [2024-09-28 01:30:37.321861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:41.593 [2024-09-28 01:30:37.321871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:41.593 [2024-09-28 01:30:37.321882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:41.593 [2024-09-28 01:30:37.321895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:41.593 [2024-09-28 01:30:37.321908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:41.593 [2024-09-28 01:30:37.321920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:41.593 [2024-09-28 01:30:37.321931] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:41.593 [2024-09-28 01:30:37.321944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:41.593 [2024-09-28 01:30:37.321957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:41.593 [2024-09-28 01:30:37.321970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:41.593 [2024-09-28 01:30:37.321987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:41.593 [2024-09-28 01:30:37.322011] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:41.593 [2024-09-28 01:30:37.322031] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 80515e57-70d8-4c65-ba65-e59d4cf5e2ab 00:17:41.593 [2024-09-28 01:30:37.322044] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:41.593 [2024-09-28 01:30:37.322056] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:41.594 [2024-09-28 01:30:37.322067] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:41.594 [2024-09-28 01:30:37.322079] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:41.594 [2024-09-28 01:30:37.322091] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:41.594 [2024-09-28 01:30:37.322109] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:41.594 [2024-09-28 01:30:37.322121] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:41.594 [2024-09-28 01:30:37.322133] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:41.594 [2024-09-28 01:30:37.322144] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:41.594 [2024-09-28 01:30:37.322157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.594 [2024-09-28 01:30:37.322170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:41.594 [2024-09-28 01:30:37.322202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.478 ms 00:17:41.594 [2024-09-28 01:30:37.322220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.594 [2024-09-28 01:30:37.334446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.594 [2024-09-28 01:30:37.334480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:41.594 [2024-09-28 01:30:37.334495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.201 ms 00:17:41.594 [2024-09-28 01:30:37.334512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.594 [2024-09-28 01:30:37.334878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.594 [2024-09-28 01:30:37.334905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:41.594 [2024-09-28 01:30:37.334918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.327 ms 00:17:41.594 [2024-09-28 01:30:37.334931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.594 [2024-09-28 01:30:37.362481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:41.594 [2024-09-28 01:30:37.362522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:41.594 [2024-09-28 01:30:37.362537] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:41.594 [2024-09-28 01:30:37.362555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.594 [2024-09-28 01:30:37.362627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:41.594 [2024-09-28 01:30:37.362640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:41.594 [2024-09-28 01:30:37.362653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:41.594 [2024-09-28 01:30:37.362664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.594 [2024-09-28 01:30:37.362737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:41.594 [2024-09-28 01:30:37.362763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:41.594 [2024-09-28 01:30:37.362776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:41.594 [2024-09-28 01:30:37.362787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.594 [2024-09-28 01:30:37.362814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:41.594 [2024-09-28 01:30:37.362826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:41.594 [2024-09-28 01:30:37.362840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:41.594 [2024-09-28 01:30:37.362852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.594 [2024-09-28 01:30:37.437678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:41.594 [2024-09-28 01:30:37.437728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:41.594 [2024-09-28 01:30:37.437744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:41.594 [2024-09-28 01:30:37.437760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.594 [2024-09-28 01:30:37.499520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:41.594 [2024-09-28 01:30:37.499574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:41.594 [2024-09-28 01:30:37.499590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:41.594 [2024-09-28 01:30:37.499601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.594 [2024-09-28 01:30:37.499687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:41.594 [2024-09-28 01:30:37.499700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:41.594 [2024-09-28 01:30:37.499712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:41.594 [2024-09-28 01:30:37.499723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.594 [2024-09-28 01:30:37.499772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:41.594 [2024-09-28 01:30:37.499805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:41.594 [2024-09-28 01:30:37.499818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:41.594 [2024-09-28 01:30:37.499830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.594 [2024-09-28 01:30:37.500061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:41.594 [2024-09-28 01:30:37.500085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize memory pools 00:17:41.594 [2024-09-28 01:30:37.500099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:41.594 [2024-09-28 01:30:37.500110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.594 [2024-09-28 01:30:37.500164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:41.594 [2024-09-28 01:30:37.500183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:41.594 [2024-09-28 01:30:37.500218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:41.594 [2024-09-28 01:30:37.500235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.594 [2024-09-28 01:30:37.500281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:41.594 [2024-09-28 01:30:37.500301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:41.594 [2024-09-28 01:30:37.500313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:41.594 [2024-09-28 01:30:37.500325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.594 [2024-09-28 01:30:37.500379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:41.594 [2024-09-28 01:30:37.500404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:41.594 [2024-09-28 01:30:37.500416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:41.594 [2024-09-28 01:30:37.500427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.594 [2024-09-28 01:30:37.500572] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 333.698 ms, result 0 00:17:44.126 00:17:44.126 00:17:44.126 01:30:39 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:17:44.126 [2024-09-28 01:30:39.522585] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:17:44.126 [2024-09-28 01:30:39.522872] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75050 ] 00:17:44.126 [2024-09-28 01:30:39.672869] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:44.126 [2024-09-28 01:30:39.849427] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:44.385 [2024-09-28 01:30:40.097480] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:44.385 [2024-09-28 01:30:40.097545] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:44.385 [2024-09-28 01:30:40.250389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.385 [2024-09-28 01:30:40.250436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:44.385 [2024-09-28 01:30:40.250448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:44.385 [2024-09-28 01:30:40.250461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.385 [2024-09-28 01:30:40.250502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.385 [2024-09-28 01:30:40.250512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:44.385 [2024-09-28 01:30:40.250520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:17:44.385 [2024-09-28 01:30:40.250527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.385 [2024-09-28 01:30:40.250542] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:44.385 [2024-09-28 01:30:40.251281] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:44.385 [2024-09-28 01:30:40.251306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.385 [2024-09-28 01:30:40.251314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:44.385 [2024-09-28 01:30:40.251323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.767 ms 00:17:44.385 [2024-09-28 01:30:40.251330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.385 [2024-09-28 01:30:40.252334] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:44.385 [2024-09-28 01:30:40.264618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.385 [2024-09-28 01:30:40.264651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:44.385 [2024-09-28 01:30:40.264662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.286 ms 00:17:44.385 [2024-09-28 01:30:40.264669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.385 [2024-09-28 01:30:40.264716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.385 [2024-09-28 01:30:40.264725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:44.385 [2024-09-28 01:30:40.264733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:17:44.385 [2024-09-28 01:30:40.264741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.385 [2024-09-28 01:30:40.269281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:17:44.385 [2024-09-28 01:30:40.269310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:44.385 [2024-09-28 01:30:40.269320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.486 ms 00:17:44.385 [2024-09-28 01:30:40.269327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.385 [2024-09-28 01:30:40.269392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.385 [2024-09-28 01:30:40.269401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:44.385 [2024-09-28 01:30:40.269409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:17:44.385 [2024-09-28 01:30:40.269416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.385 [2024-09-28 01:30:40.269463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.385 [2024-09-28 01:30:40.269479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:44.385 [2024-09-28 01:30:40.269488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:44.385 [2024-09-28 01:30:40.269495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.385 [2024-09-28 01:30:40.269515] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:44.385 [2024-09-28 01:30:40.272813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.385 [2024-09-28 01:30:40.272840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:44.385 [2024-09-28 01:30:40.272849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.303 ms 00:17:44.385 [2024-09-28 01:30:40.272856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.385 [2024-09-28 01:30:40.272882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.385 [2024-09-28 01:30:40.272892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:44.385 [2024-09-28 01:30:40.272900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:44.385 [2024-09-28 01:30:40.272907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.385 [2024-09-28 01:30:40.272927] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:44.385 [2024-09-28 01:30:40.272945] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:44.385 [2024-09-28 01:30:40.272989] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:44.385 [2024-09-28 01:30:40.273005] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:44.385 [2024-09-28 01:30:40.273112] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:44.385 [2024-09-28 01:30:40.273131] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:44.385 [2024-09-28 01:30:40.273142] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:44.385 [2024-09-28 01:30:40.273155] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:44.385 [2024-09-28 01:30:40.273164] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:44.385 [2024-09-28 01:30:40.273172] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:17:44.385 [2024-09-28 01:30:40.273179] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:44.385 [2024-09-28 01:30:40.273186] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:44.385 [2024-09-28 01:30:40.273205] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:44.385 [2024-09-28 01:30:40.273218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.385 [2024-09-28 01:30:40.273229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:44.385 [2024-09-28 01:30:40.273237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.292 ms 00:17:44.385 [2024-09-28 01:30:40.273244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.385 [2024-09-28 01:30:40.273332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.385 [2024-09-28 01:30:40.273344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:44.385 [2024-09-28 01:30:40.273351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:17:44.385 [2024-09-28 01:30:40.273359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.385 [2024-09-28 01:30:40.273478] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:44.385 [2024-09-28 01:30:40.273496] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:44.385 [2024-09-28 01:30:40.273506] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:44.386 [2024-09-28 01:30:40.273514] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:44.386 [2024-09-28 01:30:40.273521] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:44.386 [2024-09-28 01:30:40.273529] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:44.386 [2024-09-28 01:30:40.273536] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:17:44.386 [2024-09-28 01:30:40.273543] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:44.386 [2024-09-28 01:30:40.273549] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:17:44.386 [2024-09-28 01:30:40.273556] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:44.386 [2024-09-28 01:30:40.273562] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:44.386 [2024-09-28 01:30:40.273569] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:17:44.386 [2024-09-28 01:30:40.273575] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:44.386 [2024-09-28 01:30:40.273590] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:44.386 [2024-09-28 01:30:40.273602] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:17:44.386 [2024-09-28 01:30:40.273612] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:44.386 [2024-09-28 01:30:40.273619] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:44.386 [2024-09-28 01:30:40.273626] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:17:44.386 [2024-09-28 01:30:40.273632] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:44.386 [2024-09-28 01:30:40.273639] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:44.386 [2024-09-28 01:30:40.273645] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:17:44.386 [2024-09-28 01:30:40.273651] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:44.386 [2024-09-28 01:30:40.273658] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:44.386 [2024-09-28 01:30:40.273664] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:17:44.386 [2024-09-28 01:30:40.273670] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:44.386 [2024-09-28 01:30:40.273676] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:44.386 [2024-09-28 01:30:40.273687] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:17:44.386 [2024-09-28 01:30:40.273698] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:44.386 [2024-09-28 01:30:40.273709] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:44.386 [2024-09-28 01:30:40.273719] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:17:44.386 [2024-09-28 01:30:40.273725] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:44.386 [2024-09-28 01:30:40.273732] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:44.386 [2024-09-28 01:30:40.273739] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:17:44.386 [2024-09-28 01:30:40.273745] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:44.386 [2024-09-28 01:30:40.273752] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:44.386 [2024-09-28 01:30:40.273758] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:17:44.386 [2024-09-28 01:30:40.273765] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:44.386 [2024-09-28 01:30:40.273771] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:44.386 [2024-09-28 01:30:40.273778] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:17:44.386 [2024-09-28 01:30:40.273784] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:44.386 [2024-09-28 01:30:40.273791] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:44.386 [2024-09-28 01:30:40.273797] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:17:44.386 [2024-09-28 01:30:40.273804] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:44.386 [2024-09-28 01:30:40.273810] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:44.386 [2024-09-28 01:30:40.273817] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:44.386 [2024-09-28 01:30:40.273826] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:44.386 [2024-09-28 01:30:40.273836] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:44.386 [2024-09-28 01:30:40.273849] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:44.386 [2024-09-28 01:30:40.273861] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:44.386 [2024-09-28 01:30:40.273872] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:44.386 
[2024-09-28 01:30:40.273879] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:44.386 [2024-09-28 01:30:40.273886] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:44.386 [2024-09-28 01:30:40.273892] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:44.386 [2024-09-28 01:30:40.273900] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:44.386 [2024-09-28 01:30:40.273908] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:44.386 [2024-09-28 01:30:40.273917] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:17:44.386 [2024-09-28 01:30:40.273924] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:17:44.386 [2024-09-28 01:30:40.273930] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:17:44.386 [2024-09-28 01:30:40.273937] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:17:44.386 [2024-09-28 01:30:40.273946] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:17:44.386 [2024-09-28 01:30:40.273958] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:17:44.386 [2024-09-28 01:30:40.273967] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:17:44.386 [2024-09-28 01:30:40.273974] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:17:44.386 [2024-09-28 01:30:40.273981] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:17:44.386 [2024-09-28 01:30:40.273988] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:17:44.386 [2024-09-28 01:30:40.273995] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:17:44.386 [2024-09-28 01:30:40.274002] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:17:44.386 [2024-09-28 01:30:40.274009] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:17:44.386 [2024-09-28 01:30:40.274016] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:17:44.386 [2024-09-28 01:30:40.274023] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:44.386 [2024-09-28 01:30:40.274032] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:44.386 [2024-09-28 01:30:40.274040] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:17:44.386 [2024-09-28 01:30:40.274047] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:44.386 [2024-09-28 01:30:40.274054] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:44.386 [2024-09-28 01:30:40.274061] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:44.386 [2024-09-28 01:30:40.274069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.386 [2024-09-28 01:30:40.274076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:44.386 [2024-09-28 01:30:40.274084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.665 ms 00:17:44.386 [2024-09-28 01:30:40.274095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.386 [2024-09-28 01:30:40.308613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.386 [2024-09-28 01:30:40.308664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:44.386 [2024-09-28 01:30:40.308679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.457 ms 00:17:44.386 [2024-09-28 01:30:40.308690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.386 [2024-09-28 01:30:40.308815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.386 [2024-09-28 01:30:40.308827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:44.386 [2024-09-28 01:30:40.308838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:17:44.386 [2024-09-28 01:30:40.308847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.645 [2024-09-28 01:30:40.339115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.645 [2024-09-28 01:30:40.339151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:44.645 [2024-09-28 01:30:40.339165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.198 ms 00:17:44.645 [2024-09-28 01:30:40.339172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.645 [2024-09-28 01:30:40.339215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.645 [2024-09-28 01:30:40.339224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:44.645 [2024-09-28 01:30:40.339233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:44.645 [2024-09-28 01:30:40.339240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.645 [2024-09-28 01:30:40.339591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.645 [2024-09-28 01:30:40.339618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:44.645 [2024-09-28 01:30:40.339628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.304 ms 00:17:44.645 [2024-09-28 01:30:40.339639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.645 [2024-09-28 01:30:40.339758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.645 [2024-09-28 01:30:40.339766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:44.645 [2024-09-28 01:30:40.339774] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:17:44.645 [2024-09-28 01:30:40.339781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.645 [2024-09-28 01:30:40.351936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.645 [2024-09-28 01:30:40.351967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:44.646 [2024-09-28 01:30:40.351977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.136 ms 00:17:44.646 [2024-09-28 01:30:40.351985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.646 [2024-09-28 01:30:40.364189] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:17:44.646 [2024-09-28 01:30:40.364229] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:44.646 [2024-09-28 01:30:40.364241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.646 [2024-09-28 01:30:40.364249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:44.646 [2024-09-28 01:30:40.364257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.143 ms 00:17:44.646 [2024-09-28 01:30:40.364264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.646 [2024-09-28 01:30:40.388169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.646 [2024-09-28 01:30:40.388210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:44.646 [2024-09-28 01:30:40.388221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.868 ms 00:17:44.646 [2024-09-28 01:30:40.388228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.646 [2024-09-28 01:30:40.399834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.646 [2024-09-28 01:30:40.399864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:44.646 [2024-09-28 01:30:40.399874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.569 ms 00:17:44.646 [2024-09-28 01:30:40.399881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.646 [2024-09-28 01:30:40.411402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.646 [2024-09-28 01:30:40.411432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:44.646 [2024-09-28 01:30:40.411441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.483 ms 00:17:44.646 [2024-09-28 01:30:40.411448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.646 [2024-09-28 01:30:40.412057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.646 [2024-09-28 01:30:40.412084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:44.646 [2024-09-28 01:30:40.412093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.532 ms 00:17:44.646 [2024-09-28 01:30:40.412101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.646 [2024-09-28 01:30:40.465501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.646 [2024-09-28 01:30:40.465551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:44.646 [2024-09-28 01:30:40.465563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 53.383 ms 00:17:44.646 [2024-09-28 01:30:40.465570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.646 [2024-09-28 01:30:40.475835] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:17:44.646 [2024-09-28 01:30:40.478190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.646 [2024-09-28 01:30:40.478228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:44.646 [2024-09-28 01:30:40.478240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.569 ms 00:17:44.646 [2024-09-28 01:30:40.478253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.646 [2024-09-28 01:30:40.478342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.646 [2024-09-28 01:30:40.478356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:44.646 [2024-09-28 01:30:40.478369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:44.646 [2024-09-28 01:30:40.478381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.646 [2024-09-28 01:30:40.478458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.646 [2024-09-28 01:30:40.478470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:44.646 [2024-09-28 01:30:40.478479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:17:44.646 [2024-09-28 01:30:40.478486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.646 [2024-09-28 01:30:40.478508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.646 [2024-09-28 01:30:40.478516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:44.646 [2024-09-28 01:30:40.478524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:44.646 [2024-09-28 01:30:40.478531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.646 [2024-09-28 01:30:40.478560] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:44.646 [2024-09-28 01:30:40.478570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.646 [2024-09-28 01:30:40.478577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:44.646 [2024-09-28 01:30:40.478587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:44.646 [2024-09-28 01:30:40.478594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.646 [2024-09-28 01:30:40.502127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.646 [2024-09-28 01:30:40.502160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:44.646 [2024-09-28 01:30:40.502170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.517 ms 00:17:44.646 [2024-09-28 01:30:40.502178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.646 [2024-09-28 01:30:40.502252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.646 [2024-09-28 01:30:40.502263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:44.646 [2024-09-28 01:30:40.502271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:17:44.646 [2024-09-28 01:30:40.502278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:17:44.646 [2024-09-28 01:30:40.503128] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 252.342 ms, result 0 00:18:06.303  Copying: 1024/1024 [MB] (average 48 MBps)[2024-09-28 01:31:02.164260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.303 [2024-09-28 01:31:02.164305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:06.303 [2024-09-28 01:31:02.164319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:06.303 [2024-09-28 01:31:02.164332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.303 [2024-09-28 01:31:02.164353] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:06.303 [2024-09-28 01:31:02.167877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.303 [2024-09-28 01:31:02.167920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:06.303 [2024-09-28 01:31:02.167935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.508 ms 00:18:06.303 [2024-09-28 01:31:02.167947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.303 [2024-09-28 01:31:02.168289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.303 [2024-09-28 01:31:02.168317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:06.303 [2024-09-28 01:31:02.168330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.313 ms 00:18:06.303 [2024-09-28 01:31:02.168342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.303 [2024-09-28 01:31:02.175541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.303 [2024-09-28 01:31:02.175582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:06.303 [2024-09-28 01:31:02.175597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.173 ms 00:18:06.303 [2024-09-28 01:31:02.175608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.303 [2024-09-28 01:31:02.182296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.303 [2024-09-28 01:31:02.182323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:06.303 [2024-09-28 01:31:02.182332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.662 ms 00:18:06.303 [2024-09-28 01:31:02.182340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.303 [2024-09-28 01:31:02.205553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.303 [2024-09-28 01:31:02.205586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:06.303 
[2024-09-28 01:31:02.205597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.154 ms 00:18:06.303 [2024-09-28 01:31:02.205604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.303 [2024-09-28 01:31:02.218813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.303 [2024-09-28 01:31:02.218849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:06.303 [2024-09-28 01:31:02.218861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.176 ms 00:18:06.303 [2024-09-28 01:31:02.218868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.303 [2024-09-28 01:31:02.218990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.303 [2024-09-28 01:31:02.219000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:06.303 [2024-09-28 01:31:02.219009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:18:06.303 [2024-09-28 01:31:02.219016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.563 [2024-09-28 01:31:02.241654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.563 [2024-09-28 01:31:02.241687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:06.563 [2024-09-28 01:31:02.241697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.624 ms 00:18:06.563 [2024-09-28 01:31:02.241705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.563 [2024-09-28 01:31:02.264022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.563 [2024-09-28 01:31:02.264058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:06.563 [2024-09-28 01:31:02.264068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.285 ms 00:18:06.563 [2024-09-28 01:31:02.264076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.563 [2024-09-28 01:31:02.286695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.563 [2024-09-28 01:31:02.286730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:06.563 [2024-09-28 01:31:02.286741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.586 ms 00:18:06.563 [2024-09-28 01:31:02.286748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.563 [2024-09-28 01:31:02.309147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.563 [2024-09-28 01:31:02.309181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:06.563 [2024-09-28 01:31:02.309199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.344 ms 00:18:06.563 [2024-09-28 01:31:02.309207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.563 [2024-09-28 01:31:02.309238] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:06.563 [2024-09-28 01:31:02.309252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:06.563 [2024-09-28 01:31:02.309261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:06.563 [2024-09-28 01:31:02.309270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:06.563 [2024-09-28 01:31:02.309277] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:06.563 [2024-09-28 01:31:02.309285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:06.563 [2024-09-28 01:31:02.309293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:06.563 [2024-09-28 01:31:02.309300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:06.563 [2024-09-28 01:31:02.309308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:06.563 [2024-09-28 01:31:02.309315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:06.563 [2024-09-28 01:31:02.309323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:06.563 [2024-09-28 01:31:02.309331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:06.563 [2024-09-28 01:31:02.309338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:06.563 [2024-09-28 01:31:02.309346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:06.563 [2024-09-28 01:31:02.309353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:06.563 [2024-09-28 01:31:02.309361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:06.563 [2024-09-28 01:31:02.309369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:06.563 [2024-09-28 01:31:02.309376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:06.563 [2024-09-28 01:31:02.309384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:06.563 [2024-09-28 01:31:02.309391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:06.563 [2024-09-28 01:31:02.309399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:06.563 [2024-09-28 01:31:02.309406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:06.563 [2024-09-28 01:31:02.309413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:06.564 [2024-09-28 01:31:02.309420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:06.564 [2024-09-28 01:31:02.309428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:06.564 [2024-09-28 01:31:02.309435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:06.564 [2024-09-28 01:31:02.309442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:06.564 [2024-09-28 01:31:02.309449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:06.564 [2024-09-28 01:31:02.309457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:06.564 [2024-09-28 01:31:02.309464] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:06.564 [2024-09-28 01:31:02.309472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:06.564 [2024-09-28 01:31:02.309479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:06.564 [2024-09-28 01:31:02.309487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:06.564 [2024-09-28 01:31:02.309494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:06.564 [2024-09-28 01:31:02.309501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:06.564 [2024-09-28 01:31:02.309508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:06.564 [2024-09-28 01:31:02.309515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:06.564 [2024-09-28 01:31:02.309523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:06.564 [2024-09-28 01:31:02.309530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:06.564 [2024-09-28 01:31:02.309537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:06.564 [2024-09-28 01:31:02.309545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:06.564 [2024-09-28 01:31:02.309552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:06.564 [2024-09-28 01:31:02.309559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:06.564 [2024-09-28 01:31:02.309566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:06.564 [2024-09-28 01:31:02.309573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:06.564 [2024-09-28 01:31:02.309580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:06.564 [2024-09-28 01:31:02.309587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:06.564 [2024-09-28 01:31:02.309594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:06.564 [2024-09-28 01:31:02.309601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:06.564 [2024-09-28 01:31:02.309608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:06.564 [2024-09-28 01:31:02.309615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:06.564 [2024-09-28 01:31:02.309622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:06.564 [2024-09-28 01:31:02.309630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:06.564 [2024-09-28 01:31:02.309637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:06.564 [2024-09-28 
01:31:02.309646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:06.564 [2024-09-28 01:31:02.309653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:06.564 [2024-09-28 01:31:02.309666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:06.564 [2024-09-28 01:31:02.309674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:06.564 [2024-09-28 01:31:02.309681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:06.564 [2024-09-28 01:31:02.309688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:06.564 [2024-09-28 01:31:02.309696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:06.564 [2024-09-28 01:31:02.309704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:06.564 [2024-09-28 01:31:02.309712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:06.564 [2024-09-28 01:31:02.309720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:06.564 [2024-09-28 01:31:02.309728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:06.564 [2024-09-28 01:31:02.309735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:06.564 [2024-09-28 01:31:02.309743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:06.564 [2024-09-28 01:31:02.309751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:06.564 [2024-09-28 01:31:02.309758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:06.564 [2024-09-28 01:31:02.309766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:06.564 [2024-09-28 01:31:02.309773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:06.564 [2024-09-28 01:31:02.309781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:06.564 [2024-09-28 01:31:02.309788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:06.564 [2024-09-28 01:31:02.309796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:06.564 [2024-09-28 01:31:02.309803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:06.564 [2024-09-28 01:31:02.309810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:06.564 [2024-09-28 01:31:02.309818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:06.564 [2024-09-28 01:31:02.309825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:06.564 [2024-09-28 01:31:02.309833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 
00:18:06.564 [2024-09-28 01:31:02.309840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:06.564 [2024-09-28 01:31:02.309847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:06.564 [2024-09-28 01:31:02.309855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:06.564 [2024-09-28 01:31:02.309862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:06.564 [2024-09-28 01:31:02.309869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:06.564 [2024-09-28 01:31:02.309876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:06.564 [2024-09-28 01:31:02.309884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:06.564 [2024-09-28 01:31:02.309891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:06.564 [2024-09-28 01:31:02.309898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:06.564 [2024-09-28 01:31:02.309906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:06.564 [2024-09-28 01:31:02.309914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:06.564 [2024-09-28 01:31:02.309921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:06.564 [2024-09-28 01:31:02.309929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:06.564 [2024-09-28 01:31:02.309936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:06.564 [2024-09-28 01:31:02.309944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:06.564 [2024-09-28 01:31:02.309952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:06.564 [2024-09-28 01:31:02.309960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:06.564 [2024-09-28 01:31:02.309967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:06.564 [2024-09-28 01:31:02.309975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:06.564 [2024-09-28 01:31:02.309982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:06.564 [2024-09-28 01:31:02.309989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:06.564 [2024-09-28 01:31:02.309997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:06.564 [2024-09-28 01:31:02.310012] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:06.564 [2024-09-28 01:31:02.310019] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 80515e57-70d8-4c65-ba65-e59d4cf5e2ab 00:18:06.564 [2024-09-28 01:31:02.310027] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:06.564 [2024-09-28 
01:31:02.310034] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:06.564 [2024-09-28 01:31:02.310040] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:06.564 [2024-09-28 01:31:02.310047] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:06.564 [2024-09-28 01:31:02.310054] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:06.564 [2024-09-28 01:31:02.310066] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:06.564 [2024-09-28 01:31:02.310073] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:06.564 [2024-09-28 01:31:02.310079] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:06.564 [2024-09-28 01:31:02.310085] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:06.564 [2024-09-28 01:31:02.310092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.564 [2024-09-28 01:31:02.310106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:06.564 [2024-09-28 01:31:02.310114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.856 ms 00:18:06.565 [2024-09-28 01:31:02.310121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.565 [2024-09-28 01:31:02.322261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.565 [2024-09-28 01:31:02.322296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:06.565 [2024-09-28 01:31:02.322306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.124 ms 00:18:06.565 [2024-09-28 01:31:02.322317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.565 [2024-09-28 01:31:02.322652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.565 [2024-09-28 01:31:02.322666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:06.565 [2024-09-28 01:31:02.322674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.316 ms 00:18:06.565 [2024-09-28 01:31:02.322681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.565 [2024-09-28 01:31:02.350372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:06.565 [2024-09-28 01:31:02.350411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:06.565 [2024-09-28 01:31:02.350421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:06.565 [2024-09-28 01:31:02.350433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.565 [2024-09-28 01:31:02.350492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:06.565 [2024-09-28 01:31:02.350500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:06.565 [2024-09-28 01:31:02.350507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:06.565 [2024-09-28 01:31:02.350515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.565 [2024-09-28 01:31:02.350565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:06.565 [2024-09-28 01:31:02.350575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:06.565 [2024-09-28 01:31:02.350583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:06.565 [2024-09-28 01:31:02.350590] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:18:06.565 [2024-09-28 01:31:02.350607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:06.565 [2024-09-28 01:31:02.350615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:06.565 [2024-09-28 01:31:02.350623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:06.565 [2024-09-28 01:31:02.350630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.565 [2024-09-28 01:31:02.426708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:06.565 [2024-09-28 01:31:02.426755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:06.565 [2024-09-28 01:31:02.426766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:06.565 [2024-09-28 01:31:02.426779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.565 [2024-09-28 01:31:02.489593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:06.565 [2024-09-28 01:31:02.489640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:06.565 [2024-09-28 01:31:02.489650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:06.565 [2024-09-28 01:31:02.489657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.565 [2024-09-28 01:31:02.489717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:06.565 [2024-09-28 01:31:02.489725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:06.565 [2024-09-28 01:31:02.489733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:06.565 [2024-09-28 01:31:02.489741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.565 [2024-09-28 01:31:02.489777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:06.565 [2024-09-28 01:31:02.489785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:06.565 [2024-09-28 01:31:02.489793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:06.565 [2024-09-28 01:31:02.489800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.565 [2024-09-28 01:31:02.489887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:06.565 [2024-09-28 01:31:02.489896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:06.565 [2024-09-28 01:31:02.489904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:06.565 [2024-09-28 01:31:02.489911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.565 [2024-09-28 01:31:02.489936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:06.565 [2024-09-28 01:31:02.489947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:06.565 [2024-09-28 01:31:02.489955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:06.565 [2024-09-28 01:31:02.489962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.565 [2024-09-28 01:31:02.489994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:06.565 [2024-09-28 01:31:02.490002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:06.565 [2024-09-28 01:31:02.490010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:18:06.565 [2024-09-28 01:31:02.490018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.565 [2024-09-28 01:31:02.490057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:06.565 [2024-09-28 01:31:02.490066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:06.565 [2024-09-28 01:31:02.490074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:06.565 [2024-09-28 01:31:02.490081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.565 [2024-09-28 01:31:02.490187] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 325.902 ms, result 0 00:18:07.499 00:18:07.499 00:18:07.499 01:31:03 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:18:09.397 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:18:09.397 01:31:05 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:18:09.397 [2024-09-28 01:31:05.301886] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:18:09.397 [2024-09-28 01:31:05.302007] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75325 ] 00:18:09.654 [2024-09-28 01:31:05.442257] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:09.911 [2024-09-28 01:31:05.622237] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:10.169 [2024-09-28 01:31:05.885844] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:10.169 [2024-09-28 01:31:05.885910] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:10.169 [2024-09-28 01:31:06.038950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.169 [2024-09-28 01:31:06.038996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:10.169 [2024-09-28 01:31:06.039006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:10.169 [2024-09-28 01:31:06.039015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.169 [2024-09-28 01:31:06.039049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.169 [2024-09-28 01:31:06.039057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:10.169 [2024-09-28 01:31:06.039063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:18:10.169 [2024-09-28 01:31:06.039069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.169 [2024-09-28 01:31:06.039081] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:10.169 [2024-09-28 01:31:06.039614] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:10.169 [2024-09-28 01:31:06.039635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.169 [2024-09-28 01:31:06.039642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:10.169 [2024-09-28 01:31:06.039648] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.557 ms 00:18:10.169 [2024-09-28 01:31:06.039654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.169 [2024-09-28 01:31:06.040586] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:10.169 [2024-09-28 01:31:06.050172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.169 [2024-09-28 01:31:06.050214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:10.169 [2024-09-28 01:31:06.050228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.587 ms 00:18:10.169 [2024-09-28 01:31:06.050235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.169 [2024-09-28 01:31:06.050280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.169 [2024-09-28 01:31:06.050288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:10.169 [2024-09-28 01:31:06.050294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:18:10.169 [2024-09-28 01:31:06.050300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.169 [2024-09-28 01:31:06.054531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.169 [2024-09-28 01:31:06.054559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:10.169 [2024-09-28 01:31:06.054567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.185 ms 00:18:10.169 [2024-09-28 01:31:06.054573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.169 [2024-09-28 01:31:06.054626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.169 [2024-09-28 01:31:06.054633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:10.169 [2024-09-28 01:31:06.054640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:18:10.169 [2024-09-28 01:31:06.054645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.169 [2024-09-28 01:31:06.054681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.169 [2024-09-28 01:31:06.054688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:10.169 [2024-09-28 01:31:06.054694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:10.169 [2024-09-28 01:31:06.054700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.170 [2024-09-28 01:31:06.054714] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:10.170 [2024-09-28 01:31:06.057307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.170 [2024-09-28 01:31:06.057329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:10.170 [2024-09-28 01:31:06.057337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.597 ms 00:18:10.170 [2024-09-28 01:31:06.057343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.170 [2024-09-28 01:31:06.057368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.170 [2024-09-28 01:31:06.057375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:10.170 [2024-09-28 01:31:06.057381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:10.170 [2024-09-28 01:31:06.057387] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.170 [2024-09-28 01:31:06.057404] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:10.170 [2024-09-28 01:31:06.057418] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:10.170 [2024-09-28 01:31:06.057445] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:10.170 [2024-09-28 01:31:06.057457] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:18:10.170 [2024-09-28 01:31:06.057536] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:10.170 [2024-09-28 01:31:06.057544] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:10.170 [2024-09-28 01:31:06.057552] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:10.170 [2024-09-28 01:31:06.057561] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:10.170 [2024-09-28 01:31:06.057568] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:10.170 [2024-09-28 01:31:06.057574] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:10.170 [2024-09-28 01:31:06.057579] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:10.170 [2024-09-28 01:31:06.057585] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:10.170 [2024-09-28 01:31:06.057592] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:10.170 [2024-09-28 01:31:06.057598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.170 [2024-09-28 01:31:06.057603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:10.170 [2024-09-28 01:31:06.057609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.196 ms 00:18:10.170 [2024-09-28 01:31:06.057615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.170 [2024-09-28 01:31:06.057677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.170 [2024-09-28 01:31:06.057685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:10.170 [2024-09-28 01:31:06.057691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:18:10.170 [2024-09-28 01:31:06.057696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.170 [2024-09-28 01:31:06.057772] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:10.170 [2024-09-28 01:31:06.057788] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:10.170 [2024-09-28 01:31:06.057794] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:10.170 [2024-09-28 01:31:06.057800] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:10.170 [2024-09-28 01:31:06.057805] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:10.170 [2024-09-28 01:31:06.057811] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:10.170 [2024-09-28 01:31:06.057816] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 
MiB 00:18:10.170 [2024-09-28 01:31:06.057822] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:10.170 [2024-09-28 01:31:06.057827] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:10.170 [2024-09-28 01:31:06.057834] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:10.170 [2024-09-28 01:31:06.057839] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:10.170 [2024-09-28 01:31:06.057844] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:10.170 [2024-09-28 01:31:06.057849] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:10.170 [2024-09-28 01:31:06.057859] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:10.170 [2024-09-28 01:31:06.057864] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:10.170 [2024-09-28 01:31:06.057869] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:10.170 [2024-09-28 01:31:06.057874] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:10.170 [2024-09-28 01:31:06.057879] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:10.170 [2024-09-28 01:31:06.057883] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:10.170 [2024-09-28 01:31:06.057889] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:10.170 [2024-09-28 01:31:06.057894] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:10.170 [2024-09-28 01:31:06.057898] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:10.170 [2024-09-28 01:31:06.057903] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:10.170 [2024-09-28 01:31:06.057908] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:10.170 [2024-09-28 01:31:06.057913] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:10.170 [2024-09-28 01:31:06.057917] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:10.170 [2024-09-28 01:31:06.057922] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:10.170 [2024-09-28 01:31:06.057927] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:10.170 [2024-09-28 01:31:06.057932] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:10.170 [2024-09-28 01:31:06.057936] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:10.170 [2024-09-28 01:31:06.057941] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:10.170 [2024-09-28 01:31:06.057946] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:10.170 [2024-09-28 01:31:06.057951] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:10.170 [2024-09-28 01:31:06.057956] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:10.170 [2024-09-28 01:31:06.057960] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:10.170 [2024-09-28 01:31:06.057965] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:10.170 [2024-09-28 01:31:06.057970] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:10.170 [2024-09-28 01:31:06.057975] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:10.170 [2024-09-28 01:31:06.057980] ftl_layout.c: 131:dump_region: 
*NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:18:10.170 [2024-09-28 01:31:06.057984] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:10.170 [2024-09-28 01:31:06.057989] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:10.170 [2024-09-28 01:31:06.057995] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:10.170 [2024-09-28 01:31:06.058000] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:10.170 [2024-09-28 01:31:06.058004] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:10.170 [2024-09-28 01:31:06.058010] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:10.170 [2024-09-28 01:31:06.058017] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:10.170 [2024-09-28 01:31:06.058022] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:10.170 [2024-09-28 01:31:06.058028] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:10.170 [2024-09-28 01:31:06.058033] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:10.170 [2024-09-28 01:31:06.058038] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:10.170 [2024-09-28 01:31:06.058043] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:10.170 [2024-09-28 01:31:06.058048] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:10.170 [2024-09-28 01:31:06.058053] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:10.170 [2024-09-28 01:31:06.058059] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:10.170 [2024-09-28 01:31:06.058066] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:10.170 [2024-09-28 01:31:06.058072] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:10.170 [2024-09-28 01:31:06.058077] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:10.170 [2024-09-28 01:31:06.058082] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:10.170 [2024-09-28 01:31:06.058088] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:10.170 [2024-09-28 01:31:06.058093] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:10.170 [2024-09-28 01:31:06.058099] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:10.170 [2024-09-28 01:31:06.058104] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:10.170 [2024-09-28 01:31:06.058109] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:10.170 [2024-09-28 01:31:06.058114] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:10.170 [2024-09-28 01:31:06.058119] 
upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:10.170 [2024-09-28 01:31:06.058124] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:10.170 [2024-09-28 01:31:06.058130] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:10.170 [2024-09-28 01:31:06.058135] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:10.170 [2024-09-28 01:31:06.058140] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:10.170 [2024-09-28 01:31:06.058146] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:10.170 [2024-09-28 01:31:06.058152] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:10.170 [2024-09-28 01:31:06.058158] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:10.170 [2024-09-28 01:31:06.058164] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:10.170 [2024-09-28 01:31:06.058170] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:10.170 [2024-09-28 01:31:06.058175] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:10.170 [2024-09-28 01:31:06.058182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.170 [2024-09-28 01:31:06.058187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:10.170 [2024-09-28 01:31:06.058206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.462 ms 00:18:10.170 [2024-09-28 01:31:06.058214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.170 [2024-09-28 01:31:06.092947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.171 [2024-09-28 01:31:06.092997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:10.171 [2024-09-28 01:31:06.093012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.688 ms 00:18:10.171 [2024-09-28 01:31:06.093022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.171 [2024-09-28 01:31:06.093137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.171 [2024-09-28 01:31:06.093149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:10.171 [2024-09-28 01:31:06.093160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:18:10.171 [2024-09-28 01:31:06.093169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.428 [2024-09-28 01:31:06.117465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.428 [2024-09-28 01:31:06.117494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:10.428 [2024-09-28 01:31:06.117505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 24.191 ms 00:18:10.428 [2024-09-28 01:31:06.117511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.428 [2024-09-28 01:31:06.117541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.428 [2024-09-28 01:31:06.117547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:10.428 [2024-09-28 01:31:06.117553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:18:10.428 [2024-09-28 01:31:06.117560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.428 [2024-09-28 01:31:06.117868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.428 [2024-09-28 01:31:06.117897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:10.428 [2024-09-28 01:31:06.117906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.268 ms 00:18:10.428 [2024-09-28 01:31:06.117915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.428 [2024-09-28 01:31:06.118014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.428 [2024-09-28 01:31:06.118020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:10.428 [2024-09-28 01:31:06.118027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:18:10.428 [2024-09-28 01:31:06.118032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.428 [2024-09-28 01:31:06.127952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.428 [2024-09-28 01:31:06.127980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:10.428 [2024-09-28 01:31:06.127987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.905 ms 00:18:10.428 [2024-09-28 01:31:06.127993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.428 [2024-09-28 01:31:06.137714] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:18:10.428 [2024-09-28 01:31:06.137741] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:10.428 [2024-09-28 01:31:06.137750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.428 [2024-09-28 01:31:06.137757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:10.428 [2024-09-28 01:31:06.137764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.683 ms 00:18:10.428 [2024-09-28 01:31:06.137770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.428 [2024-09-28 01:31:06.156111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.428 [2024-09-28 01:31:06.156142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:10.428 [2024-09-28 01:31:06.156151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.308 ms 00:18:10.428 [2024-09-28 01:31:06.156159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.428 [2024-09-28 01:31:06.165019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.428 [2024-09-28 01:31:06.165046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:10.428 [2024-09-28 01:31:06.165053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.837 ms 00:18:10.428 [2024-09-28 01:31:06.165059] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.428 [2024-09-28 01:31:06.173664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.428 [2024-09-28 01:31:06.173690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:10.428 [2024-09-28 01:31:06.173698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.572 ms 00:18:10.428 [2024-09-28 01:31:06.173704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.428 [2024-09-28 01:31:06.174165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.428 [2024-09-28 01:31:06.174182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:10.428 [2024-09-28 01:31:06.174189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.406 ms 00:18:10.428 [2024-09-28 01:31:06.174209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.428 [2024-09-28 01:31:06.217741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.428 [2024-09-28 01:31:06.217783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:10.428 [2024-09-28 01:31:06.217793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.518 ms 00:18:10.428 [2024-09-28 01:31:06.217800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.428 [2024-09-28 01:31:06.225540] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:18:10.428 [2024-09-28 01:31:06.227555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.428 [2024-09-28 01:31:06.227582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:10.428 [2024-09-28 01:31:06.227592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.714 ms 00:18:10.428 [2024-09-28 01:31:06.227602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.428 [2024-09-28 01:31:06.227669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.428 [2024-09-28 01:31:06.227678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:10.428 [2024-09-28 01:31:06.227686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:10.428 [2024-09-28 01:31:06.227692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.428 [2024-09-28 01:31:06.227736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.428 [2024-09-28 01:31:06.227744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:10.428 [2024-09-28 01:31:06.227750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:18:10.428 [2024-09-28 01:31:06.227756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.428 [2024-09-28 01:31:06.227772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.428 [2024-09-28 01:31:06.227779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:10.428 [2024-09-28 01:31:06.227785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:10.428 [2024-09-28 01:31:06.227791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.428 [2024-09-28 01:31:06.227815] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:10.428 [2024-09-28 01:31:06.227822] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.428 [2024-09-28 01:31:06.227828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:10.428 [2024-09-28 01:31:06.227836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:18:10.428 [2024-09-28 01:31:06.227842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.428 [2024-09-28 01:31:06.246037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.428 [2024-09-28 01:31:06.246069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:10.428 [2024-09-28 01:31:06.246080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.180 ms 00:18:10.428 [2024-09-28 01:31:06.246086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.428 [2024-09-28 01:31:06.246150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.428 [2024-09-28 01:31:06.246159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:10.428 [2024-09-28 01:31:06.246165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:18:10.428 [2024-09-28 01:31:06.246171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.428 [2024-09-28 01:31:06.246918] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 207.635 ms, result 0 00:18:32.316  Copying: 48/1024 [MB] (48 MBps) Copying: 94/1024 [MB] (45 MBps) Copying: 146/1024 [MB] (51 MBps) Copying: 200/1024 [MB] (53 MBps) Copying: 255/1024 [MB] (54 MBps) Copying: 309/1024 [MB] (54 MBps) Copying: 356/1024 [MB] (46 MBps) Copying: 405/1024 [MB] (48 MBps) Copying: 459/1024 [MB] (54 MBps) Copying: 513/1024 [MB] (53 MBps) Copying: 559/1024 [MB] (46 MBps) Copying: 605/1024 [MB] (46 MBps) Copying: 652/1024 [MB] (46 MBps) Copying: 699/1024 [MB] (46 MBps) Copying: 747/1024 [MB] (48 MBps) Copying: 795/1024 [MB] (47 MBps) Copying: 842/1024 [MB] (47 MBps) Copying: 889/1024 [MB] (47 MBps) Copying: 937/1024 [MB] (47 MBps) Copying: 984/1024 [MB] (47 MBps) Copying: 1023/1024 [MB] (38 MBps) Copying: 1024/1024 [MB] (average 46 MBps)[2024-09-28 01:31:28.130355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.316 [2024-09-28 01:31:28.130411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:32.316 [2024-09-28 01:31:28.130426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:32.316 [2024-09-28 01:31:28.130434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.316 [2024-09-28 01:31:28.133748] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:32.316 [2024-09-28 01:31:28.137121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.316 [2024-09-28 01:31:28.137153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:32.316 [2024-09-28 01:31:28.137165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.335 ms 00:18:32.316 [2024-09-28 01:31:28.137178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.316 [2024-09-28 01:31:28.149063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.316 [2024-09-28 01:31:28.149094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:32.316 [2024-09-28 01:31:28.149105] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.944 ms 00:18:32.316 [2024-09-28 01:31:28.149112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.316 [2024-09-28 01:31:28.167003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.316 [2024-09-28 01:31:28.167035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:32.316 [2024-09-28 01:31:28.167044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.876 ms 00:18:32.316 [2024-09-28 01:31:28.167052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.316 [2024-09-28 01:31:28.173230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.316 [2024-09-28 01:31:28.173257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:32.316 [2024-09-28 01:31:28.173268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.150 ms 00:18:32.316 [2024-09-28 01:31:28.173276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.316 [2024-09-28 01:31:28.196696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.316 [2024-09-28 01:31:28.196730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:32.316 [2024-09-28 01:31:28.196740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.379 ms 00:18:32.316 [2024-09-28 01:31:28.196747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.316 [2024-09-28 01:31:28.210743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.316 [2024-09-28 01:31:28.210776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:32.316 [2024-09-28 01:31:28.210787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.965 ms 00:18:32.316 [2024-09-28 01:31:28.210794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.575 [2024-09-28 01:31:28.260056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.575 [2024-09-28 01:31:28.260107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:32.575 [2024-09-28 01:31:28.260119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.224 ms 00:18:32.575 [2024-09-28 01:31:28.260126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.575 [2024-09-28 01:31:28.283228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.575 [2024-09-28 01:31:28.283263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:32.575 [2024-09-28 01:31:28.283274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.086 ms 00:18:32.575 [2024-09-28 01:31:28.283282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.575 [2024-09-28 01:31:28.305944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.575 [2024-09-28 01:31:28.305978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:32.575 [2024-09-28 01:31:28.305988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.630 ms 00:18:32.575 [2024-09-28 01:31:28.305995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.575 [2024-09-28 01:31:28.328138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.575 [2024-09-28 01:31:28.328169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist 
superblock
00:18:32.575 [2024-09-28 01:31:28.328179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.112 ms
00:18:32.575 [2024-09-28 01:31:28.328186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:32.575 [2024-09-28 01:31:28.350571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:32.575 [2024-09-28 01:31:28.350605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:18:32.575 [2024-09-28 01:31:28.350615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.327 ms
00:18:32.575 [2024-09-28 01:31:28.350621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:32.575 [2024-09-28 01:31:28.350651] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:18:32.575 [2024-09-28 01:31:28.350665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 123136 / 261120 wr_cnt: 1 state: open
00:18:32.575 [2024-09-28 01:31:28.350675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 2-100: 0 / 261120 wr_cnt: 0 state: free
00:18:32.576 [2024-09-28 01:31:28.351410] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:18:32.576 [2024-09-28 01:31:28.351418] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 80515e57-70d8-4c65-ba65-e59d4cf5e2ab
00:18:32.576 [2024-09-28 01:31:28.351429] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 123136
00:18:32.576 [2024-09-28 01:31:28.351436] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 124096
00:18:32.576 [2024-09-28 01:31:28.351443] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 123136
00:18:32.576 [2024-09-28 01:31:28.351451] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0078
00:18:32.576 [2024-09-28 01:31:28.351457] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:18:32.576 [2024-09-28 01:31:28.351465] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:18:32.576 [2024-09-28 01:31:28.351472] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:18:32.576 [2024-09-28 01:31:28.351479] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:18:32.576 [2024-09-28 01:31:28.351484] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:18:32.576 [2024-09-28 01:31:28.351492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:32.576 [2024-09-28 01:31:28.351505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:18:32.576 [2024-09-28 01:31:28.351512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.842 ms
00:18:32.576 [2024-09-28 01:31:28.351519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:32.577 [2024-09-28 01:31:28.363615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:32.577 [2024-09-28 01:31:28.363646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:18:32.577 [2024-09-28 01:31:28.363656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.080 ms
00:18:32.577 [2024-09-28 01:31:28.363664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:32.577 [2024-09-28 01:31:28.363995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:32.577 [2024-09-28 01:31:28.364009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:18:32.577 [2024-09-28 01:31:28.364022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.314 ms
00:18:32.577 [2024-09-28 01:31:28.364029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:32.577
[2024-09-28 01:31:28.391548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:32.577 [2024-09-28 01:31:28.391584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:32.577 [2024-09-28 01:31:28.391594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:32.577 [2024-09-28 01:31:28.391601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.577 [2024-09-28 01:31:28.391656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:32.577 [2024-09-28 01:31:28.391664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:32.577 [2024-09-28 01:31:28.391675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:32.577 [2024-09-28 01:31:28.391683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.577 [2024-09-28 01:31:28.391736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:32.577 [2024-09-28 01:31:28.391745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:32.577 [2024-09-28 01:31:28.391753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:32.577 [2024-09-28 01:31:28.391760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.577 [2024-09-28 01:31:28.391775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:32.577 [2024-09-28 01:31:28.391782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:32.577 [2024-09-28 01:31:28.391789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:32.577 [2024-09-28 01:31:28.391800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.577 [2024-09-28 01:31:28.468059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:32.577 [2024-09-28 01:31:28.468104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:32.577 [2024-09-28 01:31:28.468115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:32.577 [2024-09-28 01:31:28.468122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.835 [2024-09-28 01:31:28.530920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:32.835 [2024-09-28 01:31:28.530967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:32.835 [2024-09-28 01:31:28.530977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:32.835 [2024-09-28 01:31:28.530989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.835 [2024-09-28 01:31:28.531051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:32.835 [2024-09-28 01:31:28.531060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:32.835 [2024-09-28 01:31:28.531068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:32.835 [2024-09-28 01:31:28.531075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.835 [2024-09-28 01:31:28.531108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:32.835 [2024-09-28 01:31:28.531116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:32.835 [2024-09-28 01:31:28.531124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:32.835 [2024-09-28 01:31:28.531131] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.835 [2024-09-28 01:31:28.531230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:32.835 [2024-09-28 01:31:28.531241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:32.835 [2024-09-28 01:31:28.531249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:32.835 [2024-09-28 01:31:28.531256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.835 [2024-09-28 01:31:28.531286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:32.835 [2024-09-28 01:31:28.531294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:32.835 [2024-09-28 01:31:28.531301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:32.835 [2024-09-28 01:31:28.531308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.835 [2024-09-28 01:31:28.531341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:32.835 [2024-09-28 01:31:28.531350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:32.835 [2024-09-28 01:31:28.531357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:32.835 [2024-09-28 01:31:28.531364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.835 [2024-09-28 01:31:28.531401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:32.835 [2024-09-28 01:31:28.531410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:32.835 [2024-09-28 01:31:28.531417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:32.835 [2024-09-28 01:31:28.531424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.835 [2024-09-28 01:31:28.531527] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 401.199 ms, result 0 00:18:34.739 00:18:34.739 00:18:34.739 01:31:30 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:18:34.739 [2024-09-28 01:31:30.638351] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
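The trace above is the FTL shutdown path persisting its metadata (L2P, NV cache, valid map, P2L, band info, trim, superblock) before the restore test reads the device back through spdk_dd. The statistics dump and the spdk_dd invocation above are internally consistent; a minimal Python sanity check with values copied from the log (the 4 KiB block unit for --skip/--count is an inference from the 1024 MiB total reported by the copy progress later, not something the log states):

    # Sanity check of figures reported in the surrounding log (values copied from it).
    total_writes = 124096   # ftl_dev_dump_stats: "total writes"
    user_writes = 123136    # ftl_dev_dump_stats: "user writes"
    print(f"WAF: {total_writes / user_writes:.4f}")  # -> WAF: 1.0078, as dumped

    # spdk_dd --skip=131072 --count=262144; the 4 KiB logical block unit below is
    # an assumption (262144 * 4 KiB = 1024 MiB, the total the copy later reports).
    BLOCK = 4096
    print(262144 * BLOCK // 2**20, "MiB copied")   # -> 1024
    print(131072 * BLOCK // 2**20, "MiB skipped")  # -> 512

Band 1's validity (123136 / 261120) also equals the "total valid LBAs" counter, so every valid LBA sits in the single open band.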
00:18:34.739 [2024-09-28 01:31:30.638480] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75576 ] 00:18:34.998 [2024-09-28 01:31:30.788832] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:35.257 [2024-09-28 01:31:30.967444] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:35.566 [2024-09-28 01:31:31.215390] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:35.566 [2024-09-28 01:31:31.215456] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:35.566 [2024-09-28 01:31:31.368618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.566 [2024-09-28 01:31:31.368671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:35.566 [2024-09-28 01:31:31.368683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:35.566 [2024-09-28 01:31:31.368695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.566 [2024-09-28 01:31:31.368736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.566 [2024-09-28 01:31:31.368746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:35.566 [2024-09-28 01:31:31.368754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:18:35.566 [2024-09-28 01:31:31.368762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.566 [2024-09-28 01:31:31.368781] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:35.566 [2024-09-28 01:31:31.369436] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:35.566 [2024-09-28 01:31:31.369459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.566 [2024-09-28 01:31:31.369467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:35.566 [2024-09-28 01:31:31.369475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.683 ms 00:18:35.566 [2024-09-28 01:31:31.369482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.566 [2024-09-28 01:31:31.370469] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:35.566 [2024-09-28 01:31:31.382586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.566 [2024-09-28 01:31:31.382619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:35.566 [2024-09-28 01:31:31.382631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.118 ms 00:18:35.566 [2024-09-28 01:31:31.382639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.566 [2024-09-28 01:31:31.382690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.566 [2024-09-28 01:31:31.382699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:35.566 [2024-09-28 01:31:31.382707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:18:35.566 [2024-09-28 01:31:31.382714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.566 [2024-09-28 01:31:31.387177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:18:35.566 [2024-09-28 01:31:31.387215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:35.566 [2024-09-28 01:31:31.387224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.407 ms 00:18:35.566 [2024-09-28 01:31:31.387231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.566 [2024-09-28 01:31:31.387303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.566 [2024-09-28 01:31:31.387312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:35.566 [2024-09-28 01:31:31.387320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:18:35.566 [2024-09-28 01:31:31.387327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.566 [2024-09-28 01:31:31.387366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.566 [2024-09-28 01:31:31.387374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:35.566 [2024-09-28 01:31:31.387382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:35.566 [2024-09-28 01:31:31.387390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.566 [2024-09-28 01:31:31.387409] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:35.566 [2024-09-28 01:31:31.390712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.566 [2024-09-28 01:31:31.390739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:35.566 [2024-09-28 01:31:31.390748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.308 ms 00:18:35.566 [2024-09-28 01:31:31.390755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.566 [2024-09-28 01:31:31.390782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.566 [2024-09-28 01:31:31.390791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:35.566 [2024-09-28 01:31:31.390798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:35.566 [2024-09-28 01:31:31.390805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.566 [2024-09-28 01:31:31.390826] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:35.566 [2024-09-28 01:31:31.390843] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:35.566 [2024-09-28 01:31:31.390877] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:35.566 [2024-09-28 01:31:31.390892] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:18:35.566 [2024-09-28 01:31:31.390991] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:35.566 [2024-09-28 01:31:31.391008] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:35.566 [2024-09-28 01:31:31.391018] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:35.566 [2024-09-28 01:31:31.391031] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:35.566 [2024-09-28 01:31:31.391040] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:35.566 [2024-09-28 01:31:31.391047] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:35.566 [2024-09-28 01:31:31.391055] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:35.566 [2024-09-28 01:31:31.391062] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:35.566 [2024-09-28 01:31:31.391069] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:35.566 [2024-09-28 01:31:31.391076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.566 [2024-09-28 01:31:31.391083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:35.566 [2024-09-28 01:31:31.391091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.252 ms 00:18:35.566 [2024-09-28 01:31:31.391097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.566 [2024-09-28 01:31:31.391183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.566 [2024-09-28 01:31:31.391208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:35.566 [2024-09-28 01:31:31.391217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:18:35.566 [2024-09-28 01:31:31.391224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.566 [2024-09-28 01:31:31.391324] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:35.566 [2024-09-28 01:31:31.391334] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:35.566 [2024-09-28 01:31:31.391342] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:35.566 [2024-09-28 01:31:31.391350] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:35.566 [2024-09-28 01:31:31.391357] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:35.566 [2024-09-28 01:31:31.391363] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:35.566 [2024-09-28 01:31:31.391371] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:35.566 [2024-09-28 01:31:31.391378] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:35.566 [2024-09-28 01:31:31.391385] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:35.566 [2024-09-28 01:31:31.391391] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:35.566 [2024-09-28 01:31:31.391398] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:35.566 [2024-09-28 01:31:31.391405] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:35.566 [2024-09-28 01:31:31.391411] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:35.566 [2024-09-28 01:31:31.391422] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:35.567 [2024-09-28 01:31:31.391429] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:35.567 [2024-09-28 01:31:31.391435] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:35.567 [2024-09-28 01:31:31.391442] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:35.567 [2024-09-28 01:31:31.391448] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:35.567 [2024-09-28 01:31:31.391455] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:35.567 [2024-09-28 01:31:31.391461] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:35.567 [2024-09-28 01:31:31.391467] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:35.567 [2024-09-28 01:31:31.391473] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:35.567 [2024-09-28 01:31:31.391480] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:35.567 [2024-09-28 01:31:31.391486] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:35.567 [2024-09-28 01:31:31.391492] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:35.567 [2024-09-28 01:31:31.391499] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:35.567 [2024-09-28 01:31:31.391505] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:35.567 [2024-09-28 01:31:31.391511] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:35.567 [2024-09-28 01:31:31.391517] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:35.567 [2024-09-28 01:31:31.391524] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:35.567 [2024-09-28 01:31:31.391529] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:35.567 [2024-09-28 01:31:31.391536] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:35.567 [2024-09-28 01:31:31.391542] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:35.567 [2024-09-28 01:31:31.391548] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:35.567 [2024-09-28 01:31:31.391554] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:35.567 [2024-09-28 01:31:31.391560] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:35.567 [2024-09-28 01:31:31.391566] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:35.567 [2024-09-28 01:31:31.391572] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:35.567 [2024-09-28 01:31:31.391580] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:18:35.567 [2024-09-28 01:31:31.391587] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:35.567 [2024-09-28 01:31:31.391593] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:35.567 [2024-09-28 01:31:31.391599] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:35.567 [2024-09-28 01:31:31.391605] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:35.567 [2024-09-28 01:31:31.391612] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:35.567 [2024-09-28 01:31:31.391619] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:35.567 [2024-09-28 01:31:31.391627] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:35.567 [2024-09-28 01:31:31.391634] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:35.567 [2024-09-28 01:31:31.391641] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:35.567 [2024-09-28 01:31:31.391648] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:35.567 [2024-09-28 01:31:31.391654] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:35.567 
[2024-09-28 01:31:31.391661] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:35.567 [2024-09-28 01:31:31.391666] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:35.567 [2024-09-28 01:31:31.391673] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:35.567 [2024-09-28 01:31:31.391681] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:35.567 [2024-09-28 01:31:31.391689] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:35.567 [2024-09-28 01:31:31.391698] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:35.567 [2024-09-28 01:31:31.391704] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:35.567 [2024-09-28 01:31:31.391711] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:35.567 [2024-09-28 01:31:31.391718] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:35.567 [2024-09-28 01:31:31.391725] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:35.567 [2024-09-28 01:31:31.391732] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:35.567 [2024-09-28 01:31:31.391738] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:35.567 [2024-09-28 01:31:31.391745] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:35.567 [2024-09-28 01:31:31.391752] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:35.567 [2024-09-28 01:31:31.391759] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:35.567 [2024-09-28 01:31:31.391765] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:35.567 [2024-09-28 01:31:31.391772] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:35.567 [2024-09-28 01:31:31.391779] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:35.567 [2024-09-28 01:31:31.391786] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:35.567 [2024-09-28 01:31:31.391793] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:35.567 [2024-09-28 01:31:31.391801] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:35.567 [2024-09-28 01:31:31.391809] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:18:35.567 [2024-09-28 01:31:31.391817] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:35.567 [2024-09-28 01:31:31.391824] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:35.567 [2024-09-28 01:31:31.391831] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:35.567 [2024-09-28 01:31:31.391838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.567 [2024-09-28 01:31:31.391845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:35.567 [2024-09-28 01:31:31.391852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.582 ms 00:18:35.567 [2024-09-28 01:31:31.391858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.567 [2024-09-28 01:31:31.436325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.567 [2024-09-28 01:31:31.436366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:35.567 [2024-09-28 01:31:31.436378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.410 ms 00:18:35.567 [2024-09-28 01:31:31.436387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.567 [2024-09-28 01:31:31.436482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.567 [2024-09-28 01:31:31.436491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:35.567 [2024-09-28 01:31:31.436499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:18:35.567 [2024-09-28 01:31:31.436506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.859 [2024-09-28 01:31:31.466404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.859 [2024-09-28 01:31:31.466444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:35.859 [2024-09-28 01:31:31.466458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.838 ms 00:18:35.859 [2024-09-28 01:31:31.466465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.859 [2024-09-28 01:31:31.466503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.859 [2024-09-28 01:31:31.466511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:35.859 [2024-09-28 01:31:31.466519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:35.859 [2024-09-28 01:31:31.466526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.859 [2024-09-28 01:31:31.466877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.859 [2024-09-28 01:31:31.466900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:35.859 [2024-09-28 01:31:31.466909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.290 ms 00:18:35.859 [2024-09-28 01:31:31.466920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.859 [2024-09-28 01:31:31.467041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.859 [2024-09-28 01:31:31.467054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:35.859 [2024-09-28 01:31:31.467062] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:18:35.859 [2024-09-28 01:31:31.467069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.859 [2024-09-28 01:31:31.479242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.859 [2024-09-28 01:31:31.479271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:35.859 [2024-09-28 01:31:31.479281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.152 ms 00:18:35.859 [2024-09-28 01:31:31.479288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.859 [2024-09-28 01:31:31.491654] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:18:35.860 [2024-09-28 01:31:31.491687] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:35.860 [2024-09-28 01:31:31.491698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.860 [2024-09-28 01:31:31.491706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:35.860 [2024-09-28 01:31:31.491716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.316 ms 00:18:35.860 [2024-09-28 01:31:31.491723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.860 [2024-09-28 01:31:31.515778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.860 [2024-09-28 01:31:31.515819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:35.860 [2024-09-28 01:31:31.515830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.018 ms 00:18:35.860 [2024-09-28 01:31:31.515838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.860 [2024-09-28 01:31:31.527552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.860 [2024-09-28 01:31:31.527587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:35.860 [2024-09-28 01:31:31.527597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.666 ms 00:18:35.860 [2024-09-28 01:31:31.527605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.860 [2024-09-28 01:31:31.538498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.860 [2024-09-28 01:31:31.538528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:35.860 [2024-09-28 01:31:31.538538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.859 ms 00:18:35.860 [2024-09-28 01:31:31.538546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.860 [2024-09-28 01:31:31.539146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.860 [2024-09-28 01:31:31.539171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:35.860 [2024-09-28 01:31:31.539181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.521 ms 00:18:35.860 [2024-09-28 01:31:31.539188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.860 [2024-09-28 01:31:31.593065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.860 [2024-09-28 01:31:31.593114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:35.860 [2024-09-28 01:31:31.593126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 53.850 ms 00:18:35.860 [2024-09-28 01:31:31.593134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.860 [2024-09-28 01:31:31.603395] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:18:35.860 [2024-09-28 01:31:31.605719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.860 [2024-09-28 01:31:31.605748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:35.860 [2024-09-28 01:31:31.605760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.531 ms 00:18:35.860 [2024-09-28 01:31:31.605771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.860 [2024-09-28 01:31:31.605858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.860 [2024-09-28 01:31:31.605868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:35.860 [2024-09-28 01:31:31.605877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:35.860 [2024-09-28 01:31:31.605884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.860 [2024-09-28 01:31:31.607225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.860 [2024-09-28 01:31:31.607254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:35.860 [2024-09-28 01:31:31.607264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.303 ms 00:18:35.860 [2024-09-28 01:31:31.607273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.860 [2024-09-28 01:31:31.607300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.860 [2024-09-28 01:31:31.607310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:35.860 [2024-09-28 01:31:31.607319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:35.860 [2024-09-28 01:31:31.607326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.860 [2024-09-28 01:31:31.607358] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:35.860 [2024-09-28 01:31:31.607369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.860 [2024-09-28 01:31:31.607377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:35.860 [2024-09-28 01:31:31.607388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:18:35.860 [2024-09-28 01:31:31.607397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.860 [2024-09-28 01:31:31.629908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.860 [2024-09-28 01:31:31.629941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:35.860 [2024-09-28 01:31:31.629952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.494 ms 00:18:35.860 [2024-09-28 01:31:31.629960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.860 [2024-09-28 01:31:31.630029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.860 [2024-09-28 01:31:31.630039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:35.860 [2024-09-28 01:31:31.630047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:18:35.860 [2024-09-28 01:31:31.630054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
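Two figures in this startup trace can be cross-checked against the log's own numbers before the finish message and copy progress that follow. The layout dump above reports 20971520 L2P entries with an address size of 4 bytes, which is exactly the 80.00 MiB shown for the l2p region, and the copy phase moves 1024 MB between the "FTL startup" finish (01:31:31.631189) and the first shutdown step (01:31:53.086879), consistent with the reported average of 48 MBps. A small check, with timestamps copied from the adjacent entries:

    from datetime import datetime

    # L2P region: entry count x address size should equal the dumped region size.
    assert 20971520 * 4 == 80 * 2**20  # 80.00 MiB, matching "Region l2p" above

    # Average throughput of the 1024 MB copy, from two timestamps in this log.
    t0 = datetime.fromisoformat("2024-09-28 01:31:31.631189")  # FTL startup finished
    t1 = datetime.fromisoformat("2024-09-28 01:31:53.086879")  # shutdown begins
    print(round(1024 / (t1 - t0).total_seconds()), "MBps")     # -> 48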
00:18:35.860 [2024-09-28 01:31:31.631189] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 262.164 ms, result 0 00:18:57.211  Copying: 44/1024 [MB] (44 MBps) Copying: 93/1024 [MB] (48 MBps) Copying: 143/1024 [MB] (49 MBps) Copying: 194/1024 [MB] (51 MBps) Copying: 244/1024 [MB] (49 MBps) Copying: 292/1024 [MB] (48 MBps) Copying: 341/1024 [MB] (48 MBps) Copying: 388/1024 [MB] (47 MBps) Copying: 438/1024 [MB] (49 MBps) Copying: 485/1024 [MB] (47 MBps) Copying: 532/1024 [MB] (47 MBps) Copying: 581/1024 [MB] (48 MBps) Copying: 630/1024 [MB] (49 MBps) Copying: 680/1024 [MB] (49 MBps) Copying: 732/1024 [MB] (52 MBps) Copying: 779/1024 [MB] (46 MBps) Copying: 826/1024 [MB] (47 MBps) Copying: 876/1024 [MB] (50 MBps) Copying: 924/1024 [MB] (47 MBps) Copying: 974/1024 [MB] (50 MBps) Copying: 1024/1024 [MB] (average 48 MBps)[2024-09-28 01:31:53.086879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.211 [2024-09-28 01:31:53.086942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:57.211 [2024-09-28 01:31:53.086955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:57.211 [2024-09-28 01:31:53.086963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.211 [2024-09-28 01:31:53.086983] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:57.211 [2024-09-28 01:31:53.089625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.211 [2024-09-28 01:31:53.089656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:57.211 [2024-09-28 01:31:53.089666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.628 ms 00:18:57.211 [2024-09-28 01:31:53.089678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.211 [2024-09-28 01:31:53.091955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.211 [2024-09-28 01:31:53.091983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:57.211 [2024-09-28 01:31:53.091992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.254 ms 00:18:57.211 [2024-09-28 01:31:53.092000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.211 [2024-09-28 01:31:53.095957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.211 [2024-09-28 01:31:53.095989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:57.211 [2024-09-28 01:31:53.095999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.942 ms 00:18:57.211 [2024-09-28 01:31:53.096007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.211 [2024-09-28 01:31:53.102233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.211 [2024-09-28 01:31:53.102262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:57.211 [2024-09-28 01:31:53.102271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.193 ms 00:18:57.211 [2024-09-28 01:31:53.102278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.211 [2024-09-28 01:31:53.126556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.211 [2024-09-28 01:31:53.126589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:57.211 [2024-09-28 01:31:53.126600] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.221 ms
00:18:57.211 [2024-09-28 01:31:53.126607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:57.211 [2024-09-28 01:31:53.139746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:57.211 [2024-09-28 01:31:53.139776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:18:57.211 [2024-09-28 01:31:53.139787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.105 ms
00:18:57.211 [2024-09-28 01:31:53.139795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:57.470 [2024-09-28 01:31:53.190780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:57.470 [2024-09-28 01:31:53.190825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:18:57.470 [2024-09-28 01:31:53.190840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.948 ms
00:18:57.470 [2024-09-28 01:31:53.190848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:57.470 [2024-09-28 01:31:53.213277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:57.470 [2024-09-28 01:31:53.213309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata
00:18:57.470 [2024-09-28 01:31:53.213319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.415 ms
00:18:57.470 [2024-09-28 01:31:53.213327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:57.470 [2024-09-28 01:31:53.235713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:57.470 [2024-09-28 01:31:53.235742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata
00:18:57.470 [2024-09-28 01:31:53.235752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.356 ms
00:18:57.470 [2024-09-28 01:31:53.235758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:57.470 [2024-09-28 01:31:53.257904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:57.470 [2024-09-28 01:31:53.257932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:18:57.470 [2024-09-28 01:31:53.257942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.116 ms
00:18:57.470 [2024-09-28 01:31:53.257949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:57.470 [2024-09-28 01:31:53.280697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:57.470 [2024-09-28 01:31:53.280730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:18:57.470 [2024-09-28 01:31:53.280739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.696 ms
00:18:57.470 [2024-09-28 01:31:53.280747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:57.470 [2024-09-28 01:31:53.280779] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:18:57.470 [2024-09-28 01:31:53.280793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open
00:18:57.470 [2024-09-28 01:31:53.280803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 2-78: 0 / 261120 wr_cnt: 0 state: free
00:18:57.471 [2024-09-28
01:31:53.281399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:57.471 [2024-09-28 01:31:53.281406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:57.471 [2024-09-28 01:31:53.281414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:57.471 [2024-09-28 01:31:53.281421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:57.471 [2024-09-28 01:31:53.281429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:57.471 [2024-09-28 01:31:53.281436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:57.471 [2024-09-28 01:31:53.281443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:57.471 [2024-09-28 01:31:53.281450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:57.471 [2024-09-28 01:31:53.281457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:57.471 [2024-09-28 01:31:53.281464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:57.471 [2024-09-28 01:31:53.281471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:57.471 [2024-09-28 01:31:53.281478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:57.471 [2024-09-28 01:31:53.281490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:57.471 [2024-09-28 01:31:53.281497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:57.471 [2024-09-28 01:31:53.281505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:57.472 [2024-09-28 01:31:53.281513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:57.472 [2024-09-28 01:31:53.281520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:57.472 [2024-09-28 01:31:53.281529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:57.472 [2024-09-28 01:31:53.281536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:57.472 [2024-09-28 01:31:53.281543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:57.472 [2024-09-28 01:31:53.281552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:57.472 [2024-09-28 01:31:53.281560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:57.472 [2024-09-28 01:31:53.281575] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:57.472 [2024-09-28 01:31:53.281583] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 80515e57-70d8-4c65-ba65-e59d4cf5e2ab 00:18:57.472 [2024-09-28 01:31:53.281594] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:18:57.472 [2024-09-28 01:31:53.281601] ftl_debug.c: 
214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 8896 00:18:57.472 [2024-09-28 01:31:53.281608] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 7936 00:18:57.472 [2024-09-28 01:31:53.281616] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.1210 00:18:57.472 [2024-09-28 01:31:53.281623] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:57.472 [2024-09-28 01:31:53.281630] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:57.472 [2024-09-28 01:31:53.281637] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:57.472 [2024-09-28 01:31:53.281643] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:57.472 [2024-09-28 01:31:53.281649] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:57.472 [2024-09-28 01:31:53.281656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.472 [2024-09-28 01:31:53.281663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:57.472 [2024-09-28 01:31:53.281676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.877 ms 00:18:57.472 [2024-09-28 01:31:53.281683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.472 [2024-09-28 01:31:53.294244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.472 [2024-09-28 01:31:53.294274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:57.472 [2024-09-28 01:31:53.294285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.544 ms 00:18:57.472 [2024-09-28 01:31:53.294294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.472 [2024-09-28 01:31:53.294633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.472 [2024-09-28 01:31:53.294649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:57.472 [2024-09-28 01:31:53.294661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.311 ms 00:18:57.472 [2024-09-28 01:31:53.294669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.472 [2024-09-28 01:31:53.322583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:57.472 [2024-09-28 01:31:53.322615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:57.472 [2024-09-28 01:31:53.322625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:57.472 [2024-09-28 01:31:53.322632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.472 [2024-09-28 01:31:53.322686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:57.472 [2024-09-28 01:31:53.322694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:57.472 [2024-09-28 01:31:53.322701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:57.472 [2024-09-28 01:31:53.322713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.472 [2024-09-28 01:31:53.322766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:57.472 [2024-09-28 01:31:53.322776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:57.472 [2024-09-28 01:31:53.322784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:57.472 [2024-09-28 01:31:53.322791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 
0 00:18:57.472 [2024-09-28 01:31:53.322804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:57.472 [2024-09-28 01:31:53.322812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:57.472 [2024-09-28 01:31:53.322819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:57.472 [2024-09-28 01:31:53.322826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.732 [2024-09-28 01:31:53.422866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:57.732 [2024-09-28 01:31:53.422916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:57.732 [2024-09-28 01:31:53.422932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:57.732 [2024-09-28 01:31:53.422943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.732 [2024-09-28 01:31:53.525218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:57.732 [2024-09-28 01:31:53.525275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:57.732 [2024-09-28 01:31:53.525294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:57.732 [2024-09-28 01:31:53.525311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.732 [2024-09-28 01:31:53.525395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:57.732 [2024-09-28 01:31:53.525409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:57.732 [2024-09-28 01:31:53.525422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:57.732 [2024-09-28 01:31:53.525433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.732 [2024-09-28 01:31:53.525486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:57.732 [2024-09-28 01:31:53.525500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:57.732 [2024-09-28 01:31:53.525511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:57.732 [2024-09-28 01:31:53.525523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.732 [2024-09-28 01:31:53.525650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:57.732 [2024-09-28 01:31:53.525676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:57.732 [2024-09-28 01:31:53.525688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:57.732 [2024-09-28 01:31:53.525700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.732 [2024-09-28 01:31:53.525745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:57.732 [2024-09-28 01:31:53.525758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:57.732 [2024-09-28 01:31:53.525770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:57.732 [2024-09-28 01:31:53.525781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.732 [2024-09-28 01:31:53.525832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:57.732 [2024-09-28 01:31:53.525845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:57.732 [2024-09-28 01:31:53.525857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:57.732 [2024-09-28 
01:31:53.525868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.732 [2024-09-28 01:31:53.525918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:57.732 [2024-09-28 01:31:53.525933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:57.732 [2024-09-28 01:31:53.525945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:57.732 [2024-09-28 01:31:53.525956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.732 [2024-09-28 01:31:53.526103] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 439.181 ms, result 0 00:18:58.669 00:18:58.669 00:18:58.669 01:31:54 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:19:00.571 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:19:00.571 01:31:56 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:19:00.571 01:31:56 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:19:00.571 01:31:56 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:19:00.829 01:31:56 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:19:00.829 01:31:56 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:00.829 01:31:56 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 74585 00:19:00.829 01:31:56 ftl.ftl_restore -- common/autotest_common.sh@950 -- # '[' -z 74585 ']' 00:19:00.829 Process with pid 74585 is not found 00:19:00.829 Remove shared memory files 00:19:00.829 01:31:56 ftl.ftl_restore -- common/autotest_common.sh@954 -- # kill -0 74585 00:19:00.829 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (74585) - No such process 00:19:00.829 01:31:56 ftl.ftl_restore -- common/autotest_common.sh@977 -- # echo 'Process with pid 74585 is not found' 00:19:00.829 01:31:56 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:19:00.829 01:31:56 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:19:00.829 01:31:56 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:19:00.829 01:31:56 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:19:00.829 01:31:56 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:19:00.830 01:31:56 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:19:00.830 01:31:56 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:19:00.830 00:19:00.830 real 2m1.246s 00:19:00.830 user 1m51.565s 00:19:00.830 sys 0m11.347s 00:19:00.830 01:31:56 ftl.ftl_restore -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:00.830 01:31:56 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:19:00.830 ************************************ 00:19:00.830 END TEST ftl_restore 00:19:00.830 ************************************ 00:19:00.830 01:31:56 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:19:00.830 01:31:56 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:19:00.830 01:31:56 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:00.830 01:31:56 ftl -- common/autotest_common.sh@10 -- # set +x 00:19:00.830 ************************************ 00:19:00.830 START TEST ftl_dirty_shutdown 00:19:00.830 ************************************ 
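Before any FTL work starts, the dirty-shutdown script sources the common test helpers, which probe the installed lcov by splitting its version string on dots and comparing it field by field against 2; the full xtrace of that comparison follows. A simplified bash sketch of the idiom (the helper name and the numeric-only handling are illustrative; the real cmp_versions in scripts/common.sh also normalizes each field through its decimal helper, as the trace shows):

    version_lt() {                          # succeeds when $1 < $2
        local IFS=.-:                       # split fields on . - : as the trace does
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1                            # equal versions are not less-than
    }
    version_lt 1.15 2 && echo "pre-2.0 lcov"   # mirrors the 'lt 1.15 2' call traced below

Here 1.15 splits into (1 15) and 2 into (2); the first field already decides 1 < 2, which is why the trace below returns 0 and selects the pre-2.0 lcov_rc_opt flags.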
00:19:00.830 01:31:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:19:00.830 * Looking for test storage... 00:19:00.830 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:00.830 01:31:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:19:00.830 01:31:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1681 -- # lcov --version 00:19:00.830 01:31:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:19:01.088 01:31:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:19:01.089 01:31:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:01.089 01:31:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:01.089 01:31:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:01.089 01:31:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:19:01.089 01:31:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:19:01.089 01:31:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:19:01.089 01:31:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:19:01.089 01:31:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:19:01.089 01:31:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:19:01.089 01:31:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:19:01.089 01:31:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:01.089 01:31:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:19:01.089 01:31:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:19:01.089 01:31:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:01.089 01:31:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:01.089 01:31:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:19:01.089 01:31:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:19:01.089 01:31:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:01.089 01:31:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:19:01.089 01:31:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:19:01.089 01:31:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:19:01.089 01:31:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:19:01.089 01:31:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:01.089 01:31:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:19:01.089 01:31:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:19:01.089 01:31:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:01.089 01:31:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:01.089 01:31:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:19:01.089 01:31:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:01.089 01:31:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:19:01.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:01.089 --rc genhtml_branch_coverage=1 00:19:01.089 --rc genhtml_function_coverage=1 00:19:01.089 --rc genhtml_legend=1 00:19:01.089 --rc geninfo_all_blocks=1 00:19:01.089 --rc geninfo_unexecuted_blocks=1 00:19:01.089 00:19:01.089 ' 00:19:01.089 01:31:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:19:01.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:01.089 --rc genhtml_branch_coverage=1 00:19:01.089 --rc genhtml_function_coverage=1 00:19:01.089 --rc genhtml_legend=1 00:19:01.089 --rc geninfo_all_blocks=1 00:19:01.089 --rc geninfo_unexecuted_blocks=1 00:19:01.089 00:19:01.089 ' 00:19:01.089 01:31:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:19:01.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:01.089 --rc genhtml_branch_coverage=1 00:19:01.089 --rc genhtml_function_coverage=1 00:19:01.089 --rc genhtml_legend=1 00:19:01.089 --rc geninfo_all_blocks=1 00:19:01.089 --rc geninfo_unexecuted_blocks=1 00:19:01.089 00:19:01.089 ' 00:19:01.089 01:31:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:19:01.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:01.089 --rc genhtml_branch_coverage=1 00:19:01.089 --rc genhtml_function_coverage=1 00:19:01.089 --rc genhtml_legend=1 00:19:01.089 --rc geninfo_all_blocks=1 00:19:01.089 --rc geninfo_unexecuted_blocks=1 00:19:01.089 00:19:01.089 ' 00:19:01.089 01:31:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:01.089 01:31:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:19:01.089 01:31:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:01.089 01:31:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:01.089 01:31:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:19:01.089 01:31:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:01.089 01:31:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:01.089 01:31:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:01.089 01:31:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:01.089 01:31:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:01.089 01:31:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:01.089 01:31:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:01.089 01:31:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:01.089 01:31:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:01.089 01:31:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:01.089 01:31:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:01.089 01:31:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:01.089 01:31:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:01.089 01:31:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:01.089 01:31:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:01.089 01:31:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:01.089 01:31:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:01.089 01:31:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:01.089 01:31:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:01.089 01:31:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:01.089 01:31:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:19:01.089 01:31:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:19:01.089 01:31:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:01.089 01:31:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:01.089 01:31:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:01.089 01:31:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:01.089 01:31:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:19:01.089 01:31:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:19:01.089 01:31:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:19:01.089 01:31:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:19:01.089 01:31:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:19:01.089 01:31:56 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:19:01.089 01:31:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:19:01.089 01:31:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:19:01.089 01:31:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:19:01.089 01:31:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:19:01.089 01:31:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:19:01.089 01:31:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=75919 00:19:01.089 01:31:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 75919 00:19:01.089 01:31:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:19:01.089 01:31:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@831 -- # '[' -z 75919 ']' 00:19:01.089 01:31:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:01.089 01:31:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:01.089 01:31:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:01.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:01.089 01:31:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:01.089 01:31:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:19:01.089 [2024-09-28 01:31:56.875030] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
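The target's DPDK/EAL startup messages follow; once the reactor is running on core 0, the script attaches the QEMU NVMe controller and sizes each bdev it touches through get_bdev_size (test/common/autotest_common.sh in the trace below), which pulls block_size and num_blocks out of the bdev_get_bdevs JSON with jq and converts the product to MiB. A minimal sketch of that computation, assuming the same rpc.py path as this run (the helper name here is illustrative):

    get_bdev_size_mb() {
        local info bs nb
        info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b "$1")
        bs=$(jq -r '.[] .block_size' <<< "$info")   # 4096 for every bdev in this run
        nb=$(jq -r '.[] .num_blocks' <<< "$info")   # 1310720 for nvme0n1
        echo $(( bs * nb / 1024 / 1024 ))           # 4096 * 1310720 / 2^20 = 5120 MiB
    }

The same arithmetic explains the later numbers: the 26476544-block lvol works out to 4096 * 26476544 / 2^20 = 103424 MiB, the bdev_size echoed at each of the three get_bdev_size calls further down.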
00:19:01.089 [2024-09-28 01:31:56.875491] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75919 ] 00:19:01.347 [2024-09-28 01:31:57.025767] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:01.347 [2024-09-28 01:31:57.200603] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:02.283 01:31:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:02.283 01:31:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # return 0 00:19:02.283 01:31:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:19:02.283 01:31:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:19:02.283 01:31:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:19:02.283 01:31:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:19:02.283 01:31:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:19:02.283 01:31:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:19:02.283 01:31:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:19:02.283 01:31:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:19:02.283 01:31:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:19:02.283 01:31:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:19:02.283 01:31:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:19:02.283 01:31:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:19:02.283 01:31:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:19:02.283 01:31:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:19:02.541 01:31:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:19:02.541 { 00:19:02.541 "name": "nvme0n1", 00:19:02.541 "aliases": [ 00:19:02.541 "4dd70aef-2e15-44f4-b29f-beba2c647286" 00:19:02.541 ], 00:19:02.541 "product_name": "NVMe disk", 00:19:02.541 "block_size": 4096, 00:19:02.541 "num_blocks": 1310720, 00:19:02.541 "uuid": "4dd70aef-2e15-44f4-b29f-beba2c647286", 00:19:02.541 "numa_id": -1, 00:19:02.541 "assigned_rate_limits": { 00:19:02.541 "rw_ios_per_sec": 0, 00:19:02.541 "rw_mbytes_per_sec": 0, 00:19:02.541 "r_mbytes_per_sec": 0, 00:19:02.541 "w_mbytes_per_sec": 0 00:19:02.541 }, 00:19:02.541 "claimed": true, 00:19:02.541 "claim_type": "read_many_write_one", 00:19:02.541 "zoned": false, 00:19:02.541 "supported_io_types": { 00:19:02.541 "read": true, 00:19:02.541 "write": true, 00:19:02.541 "unmap": true, 00:19:02.541 "flush": true, 00:19:02.541 "reset": true, 00:19:02.541 "nvme_admin": true, 00:19:02.541 "nvme_io": true, 00:19:02.541 "nvme_io_md": false, 00:19:02.541 "write_zeroes": true, 00:19:02.541 "zcopy": false, 00:19:02.541 "get_zone_info": false, 00:19:02.541 "zone_management": false, 00:19:02.541 "zone_append": false, 00:19:02.541 "compare": true, 00:19:02.541 "compare_and_write": false, 00:19:02.541 "abort": true, 00:19:02.541 "seek_hole": false, 00:19:02.541 "seek_data": false, 00:19:02.541 
"copy": true, 00:19:02.541 "nvme_iov_md": false 00:19:02.541 }, 00:19:02.542 "driver_specific": { 00:19:02.542 "nvme": [ 00:19:02.542 { 00:19:02.542 "pci_address": "0000:00:11.0", 00:19:02.542 "trid": { 00:19:02.542 "trtype": "PCIe", 00:19:02.542 "traddr": "0000:00:11.0" 00:19:02.542 }, 00:19:02.542 "ctrlr_data": { 00:19:02.542 "cntlid": 0, 00:19:02.542 "vendor_id": "0x1b36", 00:19:02.542 "model_number": "QEMU NVMe Ctrl", 00:19:02.542 "serial_number": "12341", 00:19:02.542 "firmware_revision": "8.0.0", 00:19:02.542 "subnqn": "nqn.2019-08.org.qemu:12341", 00:19:02.542 "oacs": { 00:19:02.542 "security": 0, 00:19:02.542 "format": 1, 00:19:02.542 "firmware": 0, 00:19:02.542 "ns_manage": 1 00:19:02.542 }, 00:19:02.542 "multi_ctrlr": false, 00:19:02.542 "ana_reporting": false 00:19:02.542 }, 00:19:02.542 "vs": { 00:19:02.542 "nvme_version": "1.4" 00:19:02.542 }, 00:19:02.542 "ns_data": { 00:19:02.542 "id": 1, 00:19:02.542 "can_share": false 00:19:02.542 } 00:19:02.542 } 00:19:02.542 ], 00:19:02.542 "mp_policy": "active_passive" 00:19:02.542 } 00:19:02.542 } 00:19:02.542 ]' 00:19:02.542 01:31:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:19:02.542 01:31:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:19:02.542 01:31:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:19:02.542 01:31:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=1310720 00:19:02.542 01:31:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:19:02.542 01:31:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 5120 00:19:02.542 01:31:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:19:02.542 01:31:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:19:02.542 01:31:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:19:02.542 01:31:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:02.542 01:31:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:19:02.801 01:31:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=37470b13-3894-4f0c-a98c-4acd423e003c 00:19:02.801 01:31:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:19:02.801 01:31:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 37470b13-3894-4f0c-a98c-4acd423e003c 00:19:03.059 01:31:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:19:03.318 01:31:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=21bd9d3b-4dd8-4cdf-ad00-4db8ca068a76 00:19:03.318 01:31:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 21bd9d3b-4dd8-4cdf-ad00-4db8ca068a76 00:19:03.318 01:31:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=8aad9101-4286-4688-93d9-294d330b94bb 00:19:03.318 01:31:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:19:03.318 01:31:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 8aad9101-4286-4688-93d9-294d330b94bb 00:19:03.318 01:31:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:19:03.318 01:31:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:19:03.318 01:31:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=8aad9101-4286-4688-93d9-294d330b94bb 00:19:03.318 01:31:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:19:03.318 01:31:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 8aad9101-4286-4688-93d9-294d330b94bb 00:19:03.318 01:31:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=8aad9101-4286-4688-93d9-294d330b94bb 00:19:03.318 01:31:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:19:03.318 01:31:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:19:03.318 01:31:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:19:03.318 01:31:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8aad9101-4286-4688-93d9-294d330b94bb 00:19:03.577 01:31:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:19:03.577 { 00:19:03.577 "name": "8aad9101-4286-4688-93d9-294d330b94bb", 00:19:03.577 "aliases": [ 00:19:03.577 "lvs/nvme0n1p0" 00:19:03.577 ], 00:19:03.577 "product_name": "Logical Volume", 00:19:03.577 "block_size": 4096, 00:19:03.577 "num_blocks": 26476544, 00:19:03.577 "uuid": "8aad9101-4286-4688-93d9-294d330b94bb", 00:19:03.577 "assigned_rate_limits": { 00:19:03.577 "rw_ios_per_sec": 0, 00:19:03.577 "rw_mbytes_per_sec": 0, 00:19:03.577 "r_mbytes_per_sec": 0, 00:19:03.577 "w_mbytes_per_sec": 0 00:19:03.577 }, 00:19:03.577 "claimed": false, 00:19:03.577 "zoned": false, 00:19:03.577 "supported_io_types": { 00:19:03.577 "read": true, 00:19:03.577 "write": true, 00:19:03.577 "unmap": true, 00:19:03.577 "flush": false, 00:19:03.577 "reset": true, 00:19:03.577 "nvme_admin": false, 00:19:03.577 "nvme_io": false, 00:19:03.577 "nvme_io_md": false, 00:19:03.577 "write_zeroes": true, 00:19:03.577 "zcopy": false, 00:19:03.577 "get_zone_info": false, 00:19:03.577 "zone_management": false, 00:19:03.577 "zone_append": false, 00:19:03.577 "compare": false, 00:19:03.577 "compare_and_write": false, 00:19:03.577 "abort": false, 00:19:03.577 "seek_hole": true, 00:19:03.577 "seek_data": true, 00:19:03.577 "copy": false, 00:19:03.577 "nvme_iov_md": false 00:19:03.577 }, 00:19:03.577 "driver_specific": { 00:19:03.577 "lvol": { 00:19:03.577 "lvol_store_uuid": "21bd9d3b-4dd8-4cdf-ad00-4db8ca068a76", 00:19:03.577 "base_bdev": "nvme0n1", 00:19:03.577 "thin_provision": true, 00:19:03.577 "num_allocated_clusters": 0, 00:19:03.577 "snapshot": false, 00:19:03.577 "clone": false, 00:19:03.577 "esnap_clone": false 00:19:03.577 } 00:19:03.577 } 00:19:03.577 } 00:19:03.577 ]' 00:19:03.577 01:31:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:19:03.577 01:31:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:19:03.577 01:31:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:19:03.577 01:31:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:19:03.577 01:31:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:19:03.577 01:31:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:19:03.835 01:31:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:19:03.835 01:31:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:19:03.835 01:31:59 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:19:03.835 01:31:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:19:03.835 01:31:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:19:03.835 01:31:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size 8aad9101-4286-4688-93d9-294d330b94bb 00:19:03.835 01:31:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=8aad9101-4286-4688-93d9-294d330b94bb 00:19:03.835 01:31:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:19:03.835 01:31:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:19:03.835 01:31:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:19:03.835 01:31:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8aad9101-4286-4688-93d9-294d330b94bb 00:19:04.093 01:31:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:19:04.093 { 00:19:04.093 "name": "8aad9101-4286-4688-93d9-294d330b94bb", 00:19:04.093 "aliases": [ 00:19:04.093 "lvs/nvme0n1p0" 00:19:04.093 ], 00:19:04.093 "product_name": "Logical Volume", 00:19:04.093 "block_size": 4096, 00:19:04.093 "num_blocks": 26476544, 00:19:04.093 "uuid": "8aad9101-4286-4688-93d9-294d330b94bb", 00:19:04.093 "assigned_rate_limits": { 00:19:04.093 "rw_ios_per_sec": 0, 00:19:04.093 "rw_mbytes_per_sec": 0, 00:19:04.093 "r_mbytes_per_sec": 0, 00:19:04.093 "w_mbytes_per_sec": 0 00:19:04.093 }, 00:19:04.093 "claimed": false, 00:19:04.093 "zoned": false, 00:19:04.093 "supported_io_types": { 00:19:04.093 "read": true, 00:19:04.093 "write": true, 00:19:04.093 "unmap": true, 00:19:04.093 "flush": false, 00:19:04.093 "reset": true, 00:19:04.093 "nvme_admin": false, 00:19:04.093 "nvme_io": false, 00:19:04.093 "nvme_io_md": false, 00:19:04.093 "write_zeroes": true, 00:19:04.093 "zcopy": false, 00:19:04.093 "get_zone_info": false, 00:19:04.093 "zone_management": false, 00:19:04.093 "zone_append": false, 00:19:04.093 "compare": false, 00:19:04.093 "compare_and_write": false, 00:19:04.093 "abort": false, 00:19:04.093 "seek_hole": true, 00:19:04.093 "seek_data": true, 00:19:04.093 "copy": false, 00:19:04.093 "nvme_iov_md": false 00:19:04.093 }, 00:19:04.093 "driver_specific": { 00:19:04.093 "lvol": { 00:19:04.093 "lvol_store_uuid": "21bd9d3b-4dd8-4cdf-ad00-4db8ca068a76", 00:19:04.093 "base_bdev": "nvme0n1", 00:19:04.093 "thin_provision": true, 00:19:04.093 "num_allocated_clusters": 0, 00:19:04.093 "snapshot": false, 00:19:04.093 "clone": false, 00:19:04.093 "esnap_clone": false 00:19:04.093 } 00:19:04.093 } 00:19:04.093 } 00:19:04.093 ]' 00:19:04.093 01:31:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:19:04.093 01:32:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:19:04.093 01:32:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:19:04.352 01:32:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:19:04.352 01:32:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:19:04.352 01:32:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:19:04.352 01:32:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:19:04.352 01:32:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:19:04.352 01:32:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:19:04.352 01:32:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 8aad9101-4286-4688-93d9-294d330b94bb 00:19:04.352 01:32:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=8aad9101-4286-4688-93d9-294d330b94bb 00:19:04.352 01:32:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:19:04.352 01:32:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:19:04.352 01:32:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:19:04.352 01:32:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8aad9101-4286-4688-93d9-294d330b94bb 00:19:04.610 01:32:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:19:04.610 { 00:19:04.610 "name": "8aad9101-4286-4688-93d9-294d330b94bb", 00:19:04.610 "aliases": [ 00:19:04.610 "lvs/nvme0n1p0" 00:19:04.610 ], 00:19:04.610 "product_name": "Logical Volume", 00:19:04.610 "block_size": 4096, 00:19:04.610 "num_blocks": 26476544, 00:19:04.610 "uuid": "8aad9101-4286-4688-93d9-294d330b94bb", 00:19:04.610 "assigned_rate_limits": { 00:19:04.610 "rw_ios_per_sec": 0, 00:19:04.610 "rw_mbytes_per_sec": 0, 00:19:04.610 "r_mbytes_per_sec": 0, 00:19:04.610 "w_mbytes_per_sec": 0 00:19:04.610 }, 00:19:04.610 "claimed": false, 00:19:04.610 "zoned": false, 00:19:04.610 "supported_io_types": { 00:19:04.610 "read": true, 00:19:04.610 "write": true, 00:19:04.610 "unmap": true, 00:19:04.610 "flush": false, 00:19:04.610 "reset": true, 00:19:04.610 "nvme_admin": false, 00:19:04.610 "nvme_io": false, 00:19:04.610 "nvme_io_md": false, 00:19:04.611 "write_zeroes": true, 00:19:04.611 "zcopy": false, 00:19:04.611 "get_zone_info": false, 00:19:04.611 "zone_management": false, 00:19:04.611 "zone_append": false, 00:19:04.611 "compare": false, 00:19:04.611 "compare_and_write": false, 00:19:04.611 "abort": false, 00:19:04.611 "seek_hole": true, 00:19:04.611 "seek_data": true, 00:19:04.611 "copy": false, 00:19:04.611 "nvme_iov_md": false 00:19:04.611 }, 00:19:04.611 "driver_specific": { 00:19:04.611 "lvol": { 00:19:04.611 "lvol_store_uuid": "21bd9d3b-4dd8-4cdf-ad00-4db8ca068a76", 00:19:04.611 "base_bdev": "nvme0n1", 00:19:04.611 "thin_provision": true, 00:19:04.611 "num_allocated_clusters": 0, 00:19:04.611 "snapshot": false, 00:19:04.611 "clone": false, 00:19:04.611 "esnap_clone": false 00:19:04.611 } 00:19:04.611 } 00:19:04.611 } 00:19:04.611 ]' 00:19:04.611 01:32:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:19:04.611 01:32:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:19:04.611 01:32:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:19:04.611 01:32:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:19:04.611 01:32:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:19:04.611 01:32:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:19:04.611 01:32:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:19:04.611 01:32:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 8aad9101-4286-4688-93d9-294d330b94bb 
--l2p_dram_limit 10' 00:19:04.611 01:32:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:19:04.611 01:32:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:19:04.611 01:32:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:19:04.611 01:32:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 8aad9101-4286-4688-93d9-294d330b94bb --l2p_dram_limit 10 -c nvc0n1p0 00:19:04.870 [2024-09-28 01:32:00.673241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.870 [2024-09-28 01:32:00.673287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:04.870 [2024-09-28 01:32:00.673300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:04.870 [2024-09-28 01:32:00.673307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.870 [2024-09-28 01:32:00.673350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.870 [2024-09-28 01:32:00.673358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:04.870 [2024-09-28 01:32:00.673367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:19:04.870 [2024-09-28 01:32:00.673385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.870 [2024-09-28 01:32:00.673406] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:04.870 [2024-09-28 01:32:00.673992] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:04.870 [2024-09-28 01:32:00.674014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.870 [2024-09-28 01:32:00.674020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:04.870 [2024-09-28 01:32:00.674029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.614 ms 00:19:04.870 [2024-09-28 01:32:00.674036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.870 [2024-09-28 01:32:00.674089] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 0e3f423b-615b-429f-bbea-6f306275c43f 00:19:04.870 [2024-09-28 01:32:00.675038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.870 [2024-09-28 01:32:00.675066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:19:04.870 [2024-09-28 01:32:00.675073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:19:04.870 [2024-09-28 01:32:00.675080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.870 [2024-09-28 01:32:00.679794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.870 [2024-09-28 01:32:00.679824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:04.870 [2024-09-28 01:32:00.679832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.677 ms 00:19:04.870 [2024-09-28 01:32:00.679839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.870 [2024-09-28 01:32:00.679906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.870 [2024-09-28 01:32:00.679915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:04.870 [2024-09-28 01:32:00.679922] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:19:04.870 [2024-09-28 01:32:00.679934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.870 [2024-09-28 01:32:00.679969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.870 [2024-09-28 01:32:00.679978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:04.870 [2024-09-28 01:32:00.679984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:04.870 [2024-09-28 01:32:00.679991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.870 [2024-09-28 01:32:00.680009] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:04.870 [2024-09-28 01:32:00.682839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.870 [2024-09-28 01:32:00.682865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:04.870 [2024-09-28 01:32:00.682875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.833 ms 00:19:04.870 [2024-09-28 01:32:00.682880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.870 [2024-09-28 01:32:00.682907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.870 [2024-09-28 01:32:00.682914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:04.870 [2024-09-28 01:32:00.682921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:04.870 [2024-09-28 01:32:00.682929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.870 [2024-09-28 01:32:00.682942] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:19:04.870 [2024-09-28 01:32:00.683046] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:04.870 [2024-09-28 01:32:00.683058] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:04.870 [2024-09-28 01:32:00.683066] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:04.870 [2024-09-28 01:32:00.683077] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:04.870 [2024-09-28 01:32:00.683084] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:04.870 [2024-09-28 01:32:00.683091] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:04.870 [2024-09-28 01:32:00.683097] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:04.870 [2024-09-28 01:32:00.683104] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:04.870 [2024-09-28 01:32:00.683109] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:04.870 [2024-09-28 01:32:00.683117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.870 [2024-09-28 01:32:00.683128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:04.870 [2024-09-28 01:32:00.683135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.175 ms 00:19:04.870 [2024-09-28 01:32:00.683141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.870 [2024-09-28 01:32:00.683215] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.870 [2024-09-28 01:32:00.683224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:04.870 [2024-09-28 01:32:00.683231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:19:04.870 [2024-09-28 01:32:00.683237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.870 [2024-09-28 01:32:00.683312] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:04.870 [2024-09-28 01:32:00.683325] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:04.870 [2024-09-28 01:32:00.683333] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:04.870 [2024-09-28 01:32:00.683339] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:04.870 [2024-09-28 01:32:00.683347] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:04.870 [2024-09-28 01:32:00.683352] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:04.870 [2024-09-28 01:32:00.683359] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:04.870 [2024-09-28 01:32:00.683365] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:04.870 [2024-09-28 01:32:00.683371] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:04.870 [2024-09-28 01:32:00.683376] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:04.870 [2024-09-28 01:32:00.683383] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:04.870 [2024-09-28 01:32:00.683388] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:04.870 [2024-09-28 01:32:00.683394] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:04.870 [2024-09-28 01:32:00.683399] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:04.870 [2024-09-28 01:32:00.683406] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:04.870 [2024-09-28 01:32:00.683413] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:04.870 [2024-09-28 01:32:00.683421] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:04.870 [2024-09-28 01:32:00.683426] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:04.870 [2024-09-28 01:32:00.683432] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:04.870 [2024-09-28 01:32:00.683437] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:04.870 [2024-09-28 01:32:00.683444] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:04.870 [2024-09-28 01:32:00.683449] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:04.870 [2024-09-28 01:32:00.683456] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:04.870 [2024-09-28 01:32:00.683461] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:04.870 [2024-09-28 01:32:00.683467] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:04.870 [2024-09-28 01:32:00.683472] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:04.870 [2024-09-28 01:32:00.683478] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:04.870 [2024-09-28 01:32:00.683483] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:04.870 [2024-09-28 01:32:00.683489] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:04.870 [2024-09-28 01:32:00.683495] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:04.870 [2024-09-28 01:32:00.683501] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:04.870 [2024-09-28 01:32:00.683506] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:04.870 [2024-09-28 01:32:00.683513] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:04.870 [2024-09-28 01:32:00.683518] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:04.870 [2024-09-28 01:32:00.683524] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:04.870 [2024-09-28 01:32:00.683529] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:04.870 [2024-09-28 01:32:00.683535] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:04.870 [2024-09-28 01:32:00.683540] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:04.870 [2024-09-28 01:32:00.683546] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:19:04.870 [2024-09-28 01:32:00.683551] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:04.870 [2024-09-28 01:32:00.683558] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:04.871 [2024-09-28 01:32:00.683563] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:04.871 [2024-09-28 01:32:00.683569] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:04.871 [2024-09-28 01:32:00.683574] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:04.871 [2024-09-28 01:32:00.683581] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:04.871 [2024-09-28 01:32:00.683588] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:04.871 [2024-09-28 01:32:00.683595] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:04.871 [2024-09-28 01:32:00.683602] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:04.871 [2024-09-28 01:32:00.683609] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:04.871 [2024-09-28 01:32:00.683615] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:04.871 [2024-09-28 01:32:00.683622] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:04.871 [2024-09-28 01:32:00.683627] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:04.871 [2024-09-28 01:32:00.683633] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:04.871 [2024-09-28 01:32:00.683641] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:04.871 [2024-09-28 01:32:00.683650] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:04.871 [2024-09-28 01:32:00.683656] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:04.871 [2024-09-28 01:32:00.683663] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:04.871 [2024-09-28 01:32:00.683669] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:04.871 [2024-09-28 01:32:00.683675] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:04.871 [2024-09-28 01:32:00.683681] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:04.871 [2024-09-28 01:32:00.683687] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:04.871 [2024-09-28 01:32:00.683692] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:04.871 [2024-09-28 01:32:00.683700] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:04.871 [2024-09-28 01:32:00.683705] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:04.871 [2024-09-28 01:32:00.683713] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:04.871 [2024-09-28 01:32:00.683718] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:04.871 [2024-09-28 01:32:00.683724] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:04.871 [2024-09-28 01:32:00.683730] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:04.871 [2024-09-28 01:32:00.683737] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:04.871 [2024-09-28 01:32:00.683742] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:04.871 [2024-09-28 01:32:00.683750] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:04.871 [2024-09-28 01:32:00.683756] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:04.871 [2024-09-28 01:32:00.683763] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:04.871 [2024-09-28 01:32:00.683769] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:04.871 [2024-09-28 01:32:00.683776] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:04.871 [2024-09-28 01:32:00.683782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.871 [2024-09-28 01:32:00.683788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:04.871 [2024-09-28 01:32:00.683794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.523 ms 00:19:04.871 [2024-09-28 01:32:00.683800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.871 [2024-09-28 01:32:00.683830] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:19:04.871 [2024-09-28 01:32:00.683839] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:19:06.772 [2024-09-28 01:32:02.589864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.772 [2024-09-28 01:32:02.589922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:19:06.772 [2024-09-28 01:32:02.589937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1906.024 ms 00:19:06.772 [2024-09-28 01:32:02.589948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.772 [2024-09-28 01:32:02.614965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.772 [2024-09-28 01:32:02.615012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:06.772 [2024-09-28 01:32:02.615024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.754 ms 00:19:06.772 [2024-09-28 01:32:02.615033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.772 [2024-09-28 01:32:02.615158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.773 [2024-09-28 01:32:02.615171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:06.773 [2024-09-28 01:32:02.615180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:19:06.773 [2024-09-28 01:32:02.615203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.773 [2024-09-28 01:32:02.653026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.773 [2024-09-28 01:32:02.653074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:06.773 [2024-09-28 01:32:02.653090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.769 ms 00:19:06.773 [2024-09-28 01:32:02.653102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.773 [2024-09-28 01:32:02.653143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.773 [2024-09-28 01:32:02.653155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:06.773 [2024-09-28 01:32:02.653164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:06.773 [2024-09-28 01:32:02.653180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.773 [2024-09-28 01:32:02.653534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.773 [2024-09-28 01:32:02.653555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:06.773 [2024-09-28 01:32:02.653566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.287 ms 00:19:06.773 [2024-09-28 01:32:02.653578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.773 [2024-09-28 01:32:02.653692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.773 [2024-09-28 01:32:02.653713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:06.773 [2024-09-28 01:32:02.653722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:19:06.773 [2024-09-28 01:32:02.653734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.773 [2024-09-28 01:32:02.667432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.773 [2024-09-28 01:32:02.667463] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:06.773 [2024-09-28 01:32:02.667473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.678 ms 00:19:06.773 [2024-09-28 01:32:02.667483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.773 [2024-09-28 01:32:02.678620] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:19:06.773 [2024-09-28 01:32:02.681152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.773 [2024-09-28 01:32:02.681182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:06.773 [2024-09-28 01:32:02.681206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.599 ms 00:19:06.773 [2024-09-28 01:32:02.681214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.031 [2024-09-28 01:32:02.739587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.031 [2024-09-28 01:32:02.739637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:19:07.031 [2024-09-28 01:32:02.739655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 58.344 ms 00:19:07.031 [2024-09-28 01:32:02.739663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.031 [2024-09-28 01:32:02.739847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.031 [2024-09-28 01:32:02.739857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:07.031 [2024-09-28 01:32:02.739870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.139 ms 00:19:07.031 [2024-09-28 01:32:02.739878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.031 [2024-09-28 01:32:02.762861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.031 [2024-09-28 01:32:02.762899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:19:07.031 [2024-09-28 01:32:02.762911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.935 ms 00:19:07.031 [2024-09-28 01:32:02.762919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.031 [2024-09-28 01:32:02.785185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.031 [2024-09-28 01:32:02.785224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:19:07.031 [2024-09-28 01:32:02.785237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.229 ms 00:19:07.031 [2024-09-28 01:32:02.785244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.031 [2024-09-28 01:32:02.785799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.031 [2024-09-28 01:32:02.785820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:07.031 [2024-09-28 01:32:02.785830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.521 ms 00:19:07.032 [2024-09-28 01:32:02.785837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.032 [2024-09-28 01:32:02.850901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.032 [2024-09-28 01:32:02.850940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:19:07.032 [2024-09-28 01:32:02.850957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.029 ms 00:19:07.032 [2024-09-28 01:32:02.850967] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.032 [2024-09-28 01:32:02.874711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.032 [2024-09-28 01:32:02.874747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:19:07.032 [2024-09-28 01:32:02.874760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.674 ms 00:19:07.032 [2024-09-28 01:32:02.874768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.032 [2024-09-28 01:32:02.897404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.032 [2024-09-28 01:32:02.897435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:19:07.032 [2024-09-28 01:32:02.897447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.599 ms 00:19:07.032 [2024-09-28 01:32:02.897454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.032 [2024-09-28 01:32:02.920127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.032 [2024-09-28 01:32:02.920162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:07.032 [2024-09-28 01:32:02.920175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.636 ms 00:19:07.032 [2024-09-28 01:32:02.920182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.032 [2024-09-28 01:32:02.920227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.032 [2024-09-28 01:32:02.920237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:07.032 [2024-09-28 01:32:02.920251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:07.032 [2024-09-28 01:32:02.920258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.032 [2024-09-28 01:32:02.920332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.032 [2024-09-28 01:32:02.920342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:07.032 [2024-09-28 01:32:02.920352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:19:07.032 [2024-09-28 01:32:02.920359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.032 [2024-09-28 01:32:02.921225] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2247.561 ms, result 0 00:19:07.032 { 00:19:07.032 "name": "ftl0", 00:19:07.032 "uuid": "0e3f423b-615b-429f-bbea-6f306275c43f" 00:19:07.032 } 00:19:07.032 01:32:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:19:07.032 01:32:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:19:07.290 01:32:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:19:07.290 01:32:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:19:07.290 01:32:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:19:07.548 /dev/nbd0 00:19:07.548 01:32:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:19:07.548 01:32:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:19:07.548 01:32:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@869 -- # local i 00:19:07.548 01:32:03 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@871 -- # (( i = 1 )) 00:19:07.548 01:32:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:19:07.548 01:32:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:19:07.548 01:32:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # break 00:19:07.548 01:32:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:19:07.549 01:32:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:19:07.549 01:32:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:19:07.549 1+0 records in 00:19:07.549 1+0 records out 00:19:07.549 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000278055 s, 14.7 MB/s 00:19:07.549 01:32:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:19:07.549 01:32:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # size=4096 00:19:07.549 01:32:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:19:07.549 01:32:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:19:07.549 01:32:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # return 0 00:19:07.549 01:32:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:19:07.549 [2024-09-28 01:32:03.441991] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:19:07.549 [2024-09-28 01:32:03.442101] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76040 ] 00:19:07.807 [2024-09-28 01:32:03.590020] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:08.065 [2024-09-28 01:32:03.765207] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:19:13.065  Copying: 196/1024 [MB] (196 MBps) Copying: 422/1024 [MB] (226 MBps) Copying: 682/1024 [MB] (260 MBps) Copying: 939/1024 [MB] (256 MBps) Copying: 1024/1024 [MB] (average 236 MBps) 00:19:13.065 00:19:13.065 01:32:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:19:14.964 01:32:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:19:15.222 [2024-09-28 01:32:10.918281] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
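The xtrace above spells out the harness's waitfornbd helper: a bounded retry loop until the kernel lists nbd0 in /proc/partitions, then a direct-I/O read of a single 4 KiB block to prove the device actually serves data before the test writes through it. A minimal reconstruction from the trace; the retry sleep is an assumption (it does not show in the xtrace as captured here), and the scratch path is shortened from the repo path in the log:

waitfornbd() {
    local nbd_name=$1 i size
    for ((i = 1; i <= 20; i++)); do
        # Wait for the kernel to register the nbd device.
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1  # assumed; not visible in the xtrace
    done
    for ((i = 1; i <= 20; i++)); do
        # Prove the device serves reads: one 4 KiB block, O_DIRECT.
        if dd "if=/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct 2> /dev/null; then
            size=$(stat -c %s /tmp/nbdtest)
            rm -f /tmp/nbdtest
            [[ $size != 0 ]] && return 0
        fi
        sleep 0.1  # assumed
    done
    return 1
}

In the log the probe passes on the first try: one 4096-byte record in and out, stat reports 4096, and the helper returns 0, after which @75 starts filling testfile with 1 GiB of urandom (262144 blocks of 4 KiB).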
00:19:15.222 [2024-09-28 01:32:10.918397] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76128 ] 00:19:15.222 [2024-09-28 01:32:11.066628] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:15.479 [2024-09-28 01:32:11.242291] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:19:49.678  Copying: 29/1024 [MB] (29 MBps) Copying: 58/1024 [MB] (28 MBps) Copying: 86/1024 [MB] (27 MBps) Copying: 115/1024 [MB] (28 MBps) Copying: 145/1024 [MB] (29 MBps) Copying: 173/1024 [MB] (28 MBps) Copying: 202/1024 [MB] (29 MBps) Copying: 232/1024 [MB] (29 MBps) Copying: 263/1024 [MB] (31 MBps) Copying: 291/1024 [MB] (27 MBps) Copying: 322/1024 [MB] (30 MBps) Copying: 352/1024 [MB] (30 MBps) Copying: 382/1024 [MB] (29 MBps) Copying: 410/1024 [MB] (28 MBps) Copying: 440/1024 [MB] (30 MBps) Copying: 471/1024 [MB] (31 MBps) Copying: 500/1024 [MB] (28 MBps) Copying: 530/1024 [MB] (29 MBps) Copying: 559/1024 [MB] (29 MBps) Copying: 589/1024 [MB] (29 MBps) Copying: 619/1024 [MB] (29 MBps) Copying: 651/1024 [MB] (32 MBps) Copying: 680/1024 [MB] (29 MBps) Copying: 712/1024 [MB] (31 MBps) Copying: 742/1024 [MB] (30 MBps) Copying: 773/1024 [MB] (31 MBps) Copying: 804/1024 [MB] (30 MBps) Copying: 835/1024 [MB] (30 MBps) Copying: 871/1024 [MB] (36 MBps) Copying: 907/1024 [MB] (36 MBps) Copying: 944/1024 [MB] (36 MBps) Copying: 980/1024 [MB] (36 MBps) Copying: 1016/1024 [MB] (36 MBps) Copying: 1024/1024 [MB] (average 30 MBps) 00:19:49.678 00:19:49.678 01:32:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:19:49.678 01:32:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:19:49.678 01:32:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:19:49.937 [2024-09-28 01:32:45.686537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.937 [2024-09-28 01:32:45.686583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:49.937 [2024-09-28 01:32:45.686595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:49.937 [2024-09-28 01:32:45.686603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.937 [2024-09-28 01:32:45.686622] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:49.937 [2024-09-28 01:32:45.688667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.937 [2024-09-28 01:32:45.688694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:49.937 [2024-09-28 01:32:45.688704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.031 ms 00:19:49.937 [2024-09-28 01:32:45.688711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.937 [2024-09-28 01:32:45.690547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.937 [2024-09-28 01:32:45.690576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:49.937 [2024-09-28 01:32:45.690585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.811 ms 00:19:49.937 [2024-09-28 01:32:45.690591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:19:49.937 [2024-09-28 01:32:45.702568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.937 [2024-09-28 01:32:45.702596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:49.937 [2024-09-28 01:32:45.702607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.960 ms 00:19:49.937 [2024-09-28 01:32:45.702613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.937 [2024-09-28 01:32:45.707352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.937 [2024-09-28 01:32:45.707375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:49.937 [2024-09-28 01:32:45.707386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.710 ms 00:19:49.937 [2024-09-28 01:32:45.707393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.937 [2024-09-28 01:32:45.725713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.937 [2024-09-28 01:32:45.725738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:49.937 [2024-09-28 01:32:45.725747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.275 ms 00:19:49.937 [2024-09-28 01:32:45.725754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.937 [2024-09-28 01:32:45.737720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.937 [2024-09-28 01:32:45.737746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:49.937 [2024-09-28 01:32:45.737756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.935 ms 00:19:49.937 [2024-09-28 01:32:45.737763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.937 [2024-09-28 01:32:45.737877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.937 [2024-09-28 01:32:45.737885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:49.937 [2024-09-28 01:32:45.737896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:19:49.937 [2024-09-28 01:32:45.737902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.937 [2024-09-28 01:32:45.755232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.937 [2024-09-28 01:32:45.755258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:49.937 [2024-09-28 01:32:45.755273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.314 ms 00:19:49.938 [2024-09-28 01:32:45.755279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.938 [2024-09-28 01:32:45.772605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.938 [2024-09-28 01:32:45.772641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:49.938 [2024-09-28 01:32:45.772650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.297 ms 00:19:49.938 [2024-09-28 01:32:45.772656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.938 [2024-09-28 01:32:45.789327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.938 [2024-09-28 01:32:45.789351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:49.938 [2024-09-28 01:32:45.789360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.640 ms 00:19:49.938 [2024-09-28 
01:32:45.789365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.938 [2024-09-28 01:32:45.806256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.938 [2024-09-28 01:32:45.806281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:49.938 [2024-09-28 01:32:45.806290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.836 ms 00:19:49.938 [2024-09-28 01:32:45.806296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.938 [2024-09-28 01:32:45.806323] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:49.938 [2024-09-28 01:32:45.806334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:49.938 [2024-09-28 01:32:45.806342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:49.938 [2024-09-28 01:32:45.806348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:49.938 [2024-09-28 01:32:45.806356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:49.938 [2024-09-28 01:32:45.806362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:49.938 [2024-09-28 01:32:45.806369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:49.938 [2024-09-28 01:32:45.806375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:49.938 [2024-09-28 01:32:45.806383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:49.938 [2024-09-28 01:32:45.806389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:49.938 [2024-09-28 01:32:45.806396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:49.938 [2024-09-28 01:32:45.806401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:49.938 [2024-09-28 01:32:45.806408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:49.938 [2024-09-28 01:32:45.806414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:49.938 [2024-09-28 01:32:45.806421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:49.938 [2024-09-28 01:32:45.806426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:49.938 [2024-09-28 01:32:45.806433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:49.938 [2024-09-28 01:32:45.806438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:49.938 [2024-09-28 01:32:45.806445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:49.938 [2024-09-28 01:32:45.806450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:49.938 [2024-09-28 01:32:45.806457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:49.938 [2024-09-28 01:32:45.806463] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:49.938 [2024-09-28 01:32:45.806471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:49.938 [2024-09-28 01:32:45.806477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:49.938 [2024-09-28 01:32:45.806485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:49.938 [2024-09-28 01:32:45.806491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:49.938 [2024-09-28 01:32:45.806497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:49.938 [2024-09-28 01:32:45.806504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:49.938 [2024-09-28 01:32:45.806511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:49.938 [2024-09-28 01:32:45.806517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:49.938 [2024-09-28 01:32:45.806525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:49.938 [2024-09-28 01:32:45.806530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:49.938 [2024-09-28 01:32:45.806537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:49.938 [2024-09-28 01:32:45.806543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:49.938 [2024-09-28 01:32:45.806550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:49.938 [2024-09-28 01:32:45.806555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:49.938 [2024-09-28 01:32:45.806562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:49.938 [2024-09-28 01:32:45.806567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:49.938 [2024-09-28 01:32:45.806574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:49.938 [2024-09-28 01:32:45.806579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:49.938 [2024-09-28 01:32:45.806587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:49.938 [2024-09-28 01:32:45.806593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:49.938 [2024-09-28 01:32:45.806600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:49.938 [2024-09-28 01:32:45.806605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:49.938 [2024-09-28 01:32:45.806612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:49.938 [2024-09-28 01:32:45.806617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:49.938 
[2024-09-28 01:32:45.806624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:49.938 [2024-09-28 01:32:45.806629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:49.938 [2024-09-28 01:32:45.806637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:49.938 [2024-09-28 01:32:45.806642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:49.938 [2024-09-28 01:32:45.806649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:49.938 [2024-09-28 01:32:45.806654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:49.938 [2024-09-28 01:32:45.806661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:49.938 [2024-09-28 01:32:45.806672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:49.938 [2024-09-28 01:32:45.806678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:49.938 [2024-09-28 01:32:45.806684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:49.938 [2024-09-28 01:32:45.806692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:49.938 [2024-09-28 01:32:45.806698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:49.938 [2024-09-28 01:32:45.806705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:49.938 [2024-09-28 01:32:45.806710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:49.938 [2024-09-28 01:32:45.806721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:49.938 [2024-09-28 01:32:45.806728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:49.938 [2024-09-28 01:32:45.806736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:49.938 [2024-09-28 01:32:45.806741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:49.938 [2024-09-28 01:32:45.806748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:49.938 [2024-09-28 01:32:45.806753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:49.938 [2024-09-28 01:32:45.806760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:49.938 [2024-09-28 01:32:45.806766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:49.938 [2024-09-28 01:32:45.806772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:49.938 [2024-09-28 01:32:45.806778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:49.938 [2024-09-28 01:32:45.806785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 
state: free 00:19:49.938 [2024-09-28 01:32:45.806790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:49.938 [2024-09-28 01:32:45.806800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:49.938 [2024-09-28 01:32:45.806806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:49.938 [2024-09-28 01:32:45.806812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:49.938 [2024-09-28 01:32:45.806818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:49.938 [2024-09-28 01:32:45.806824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:49.938 [2024-09-28 01:32:45.806830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:49.938 [2024-09-28 01:32:45.806837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:49.939 [2024-09-28 01:32:45.806842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:49.939 [2024-09-28 01:32:45.806849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:49.939 [2024-09-28 01:32:45.806855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:49.939 [2024-09-28 01:32:45.806862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:49.939 [2024-09-28 01:32:45.806867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:49.939 [2024-09-28 01:32:45.806884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:49.939 [2024-09-28 01:32:45.806890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:49.939 [2024-09-28 01:32:45.806897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:49.939 [2024-09-28 01:32:45.806903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:49.939 [2024-09-28 01:32:45.806911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:49.939 [2024-09-28 01:32:45.806917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:49.939 [2024-09-28 01:32:45.806923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:49.939 [2024-09-28 01:32:45.806929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:49.939 [2024-09-28 01:32:45.806937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:49.939 [2024-09-28 01:32:45.806944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:49.939 [2024-09-28 01:32:45.806951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:49.939 [2024-09-28 01:32:45.806957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 
0 / 261120 wr_cnt: 0 state: free 00:19:49.939 [2024-09-28 01:32:45.806964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:49.939 [2024-09-28 01:32:45.806969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:49.939 [2024-09-28 01:32:45.806976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:49.939 [2024-09-28 01:32:45.806982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:49.939 [2024-09-28 01:32:45.806990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:49.939 [2024-09-28 01:32:45.807002] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:49.939 [2024-09-28 01:32:45.807012] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0e3f423b-615b-429f-bbea-6f306275c43f 00:19:49.939 [2024-09-28 01:32:45.807017] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:49.939 [2024-09-28 01:32:45.807025] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:49.939 [2024-09-28 01:32:45.807031] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:49.939 [2024-09-28 01:32:45.807038] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:49.939 [2024-09-28 01:32:45.807043] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:49.939 [2024-09-28 01:32:45.807050] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:49.939 [2024-09-28 01:32:45.807055] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:49.939 [2024-09-28 01:32:45.807061] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:49.939 [2024-09-28 01:32:45.807066] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:49.939 [2024-09-28 01:32:45.807073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.939 [2024-09-28 01:32:45.807078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:49.939 [2024-09-28 01:32:45.807086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.751 ms 00:19:49.939 [2024-09-28 01:32:45.807092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.939 [2024-09-28 01:32:45.816642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.939 [2024-09-28 01:32:45.816665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:49.939 [2024-09-28 01:32:45.816674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.524 ms 00:19:49.939 [2024-09-28 01:32:45.816679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.939 [2024-09-28 01:32:45.816943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.939 [2024-09-28 01:32:45.816955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:49.939 [2024-09-28 01:32:45.816963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.244 ms 00:19:49.939 [2024-09-28 01:32:45.816969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.939 [2024-09-28 01:32:45.845739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:49.939 [2024-09-28 01:32:45.845765] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:49.939 [2024-09-28 01:32:45.845775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:49.939 [2024-09-28 01:32:45.845781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.939 [2024-09-28 01:32:45.845829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:49.939 [2024-09-28 01:32:45.845835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:49.939 [2024-09-28 01:32:45.845842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:49.939 [2024-09-28 01:32:45.845849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.939 [2024-09-28 01:32:45.845927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:49.939 [2024-09-28 01:32:45.845934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:49.939 [2024-09-28 01:32:45.845943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:49.939 [2024-09-28 01:32:45.845948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.939 [2024-09-28 01:32:45.845964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:49.939 [2024-09-28 01:32:45.845970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:49.939 [2024-09-28 01:32:45.845977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:49.939 [2024-09-28 01:32:45.845982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.196 [2024-09-28 01:32:45.905369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:50.196 [2024-09-28 01:32:45.905408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:50.196 [2024-09-28 01:32:45.905419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:50.196 [2024-09-28 01:32:45.905426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.196 [2024-09-28 01:32:45.953152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:50.196 [2024-09-28 01:32:45.953189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:50.196 [2024-09-28 01:32:45.953206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:50.197 [2024-09-28 01:32:45.953214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.197 [2024-09-28 01:32:45.953276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:50.197 [2024-09-28 01:32:45.953284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:50.197 [2024-09-28 01:32:45.953292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:50.197 [2024-09-28 01:32:45.953297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.197 [2024-09-28 01:32:45.953349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:50.197 [2024-09-28 01:32:45.953357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:50.197 [2024-09-28 01:32:45.953365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:50.197 [2024-09-28 01:32:45.953371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.197 [2024-09-28 01:32:45.953444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:19:50.197 [2024-09-28 01:32:45.953451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:50.197 [2024-09-28 01:32:45.953459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:50.197 [2024-09-28 01:32:45.953466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.197 [2024-09-28 01:32:45.953491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:50.197 [2024-09-28 01:32:45.953498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:50.197 [2024-09-28 01:32:45.953505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:50.197 [2024-09-28 01:32:45.953510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.197 [2024-09-28 01:32:45.953541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:50.197 [2024-09-28 01:32:45.953548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:50.197 [2024-09-28 01:32:45.953555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:50.197 [2024-09-28 01:32:45.953560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.197 [2024-09-28 01:32:45.953597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:50.197 [2024-09-28 01:32:45.953604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:50.197 [2024-09-28 01:32:45.953611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:50.197 [2024-09-28 01:32:45.953617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.197 [2024-09-28 01:32:45.953718] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 267.155 ms, result 0 00:19:50.197 true 00:19:50.197 01:32:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 75919 00:19:50.197 01:32:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid75919 00:19:50.197 01:32:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:19:50.197 [2024-09-28 01:32:46.043023] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
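This is the step the test is named for. Note the ordering above: ftl0 itself was unloaded cleanly first (the "Set FTL clean state" and "Management process finished, name 'FTL shutdown' ... result 0" entries), and only then does @83 SIGKILL the spdk_tgt, pid 75919, so no exit trap or graceful teardown of the remaining stack ever runs; @84 then removes the shm trace file the dead process could no longer clean up. A sketch of the pattern (the helper name is illustrative, not the harness's):

kill_tgt_dirty() {
    local pid=$1
    kill -9 "$pid"                           # SIGKILL: no handler runs, no teardown
    rm -f "/dev/shm/spdk_tgt_trace.pid$pid"  # presumably removed by an exit trap on a graceful exit
}

kill_tgt_dirty 75919

The consequence shows up in the restart that follows: the loader cannot take a clean fast-start path and falls back to recovery ("Performing recovery on blobstore", and "SHM: clean 0, shm_clean 0" a little further down), which is the scenario a dirty-shutdown test is built to exercise, with the data written before the kill still expected to be intact afterwards.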
00:19:50.197 [2024-09-28 01:32:46.043137] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76496 ] 00:19:50.455 [2024-09-28 01:32:46.191200] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:50.455 [2024-09-28 01:32:46.333886] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:55.342  Copying: 253/1024 [MB] (253 MBps) Copying: 508/1024 [MB] (254 MBps) Copying: 761/1024 [MB] (252 MBps) Copying: 1013/1024 [MB] (251 MBps) Copying: 1024/1024 [MB] (average 253 MBps) 00:19:55.342 00:19:55.342 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 75919 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:19:55.342 01:32:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:55.342 [2024-09-28 01:32:51.256404] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:19:55.342 [2024-09-28 01:32:51.256518] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76555 ] 00:19:55.599 [2024-09-28 01:32:51.403672] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:55.857 [2024-09-28 01:32:51.546317] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:55.857 [2024-09-28 01:32:51.751213] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:55.857 [2024-09-28 01:32:51.751256] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:56.116 [2024-09-28 01:32:51.815037] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:19:56.116 [2024-09-28 01:32:51.815310] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:19:56.116 [2024-09-28 01:32:51.815514] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:19:56.116 [2024-09-28 01:32:51.998277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.116 [2024-09-28 01:32:51.998313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:56.116 [2024-09-28 01:32:51.998323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:56.116 [2024-09-28 01:32:51.998329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.116 [2024-09-28 01:32:51.998364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.116 [2024-09-28 01:32:51.998372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:56.116 [2024-09-28 01:32:51.998378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:19:56.116 [2024-09-28 01:32:51.998386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.116 [2024-09-28 01:32:51.998398] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:56.116 [2024-09-28 01:32:51.998933] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 
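From here the write resumes with no target process at all: @88's spdk_dd carries the whole bdev stack itself. The two "unable to find bdev with name: nvc0n1" notices and the blobstore recovery above are that stack being rebuilt from the saved config after the kill. A commented restatement of the @88 invocation (paths as in the log; the interpretation in the comments is mine, and the 4 KiB block size is inferred from ftl0 since no --bs is given):

SPDK_DIR=/home/vagrant/spdk_repo/spdk
"$SPDK_DIR/build/bin/spdk_dd" \
    --if="$SPDK_DIR/test/ftl/testfile2" \
    --ob=ftl0 \
    --count=262144 \
    --seek=262144 \
    --json="$SPDK_DIR/test/ftl/config/ftl.json"
# --json replays the bdev subsystem config captured via save_subsystem_config
#   around @64-@66, so spdk_dd recreates nvc0n1p0 and ftl0 on its own;
# --ob writes to the ftl0 bdev rather than to a file;
# --seek skips 262144 output blocks, i.e. the 1 GiB written before the kill,
#   so the second GiB lands behind it and the two halves stay separable.

The "FTL layout setup mode 0" entry a few notices below is consistent with loading the existing layout from the superblock, versus "mode 1" when the device was first created at @61.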
00:19:56.116 [2024-09-28 01:32:51.998946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.116 [2024-09-28 01:32:51.998952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:56.116 [2024-09-28 01:32:51.998958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.551 ms 00:19:56.116 [2024-09-28 01:32:51.998964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.116 [2024-09-28 01:32:51.999971] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:56.116 [2024-09-28 01:32:52.009497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.116 [2024-09-28 01:32:52.009521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:56.116 [2024-09-28 01:32:52.009529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.526 ms 00:19:56.116 [2024-09-28 01:32:52.009536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.116 [2024-09-28 01:32:52.009576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.116 [2024-09-28 01:32:52.009585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:56.116 [2024-09-28 01:32:52.009592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:19:56.116 [2024-09-28 01:32:52.009598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.116 [2024-09-28 01:32:52.013863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.116 [2024-09-28 01:32:52.013883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:56.116 [2024-09-28 01:32:52.013890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.227 ms 00:19:56.116 [2024-09-28 01:32:52.013896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.116 [2024-09-28 01:32:52.013950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.116 [2024-09-28 01:32:52.013957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:56.116 [2024-09-28 01:32:52.013963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:19:56.116 [2024-09-28 01:32:52.013969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.116 [2024-09-28 01:32:52.014006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.116 [2024-09-28 01:32:52.014013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:56.116 [2024-09-28 01:32:52.014019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:56.116 [2024-09-28 01:32:52.014024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.116 [2024-09-28 01:32:52.014037] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:56.116 [2024-09-28 01:32:52.016623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.116 [2024-09-28 01:32:52.016642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:56.116 [2024-09-28 01:32:52.016650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.589 ms 00:19:56.116 [2024-09-28 01:32:52.016655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.116 [2024-09-28 01:32:52.016682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.116 
[2024-09-28 01:32:52.016689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:56.116 [2024-09-28 01:32:52.016695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:56.116 [2024-09-28 01:32:52.016701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.116 [2024-09-28 01:32:52.016716] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:56.116 [2024-09-28 01:32:52.016731] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:56.116 [2024-09-28 01:32:52.016757] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:56.116 [2024-09-28 01:32:52.016769] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:56.116 [2024-09-28 01:32:52.016853] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:56.116 [2024-09-28 01:32:52.016862] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:56.116 [2024-09-28 01:32:52.016870] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:56.116 [2024-09-28 01:32:52.016878] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:56.116 [2024-09-28 01:32:52.016884] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:56.116 [2024-09-28 01:32:52.016890] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:56.116 [2024-09-28 01:32:52.016895] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:56.116 [2024-09-28 01:32:52.016901] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:56.116 [2024-09-28 01:32:52.016906] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:56.117 [2024-09-28 01:32:52.016912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.117 [2024-09-28 01:32:52.016920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:56.117 [2024-09-28 01:32:52.016926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.197 ms 00:19:56.117 [2024-09-28 01:32:52.016931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.117 [2024-09-28 01:32:52.016994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.117 [2024-09-28 01:32:52.017000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:56.117 [2024-09-28 01:32:52.017006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:19:56.117 [2024-09-28 01:32:52.017011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.117 [2024-09-28 01:32:52.017086] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:56.117 [2024-09-28 01:32:52.017094] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:56.117 [2024-09-28 01:32:52.017102] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:56.117 [2024-09-28 01:32:52.017108] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:56.117 [2024-09-28 01:32:52.017114] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:56.117 [2024-09-28 01:32:52.017119] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:56.117 [2024-09-28 01:32:52.017125] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:56.117 [2024-09-28 01:32:52.017131] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:56.117 [2024-09-28 01:32:52.017137] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:56.117 [2024-09-28 01:32:52.017146] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:56.117 [2024-09-28 01:32:52.017151] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:56.117 [2024-09-28 01:32:52.017156] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:56.117 [2024-09-28 01:32:52.017163] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:56.117 [2024-09-28 01:32:52.017169] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:56.117 [2024-09-28 01:32:52.017174] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:56.117 [2024-09-28 01:32:52.017180] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:56.117 [2024-09-28 01:32:52.017185] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:56.117 [2024-09-28 01:32:52.017199] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:56.117 [2024-09-28 01:32:52.017205] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:56.117 [2024-09-28 01:32:52.017210] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:56.117 [2024-09-28 01:32:52.017216] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:56.117 [2024-09-28 01:32:52.017220] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:56.117 [2024-09-28 01:32:52.017225] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:56.117 [2024-09-28 01:32:52.017231] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:56.117 [2024-09-28 01:32:52.017236] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:56.117 [2024-09-28 01:32:52.017241] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:56.117 [2024-09-28 01:32:52.017246] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:56.117 [2024-09-28 01:32:52.017251] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:56.117 [2024-09-28 01:32:52.017256] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:56.117 [2024-09-28 01:32:52.017260] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:56.117 [2024-09-28 01:32:52.017266] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:56.117 [2024-09-28 01:32:52.017271] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:56.117 [2024-09-28 01:32:52.017276] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:56.117 [2024-09-28 01:32:52.017281] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:56.117 [2024-09-28 01:32:52.017286] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:56.117 [2024-09-28 01:32:52.017291] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:56.117 [2024-09-28 
01:32:52.017295] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:56.117 [2024-09-28 01:32:52.017300] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:56.117 [2024-09-28 01:32:52.017305] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:19:56.117 [2024-09-28 01:32:52.017310] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:56.117 [2024-09-28 01:32:52.017316] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:56.117 [2024-09-28 01:32:52.017321] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:56.117 [2024-09-28 01:32:52.017326] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:56.117 [2024-09-28 01:32:52.017331] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:56.117 [2024-09-28 01:32:52.017338] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:56.117 [2024-09-28 01:32:52.017344] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:56.117 [2024-09-28 01:32:52.017349] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:56.117 [2024-09-28 01:32:52.017355] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:56.117 [2024-09-28 01:32:52.017360] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:56.117 [2024-09-28 01:32:52.017366] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:56.117 [2024-09-28 01:32:52.017371] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:56.117 [2024-09-28 01:32:52.017375] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:56.117 [2024-09-28 01:32:52.017380] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:56.117 [2024-09-28 01:32:52.017386] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:56.117 [2024-09-28 01:32:52.017393] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:56.117 [2024-09-28 01:32:52.017399] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:56.117 [2024-09-28 01:32:52.017404] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:56.117 [2024-09-28 01:32:52.017410] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:56.117 [2024-09-28 01:32:52.017415] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:56.117 [2024-09-28 01:32:52.017420] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:56.117 [2024-09-28 01:32:52.017426] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:56.117 [2024-09-28 01:32:52.017431] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:56.117 [2024-09-28 01:32:52.017436] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:56.117 [2024-09-28 01:32:52.017441] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:56.117 [2024-09-28 01:32:52.017447] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:56.117 [2024-09-28 01:32:52.017452] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:56.117 [2024-09-28 01:32:52.017457] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:56.117 [2024-09-28 01:32:52.017462] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:56.117 [2024-09-28 01:32:52.017467] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:56.117 [2024-09-28 01:32:52.017473] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:56.117 [2024-09-28 01:32:52.017479] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:56.117 [2024-09-28 01:32:52.017487] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:56.117 [2024-09-28 01:32:52.017492] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:56.117 [2024-09-28 01:32:52.017498] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:56.117 [2024-09-28 01:32:52.017503] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:56.117 [2024-09-28 01:32:52.017509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.117 [2024-09-28 01:32:52.017518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:56.117 [2024-09-28 01:32:52.017524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.475 ms 00:19:56.117 [2024-09-28 01:32:52.017530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.375 [2024-09-28 01:32:52.053186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.375 [2024-09-28 01:32:52.053244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:56.375 [2024-09-28 01:32:52.053260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.622 ms 00:19:56.375 [2024-09-28 01:32:52.053270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.375 [2024-09-28 01:32:52.053379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.375 [2024-09-28 01:32:52.053391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:56.375 [2024-09-28 01:32:52.053401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:19:56.375 [2024-09-28 01:32:52.053410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.375 [2024-09-28 01:32:52.077381] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.375 [2024-09-28 01:32:52.077404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:56.375 [2024-09-28 01:32:52.077412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.901 ms 00:19:56.375 [2024-09-28 01:32:52.077418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.375 [2024-09-28 01:32:52.077442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.375 [2024-09-28 01:32:52.077449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:56.375 [2024-09-28 01:32:52.077455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:19:56.375 [2024-09-28 01:32:52.077460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.375 [2024-09-28 01:32:52.077756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.375 [2024-09-28 01:32:52.077775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:56.375 [2024-09-28 01:32:52.077782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.257 ms 00:19:56.375 [2024-09-28 01:32:52.077788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.375 [2024-09-28 01:32:52.077887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.375 [2024-09-28 01:32:52.077893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:56.375 [2024-09-28 01:32:52.077900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:19:56.375 [2024-09-28 01:32:52.077905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.375 [2024-09-28 01:32:52.087779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.376 [2024-09-28 01:32:52.087801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:56.376 [2024-09-28 01:32:52.087808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.859 ms 00:19:56.376 [2024-09-28 01:32:52.087814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.376 [2024-09-28 01:32:52.097398] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:19:56.376 [2024-09-28 01:32:52.097424] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:56.376 [2024-09-28 01:32:52.097433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.376 [2024-09-28 01:32:52.097439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:56.376 [2024-09-28 01:32:52.097447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.548 ms 00:19:56.376 [2024-09-28 01:32:52.097452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.376 [2024-09-28 01:32:52.115935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.376 [2024-09-28 01:32:52.115958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:56.376 [2024-09-28 01:32:52.115970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.453 ms 00:19:56.376 [2024-09-28 01:32:52.115977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.376 [2024-09-28 01:32:52.124750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.376 
[2024-09-28 01:32:52.124771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:56.376 [2024-09-28 01:32:52.124778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.745 ms 00:19:56.376 [2024-09-28 01:32:52.124784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.376 [2024-09-28 01:32:52.133411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.376 [2024-09-28 01:32:52.133432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:56.376 [2024-09-28 01:32:52.133439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.602 ms 00:19:56.376 [2024-09-28 01:32:52.133444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.376 [2024-09-28 01:32:52.133898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.376 [2024-09-28 01:32:52.133912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:56.376 [2024-09-28 01:32:52.133919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.401 ms 00:19:56.376 [2024-09-28 01:32:52.133925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.376 [2024-09-28 01:32:52.176613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.376 [2024-09-28 01:32:52.176652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:56.376 [2024-09-28 01:32:52.176661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.672 ms 00:19:56.376 [2024-09-28 01:32:52.176667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.376 [2024-09-28 01:32:52.184392] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:19:56.376 [2024-09-28 01:32:52.186185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.376 [2024-09-28 01:32:52.186213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:56.376 [2024-09-28 01:32:52.186222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.472 ms 00:19:56.376 [2024-09-28 01:32:52.186229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.376 [2024-09-28 01:32:52.186291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.376 [2024-09-28 01:32:52.186300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:56.376 [2024-09-28 01:32:52.186308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:56.376 [2024-09-28 01:32:52.186315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.376 [2024-09-28 01:32:52.186363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.376 [2024-09-28 01:32:52.186371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:56.376 [2024-09-28 01:32:52.186379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:19:56.376 [2024-09-28 01:32:52.186385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.376 [2024-09-28 01:32:52.186401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.376 [2024-09-28 01:32:52.186408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:56.376 [2024-09-28 01:32:52.186415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:56.376 [2024-09-28 
01:32:52.186422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.376 [2024-09-28 01:32:52.186449] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:56.376 [2024-09-28 01:32:52.186457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.376 [2024-09-28 01:32:52.186465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:56.376 [2024-09-28 01:32:52.186472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:56.376 [2024-09-28 01:32:52.186478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.376 [2024-09-28 01:32:52.204000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.376 [2024-09-28 01:32:52.204023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:56.376 [2024-09-28 01:32:52.204032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.508 ms 00:19:56.376 [2024-09-28 01:32:52.204039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.376 [2024-09-28 01:32:52.204096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.376 [2024-09-28 01:32:52.204104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:56.376 [2024-09-28 01:32:52.204111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:19:56.376 [2024-09-28 01:32:52.204116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.376 [2024-09-28 01:32:52.205364] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 206.726 ms, result 0 00:20:21.869  Copying: 52/1024 [MB] (52 MBps) Copying: 99/1024 [MB] (46 MBps) Copying: 146/1024 [MB] (47 MBps) Copying: 192/1024 [MB] (46 MBps) Copying: 239/1024 [MB] (46 MBps) Copying: 286/1024 [MB] (46 MBps) Copying: 333/1024 [MB] (46 MBps) Copying: 384/1024 [MB] (51 MBps) Copying: 441/1024 [MB] (56 MBps) Copying: 475/1024 [MB] (33 MBps) Copying: 494/1024 [MB] (19 MBps) Copying: 515/1024 [MB] (20 MBps) Copying: 535/1024 [MB] (20 MBps) Copying: 555/1024 [MB] (20 MBps) Copying: 580/1024 [MB] (24 MBps) Copying: 627/1024 [MB] (47 MBps) Copying: 674/1024 [MB] (46 MBps) Copying: 721/1024 [MB] (47 MBps) Copying: 768/1024 [MB] (46 MBps) Copying: 814/1024 [MB] (46 MBps) Copying: 860/1024 [MB] (46 MBps) Copying: 906/1024 [MB] (45 MBps) Copying: 952/1024 [MB] (46 MBps) Copying: 999/1024 [MB] (46 MBps) Copying: 1023/1024 [MB] (24 MBps) Copying: 1024/1024 [MB] (average 40 MBps)[2024-09-28 01:33:17.701240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.869 [2024-09-28 01:33:17.701384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:21.869 [2024-09-28 01:33:17.701447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:21.869 [2024-09-28 01:33:17.701471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.869 [2024-09-28 01:33:17.702495] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:21.869 [2024-09-28 01:33:17.707186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.869 [2024-09-28 01:33:17.707297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:21.869 [2024-09-28 01:33:17.707365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
4.518 ms 00:20:21.869 [2024-09-28 01:33:17.707389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.869 [2024-09-28 01:33:17.719378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.869 [2024-09-28 01:33:17.719490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:21.869 [2024-09-28 01:33:17.719548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.724 ms 00:20:21.869 [2024-09-28 01:33:17.719571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.869 [2024-09-28 01:33:17.737554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.869 [2024-09-28 01:33:17.737664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:21.869 [2024-09-28 01:33:17.737724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.954 ms 00:20:21.869 [2024-09-28 01:33:17.737746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.869 [2024-09-28 01:33:17.743939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.869 [2024-09-28 01:33:17.744033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:21.869 [2024-09-28 01:33:17.744087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.126 ms 00:20:21.869 [2024-09-28 01:33:17.744111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.869 [2024-09-28 01:33:17.767251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.869 [2024-09-28 01:33:17.767356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:21.869 [2024-09-28 01:33:17.767407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.077 ms 00:20:21.869 [2024-09-28 01:33:17.767428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.869 [2024-09-28 01:33:17.780748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.869 [2024-09-28 01:33:17.780856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:21.869 [2024-09-28 01:33:17.780914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.228 ms 00:20:21.869 [2024-09-28 01:33:17.780958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.129 [2024-09-28 01:33:17.833071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.129 [2024-09-28 01:33:17.833175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:22.129 [2024-09-28 01:33:17.833241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 52.069 ms 00:20:22.129 [2024-09-28 01:33:17.833265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.129 [2024-09-28 01:33:17.856446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.129 [2024-09-28 01:33:17.856553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:22.129 [2024-09-28 01:33:17.856602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.132 ms 00:20:22.129 [2024-09-28 01:33:17.856624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.129 [2024-09-28 01:33:17.879278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.129 [2024-09-28 01:33:17.879382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:22.130 [2024-09-28 01:33:17.879428] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.606 ms 00:20:22.130 [2024-09-28 01:33:17.879449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.130 [2024-09-28 01:33:17.901736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.130 [2024-09-28 01:33:17.901830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:22.130 [2024-09-28 01:33:17.901877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.250 ms 00:20:22.130 [2024-09-28 01:33:17.901898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.130 [2024-09-28 01:33:17.924219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.130 [2024-09-28 01:33:17.924316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:22.130 [2024-09-28 01:33:17.924365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.262 ms 00:20:22.130 [2024-09-28 01:33:17.924386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.130 [2024-09-28 01:33:17.924438] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:22.130 [2024-09-28 01:33:17.924467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 128000 / 261120 wr_cnt: 1 state: open 00:20:22.130 [2024-09-28 01:33:17.924523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:22.130 [2024-09-28 01:33:17.924637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:22.130 [2024-09-28 01:33:17.924667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:22.130 [2024-09-28 01:33:17.924696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:22.130 [2024-09-28 01:33:17.924724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:22.130 [2024-09-28 01:33:17.924791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:22.130 [2024-09-28 01:33:17.924830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:22.130 [2024-09-28 01:33:17.924859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:22.130 [2024-09-28 01:33:17.924888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:22.130 [2024-09-28 01:33:17.924949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:22.130 [2024-09-28 01:33:17.924979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:22.130 [2024-09-28 01:33:17.925007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:22.130 [2024-09-28 01:33:17.925036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:22.130 [2024-09-28 01:33:17.925121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:22.130 [2024-09-28 01:33:17.925149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:22.130 [2024-09-28 01:33:17.925178] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:22.130 [2024-09-28 01:33:17.925218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:22.130 [2024-09-28 01:33:17.925280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:22.130 [2024-09-28 01:33:17.925311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:22.130 [2024-09-28 01:33:17.925340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:22.130 [2024-09-28 01:33:17.925369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:22.130 [2024-09-28 01:33:17.925434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:22.130 [2024-09-28 01:33:17.925462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:22.130 [2024-09-28 01:33:17.925491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:22.130 [2024-09-28 01:33:17.925549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:22.130 [2024-09-28 01:33:17.925645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:22.130 [2024-09-28 01:33:17.925674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:22.130 [2024-09-28 01:33:17.925703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:22.130 [2024-09-28 01:33:17.925731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:22.130 [2024-09-28 01:33:17.925860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:22.130 [2024-09-28 01:33:17.925891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:22.130 [2024-09-28 01:33:17.925947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:22.130 [2024-09-28 01:33:17.925979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:22.130 [2024-09-28 01:33:17.926008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:22.130 [2024-09-28 01:33:17.926059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:22.130 [2024-09-28 01:33:17.926167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:22.130 [2024-09-28 01:33:17.926207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:22.130 [2024-09-28 01:33:17.926236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:22.130 [2024-09-28 01:33:17.926293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:22.130 [2024-09-28 01:33:17.926322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:22.130 [2024-09-28 
01:33:17.926351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:22.130 [2024-09-28 01:33:17.926400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:22.130 [2024-09-28 01:33:17.926434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:22.130 [2024-09-28 01:33:17.926463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:22.130 [2024-09-28 01:33:17.926490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:22.130 [2024-09-28 01:33:17.926552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:22.130 [2024-09-28 01:33:17.926582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:22.130 [2024-09-28 01:33:17.926610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:22.130 [2024-09-28 01:33:17.926639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:22.130 [2024-09-28 01:33:17.926696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:22.130 [2024-09-28 01:33:17.926704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:22.130 [2024-09-28 01:33:17.926712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:22.130 [2024-09-28 01:33:17.926719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:22.130 [2024-09-28 01:33:17.926727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:22.130 [2024-09-28 01:33:17.926734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:22.130 [2024-09-28 01:33:17.926742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:22.130 [2024-09-28 01:33:17.926749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:22.130 [2024-09-28 01:33:17.926757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:22.130 [2024-09-28 01:33:17.926764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:22.130 [2024-09-28 01:33:17.926772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:22.130 [2024-09-28 01:33:17.926779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:22.130 [2024-09-28 01:33:17.926786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:22.130 [2024-09-28 01:33:17.926794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:22.130 [2024-09-28 01:33:17.926801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:22.130 [2024-09-28 01:33:17.926809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 
00:20:22.130 [2024-09-28 01:33:17.926818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:22.130 [2024-09-28 01:33:17.926825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:22.130 [2024-09-28 01:33:17.926832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:22.130 [2024-09-28 01:33:17.926840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:22.130 [2024-09-28 01:33:17.926847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:22.130 [2024-09-28 01:33:17.926854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:22.130 [2024-09-28 01:33:17.926861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:22.130 [2024-09-28 01:33:17.926868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:22.130 [2024-09-28 01:33:17.926876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:22.130 [2024-09-28 01:33:17.926883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:22.130 [2024-09-28 01:33:17.926890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:22.130 [2024-09-28 01:33:17.926898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:22.130 [2024-09-28 01:33:17.926905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:22.130 [2024-09-28 01:33:17.926912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:22.130 [2024-09-28 01:33:17.926921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:22.131 [2024-09-28 01:33:17.926928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:22.131 [2024-09-28 01:33:17.926935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:22.131 [2024-09-28 01:33:17.926943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:22.131 [2024-09-28 01:33:17.926950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:22.131 [2024-09-28 01:33:17.926957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:22.131 [2024-09-28 01:33:17.926965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:22.131 [2024-09-28 01:33:17.926972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:22.131 [2024-09-28 01:33:17.926979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:22.131 [2024-09-28 01:33:17.926986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:22.131 [2024-09-28 01:33:17.926993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 
wr_cnt: 0 state: free 00:20:22.131 [2024-09-28 01:33:17.927000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:22.131 [2024-09-28 01:33:17.927008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:22.131 [2024-09-28 01:33:17.927015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:22.131 [2024-09-28 01:33:17.927022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:22.131 [2024-09-28 01:33:17.927029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:22.131 [2024-09-28 01:33:17.927036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:22.131 [2024-09-28 01:33:17.927044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:22.131 [2024-09-28 01:33:17.927052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:22.131 [2024-09-28 01:33:17.927059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:22.131 [2024-09-28 01:33:17.927075] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:22.131 [2024-09-28 01:33:17.927083] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0e3f423b-615b-429f-bbea-6f306275c43f 00:20:22.131 [2024-09-28 01:33:17.927090] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 128000 00:20:22.131 [2024-09-28 01:33:17.927097] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 128960 00:20:22.131 [2024-09-28 01:33:17.927104] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 128000 00:20:22.131 [2024-09-28 01:33:17.927112] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0075 00:20:22.131 [2024-09-28 01:33:17.927118] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:22.131 [2024-09-28 01:33:17.927126] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:22.131 [2024-09-28 01:33:17.927140] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:22.131 [2024-09-28 01:33:17.927147] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:22.131 [2024-09-28 01:33:17.927153] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:22.131 [2024-09-28 01:33:17.927161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.131 [2024-09-28 01:33:17.927171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:22.131 [2024-09-28 01:33:17.927179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.723 ms 00:20:22.131 [2024-09-28 01:33:17.927186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.131 [2024-09-28 01:33:17.939518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.131 [2024-09-28 01:33:17.939611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:22.131 [2024-09-28 01:33:17.939659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.304 ms 00:20:22.131 [2024-09-28 01:33:17.939702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.131 [2024-09-28 01:33:17.940065] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.131 [2024-09-28 01:33:17.940137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:22.131 [2024-09-28 01:33:17.940190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.318 ms 00:20:22.131 [2024-09-28 01:33:17.940223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.131 [2024-09-28 01:33:17.968020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:22.131 [2024-09-28 01:33:17.968127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:22.131 [2024-09-28 01:33:17.968179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:22.131 [2024-09-28 01:33:17.968271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.131 [2024-09-28 01:33:17.968347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:22.131 [2024-09-28 01:33:17.968446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:22.131 [2024-09-28 01:33:17.968470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:22.131 [2024-09-28 01:33:17.968489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.131 [2024-09-28 01:33:17.968555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:22.131 [2024-09-28 01:33:17.968585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:22.131 [2024-09-28 01:33:17.968606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:22.131 [2024-09-28 01:33:17.968625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.131 [2024-09-28 01:33:17.968691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:22.131 [2024-09-28 01:33:17.968718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:22.131 [2024-09-28 01:33:17.968738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:22.131 [2024-09-28 01:33:17.968756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.131 [2024-09-28 01:33:18.044877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:22.131 [2024-09-28 01:33:18.045048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:22.131 [2024-09-28 01:33:18.045097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:22.131 [2024-09-28 01:33:18.045119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.389 [2024-09-28 01:33:18.108044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:22.389 [2024-09-28 01:33:18.108249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:22.389 [2024-09-28 01:33:18.108298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:22.389 [2024-09-28 01:33:18.108320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.389 [2024-09-28 01:33:18.108399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:22.389 [2024-09-28 01:33:18.108422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:22.389 [2024-09-28 01:33:18.108441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:22.389 [2024-09-28 01:33:18.108460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:20:22.389 [2024-09-28 01:33:18.108503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:22.389 [2024-09-28 01:33:18.108530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:22.389 [2024-09-28 01:33:18.108604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:22.389 [2024-09-28 01:33:18.108626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.389 [2024-09-28 01:33:18.108727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:22.389 [2024-09-28 01:33:18.108751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:22.389 [2024-09-28 01:33:18.108770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:22.389 [2024-09-28 01:33:18.108788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.389 [2024-09-28 01:33:18.108851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:22.389 [2024-09-28 01:33:18.108876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:22.389 [2024-09-28 01:33:18.108900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:22.389 [2024-09-28 01:33:18.108954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.389 [2024-09-28 01:33:18.108989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:22.389 [2024-09-28 01:33:18.108999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:22.389 [2024-09-28 01:33:18.109007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:22.389 [2024-09-28 01:33:18.109014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.389 [2024-09-28 01:33:18.109052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:22.389 [2024-09-28 01:33:18.109064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:22.389 [2024-09-28 01:33:18.109072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:22.389 [2024-09-28 01:33:18.109079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.389 [2024-09-28 01:33:18.109184] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 410.733 ms, result 0 00:20:24.919 00:20:24.919 00:20:24.919 01:33:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:20:26.817 01:33:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:26.817 [2024-09-28 01:33:22.466966] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:20:26.817 [2024-09-28 01:33:22.467240] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76875 ] 00:20:26.817 [2024-09-28 01:33:22.612861] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:27.076 [2024-09-28 01:33:22.792158] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:20:27.336 [2024-09-28 01:33:23.042590] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:27.336 [2024-09-28 01:33:23.042648] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:27.336 [2024-09-28 01:33:23.197548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.336 [2024-09-28 01:33:23.197593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:27.336 [2024-09-28 01:33:23.197606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:27.336 [2024-09-28 01:33:23.197618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.336 [2024-09-28 01:33:23.197660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.336 [2024-09-28 01:33:23.197669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:27.336 [2024-09-28 01:33:23.197678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:20:27.336 [2024-09-28 01:33:23.197685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.336 [2024-09-28 01:33:23.197701] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:27.336 [2024-09-28 01:33:23.198346] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:27.336 [2024-09-28 01:33:23.198367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.336 [2024-09-28 01:33:23.198375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:27.336 [2024-09-28 01:33:23.198384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.670 ms 00:20:27.336 [2024-09-28 01:33:23.198391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.336 [2024-09-28 01:33:23.199539] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:27.336 [2024-09-28 01:33:23.211767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.336 [2024-09-28 01:33:23.211799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:27.336 [2024-09-28 01:33:23.211810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.230 ms 00:20:27.336 [2024-09-28 01:33:23.211818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.336 [2024-09-28 01:33:23.211867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.336 [2024-09-28 01:33:23.211877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:27.336 [2024-09-28 01:33:23.211885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:20:27.336 [2024-09-28 01:33:23.211892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.336 [2024-09-28 01:33:23.216490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:20:27.336 [2024-09-28 01:33:23.216519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:27.336 [2024-09-28 01:33:23.216528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.542 ms 00:20:27.336 [2024-09-28 01:33:23.216535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.336 [2024-09-28 01:33:23.216608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.336 [2024-09-28 01:33:23.216620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:27.336 [2024-09-28 01:33:23.216628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:20:27.336 [2024-09-28 01:33:23.216635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.336 [2024-09-28 01:33:23.216686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.336 [2024-09-28 01:33:23.216696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:27.336 [2024-09-28 01:33:23.216704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:27.336 [2024-09-28 01:33:23.216711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.336 [2024-09-28 01:33:23.216734] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:27.336 [2024-09-28 01:33:23.220109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.336 [2024-09-28 01:33:23.220136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:27.336 [2024-09-28 01:33:23.220145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.383 ms 00:20:27.336 [2024-09-28 01:33:23.220152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.336 [2024-09-28 01:33:23.220179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.336 [2024-09-28 01:33:23.220187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:27.336 [2024-09-28 01:33:23.220205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:27.336 [2024-09-28 01:33:23.220213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.336 [2024-09-28 01:33:23.220234] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:27.336 [2024-09-28 01:33:23.220250] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:27.336 [2024-09-28 01:33:23.220282] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:27.336 [2024-09-28 01:33:23.220297] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:27.336 [2024-09-28 01:33:23.220399] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:27.336 [2024-09-28 01:33:23.220420] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:27.336 [2024-09-28 01:33:23.220431] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:27.336 [2024-09-28 01:33:23.220444] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:27.336 [2024-09-28 01:33:23.220453] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:27.336 [2024-09-28 01:33:23.220461] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:27.336 [2024-09-28 01:33:23.220468] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:27.336 [2024-09-28 01:33:23.220475] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:27.336 [2024-09-28 01:33:23.220482] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:27.336 [2024-09-28 01:33:23.220489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.336 [2024-09-28 01:33:23.220496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:27.336 [2024-09-28 01:33:23.220504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.256 ms 00:20:27.336 [2024-09-28 01:33:23.220511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.336 [2024-09-28 01:33:23.220592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.336 [2024-09-28 01:33:23.220602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:27.336 [2024-09-28 01:33:23.220610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:20:27.336 [2024-09-28 01:33:23.220617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.336 [2024-09-28 01:33:23.220728] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:27.336 [2024-09-28 01:33:23.220743] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:27.336 [2024-09-28 01:33:23.220752] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:27.336 [2024-09-28 01:33:23.220759] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:27.336 [2024-09-28 01:33:23.220768] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:27.336 [2024-09-28 01:33:23.220774] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:27.336 [2024-09-28 01:33:23.220781] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:27.336 [2024-09-28 01:33:23.220787] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:27.336 [2024-09-28 01:33:23.220794] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:27.336 [2024-09-28 01:33:23.220800] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:27.336 [2024-09-28 01:33:23.220806] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:27.336 [2024-09-28 01:33:23.220813] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:27.336 [2024-09-28 01:33:23.220834] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:27.336 [2024-09-28 01:33:23.220846] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:27.336 [2024-09-28 01:33:23.220853] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:27.336 [2024-09-28 01:33:23.220860] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:27.336 [2024-09-28 01:33:23.220867] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:27.336 [2024-09-28 01:33:23.220874] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:27.336 [2024-09-28 01:33:23.220880] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:27.336 [2024-09-28 01:33:23.220887] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:27.336 [2024-09-28 01:33:23.220893] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:27.336 [2024-09-28 01:33:23.220900] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:27.336 [2024-09-28 01:33:23.220907] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:27.336 [2024-09-28 01:33:23.220913] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:27.336 [2024-09-28 01:33:23.220919] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:27.336 [2024-09-28 01:33:23.220926] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:27.336 [2024-09-28 01:33:23.220932] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:27.337 [2024-09-28 01:33:23.220938] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:27.337 [2024-09-28 01:33:23.220944] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:27.337 [2024-09-28 01:33:23.220950] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:27.337 [2024-09-28 01:33:23.220957] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:27.337 [2024-09-28 01:33:23.220963] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:27.337 [2024-09-28 01:33:23.220970] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:27.337 [2024-09-28 01:33:23.220976] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:27.337 [2024-09-28 01:33:23.220982] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:27.337 [2024-09-28 01:33:23.220989] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:27.337 [2024-09-28 01:33:23.220995] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:27.337 [2024-09-28 01:33:23.221001] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:27.337 [2024-09-28 01:33:23.221008] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:27.337 [2024-09-28 01:33:23.221014] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:27.337 [2024-09-28 01:33:23.221020] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:27.337 [2024-09-28 01:33:23.221026] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:27.337 [2024-09-28 01:33:23.221033] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:27.337 [2024-09-28 01:33:23.221039] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:27.337 [2024-09-28 01:33:23.221046] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:27.337 [2024-09-28 01:33:23.221055] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:27.337 [2024-09-28 01:33:23.221062] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:27.337 [2024-09-28 01:33:23.221071] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:27.337 [2024-09-28 01:33:23.221078] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:27.337 [2024-09-28 01:33:23.221085] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:27.337 
[2024-09-28 01:33:23.221091] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:27.337 [2024-09-28 01:33:23.221097] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:27.337 [2024-09-28 01:33:23.221103] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:27.337 [2024-09-28 01:33:23.221111] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:27.337 [2024-09-28 01:33:23.221120] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:27.337 [2024-09-28 01:33:23.221128] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:27.337 [2024-09-28 01:33:23.221136] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:27.337 [2024-09-28 01:33:23.221143] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:27.337 [2024-09-28 01:33:23.221149] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:27.337 [2024-09-28 01:33:23.221156] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:27.337 [2024-09-28 01:33:23.221163] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:27.337 [2024-09-28 01:33:23.221170] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:27.337 [2024-09-28 01:33:23.221176] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:27.337 [2024-09-28 01:33:23.221184] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:27.337 [2024-09-28 01:33:23.221206] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:27.337 [2024-09-28 01:33:23.221214] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:27.337 [2024-09-28 01:33:23.221221] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:27.337 [2024-09-28 01:33:23.221228] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:27.337 [2024-09-28 01:33:23.221235] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:27.337 [2024-09-28 01:33:23.221242] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:27.337 [2024-09-28 01:33:23.221250] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:27.337 [2024-09-28 01:33:23.221258] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:20:27.337 [2024-09-28 01:33:23.221265] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:27.337 [2024-09-28 01:33:23.221272] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:27.337 [2024-09-28 01:33:23.221279] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:27.337 [2024-09-28 01:33:23.221286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.337 [2024-09-28 01:33:23.221294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:27.337 [2024-09-28 01:33:23.221301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.627 ms 00:20:27.337 [2024-09-28 01:33:23.221308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.337 [2024-09-28 01:33:23.261991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.337 [2024-09-28 01:33:23.262033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:27.337 [2024-09-28 01:33:23.262045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.638 ms 00:20:27.337 [2024-09-28 01:33:23.262053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.337 [2024-09-28 01:33:23.262142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.337 [2024-09-28 01:33:23.262152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:27.337 [2024-09-28 01:33:23.262160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:20:27.337 [2024-09-28 01:33:23.262167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.596 [2024-09-28 01:33:23.292190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.596 [2024-09-28 01:33:23.292230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:27.596 [2024-09-28 01:33:23.292243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.957 ms 00:20:27.596 [2024-09-28 01:33:23.292251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.596 [2024-09-28 01:33:23.292280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.596 [2024-09-28 01:33:23.292288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:27.596 [2024-09-28 01:33:23.292296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:27.596 [2024-09-28 01:33:23.292303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.596 [2024-09-28 01:33:23.292633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.596 [2024-09-28 01:33:23.292656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:27.596 [2024-09-28 01:33:23.292665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.274 ms 00:20:27.596 [2024-09-28 01:33:23.292676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.596 [2024-09-28 01:33:23.292796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.596 [2024-09-28 01:33:23.292810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:27.596 [2024-09-28 01:33:23.292833] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:20:27.596 [2024-09-28 01:33:23.292841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.596 [2024-09-28 01:33:23.305090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.597 [2024-09-28 01:33:23.305118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:27.597 [2024-09-28 01:33:23.305128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.229 ms 00:20:27.597 [2024-09-28 01:33:23.305135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.597 [2024-09-28 01:33:23.317251] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:20:27.597 [2024-09-28 01:33:23.317293] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:27.597 [2024-09-28 01:33:23.317303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.597 [2024-09-28 01:33:23.317311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:27.597 [2024-09-28 01:33:23.317319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.067 ms 00:20:27.597 [2024-09-28 01:33:23.317326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.597 [2024-09-28 01:33:23.341333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.597 [2024-09-28 01:33:23.341367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:27.597 [2024-09-28 01:33:23.341377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.971 ms 00:20:27.597 [2024-09-28 01:33:23.341385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.597 [2024-09-28 01:33:23.352940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.597 [2024-09-28 01:33:23.352969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:27.597 [2024-09-28 01:33:23.352978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.516 ms 00:20:27.597 [2024-09-28 01:33:23.352985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.597 [2024-09-28 01:33:23.364153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.597 [2024-09-28 01:33:23.364183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:27.597 [2024-09-28 01:33:23.364200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.138 ms 00:20:27.597 [2024-09-28 01:33:23.364207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.597 [2024-09-28 01:33:23.364791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.597 [2024-09-28 01:33:23.364831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:27.597 [2024-09-28 01:33:23.364841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.508 ms 00:20:27.597 [2024-09-28 01:33:23.364848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.597 [2024-09-28 01:33:23.419015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.597 [2024-09-28 01:33:23.419064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:27.597 [2024-09-28 01:33:23.419077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 54.151 ms 00:20:27.597 [2024-09-28 01:33:23.419085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.597 [2024-09-28 01:33:23.429227] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:27.597 [2024-09-28 01:33:23.431361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.597 [2024-09-28 01:33:23.431391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:27.597 [2024-09-28 01:33:23.431402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.229 ms 00:20:27.597 [2024-09-28 01:33:23.431414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.597 [2024-09-28 01:33:23.431494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.597 [2024-09-28 01:33:23.431505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:27.597 [2024-09-28 01:33:23.431515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:27.597 [2024-09-28 01:33:23.431524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.597 [2024-09-28 01:33:23.432893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.597 [2024-09-28 01:33:23.432927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:27.597 [2024-09-28 01:33:23.432936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.332 ms 00:20:27.597 [2024-09-28 01:33:23.432944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.597 [2024-09-28 01:33:23.432970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.597 [2024-09-28 01:33:23.432979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:27.597 [2024-09-28 01:33:23.432987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:27.597 [2024-09-28 01:33:23.432994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.597 [2024-09-28 01:33:23.433025] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:27.597 [2024-09-28 01:33:23.433034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.597 [2024-09-28 01:33:23.433042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:27.597 [2024-09-28 01:33:23.433052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:27.597 [2024-09-28 01:33:23.433059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.597 [2024-09-28 01:33:23.455944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.597 [2024-09-28 01:33:23.455974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:27.597 [2024-09-28 01:33:23.455984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.868 ms 00:20:27.597 [2024-09-28 01:33:23.455992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.597 [2024-09-28 01:33:23.456059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.597 [2024-09-28 01:33:23.456068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:27.597 [2024-09-28 01:33:23.456076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:20:27.597 [2024-09-28 01:33:23.456083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
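The "Copying:" progress lines that follow are spdk_dd's transfer ticker. As a quick consistency check, the reported average agrees with the wall-clock timestamps around it (a back-of-the-envelope, figures from the log):

    # ~1024 MB at the reported average of ~48 MBps:
    echo $((1024 / 48))   # ~21 s, matching the 01:33:23 -> 01:33:44 span in the trace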
00:20:27.597 [2024-09-28 01:33:23.456977] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 259.016 ms, result 0 00:20:49.033  Copying: 972/1048576 [kB] (972 kBps) Copying: 5544/1048576 [kB] (4572 kBps) Copying: 54/1024 [MB] (49 MBps) Copying: 109/1024 [MB] (54 MBps) Copying: 166/1024 [MB] (56 MBps) Copying: 218/1024 [MB] (52 MBps) Copying: 270/1024 [MB] (51 MBps) Copying: 325/1024 [MB] (55 MBps) Copying: 378/1024 [MB] (52 MBps) Copying: 431/1024 [MB] (52 MBps) Copying: 485/1024 [MB] (54 MBps) Copying: 539/1024 [MB] (54 MBps) Copying: 592/1024 [MB] (52 MBps) Copying: 646/1024 [MB] (53 MBps) Copying: 699/1024 [MB] (53 MBps) Copying: 753/1024 [MB] (53 MBps) Copying: 805/1024 [MB] (52 MBps) Copying: 858/1024 [MB] (52 MBps) Copying: 912/1024 [MB] (53 MBps) Copying: 966/1024 [MB] (54 MBps) Copying: 1020/1024 [MB] (54 MBps) Copying: 1024/1024 [MB] (average 48 MBps)[2024-09-28 01:33:44.878507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.033 [2024-09-28 01:33:44.878601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:49.033 [2024-09-28 01:33:44.878626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:49.033 [2024-09-28 01:33:44.878643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.033 [2024-09-28 01:33:44.878683] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:49.033 [2024-09-28 01:33:44.884002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.033 [2024-09-28 01:33:44.884062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:49.033 [2024-09-28 01:33:44.884082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.292 ms 00:20:49.033 [2024-09-28 01:33:44.884097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.033 [2024-09-28 01:33:44.884553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.033 [2024-09-28 01:33:44.884591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:49.033 [2024-09-28 01:33:44.884608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.417 ms 00:20:49.033 [2024-09-28 01:33:44.884624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.033 [2024-09-28 01:33:44.897886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.033 [2024-09-28 01:33:44.897922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:49.033 [2024-09-28 01:33:44.897933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.234 ms 00:20:49.033 [2024-09-28 01:33:44.897945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.033 [2024-09-28 01:33:44.904057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.033 [2024-09-28 01:33:44.904085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:49.033 [2024-09-28 01:33:44.904094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.088 ms 00:20:49.033 [2024-09-28 01:33:44.904101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.033 [2024-09-28 01:33:44.927164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.033 [2024-09-28 01:33:44.927203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 
00:20:49.033 [2024-09-28 01:33:44.927213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.015 ms 00:20:49.033 [2024-09-28 01:33:44.927220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.033 [2024-09-28 01:33:44.940971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.033 [2024-09-28 01:33:44.941001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:49.033 [2024-09-28 01:33:44.941012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.719 ms 00:20:49.033 [2024-09-28 01:33:44.941020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.034 [2024-09-28 01:33:44.942696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.034 [2024-09-28 01:33:44.942726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:49.034 [2024-09-28 01:33:44.942735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.651 ms 00:20:49.034 [2024-09-28 01:33:44.942743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.293 [2024-09-28 01:33:44.965708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.293 [2024-09-28 01:33:44.965741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:49.293 [2024-09-28 01:33:44.965751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.950 ms 00:20:49.293 [2024-09-28 01:33:44.965759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.293 [2024-09-28 01:33:44.988441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.293 [2024-09-28 01:33:44.988476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:49.293 [2024-09-28 01:33:44.988486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.648 ms 00:20:49.293 [2024-09-28 01:33:44.988493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.293 [2024-09-28 01:33:45.010610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.293 [2024-09-28 01:33:45.010644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:49.293 [2024-09-28 01:33:45.010654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.084 ms 00:20:49.293 [2024-09-28 01:33:45.010661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.293 [2024-09-28 01:33:45.032916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.293 [2024-09-28 01:33:45.032954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:49.294 [2024-09-28 01:33:45.032966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.199 ms 00:20:49.294 [2024-09-28 01:33:45.032973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.294 [2024-09-28 01:33:45.033004] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:49.294 [2024-09-28 01:33:45.033018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:20:49.294 [2024-09-28 01:33:45.033033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:20:49.294 [2024-09-28 01:33:45.033042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:49.294 [2024-09-28 01:33:45.033049] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:49.294 [2024-09-28 01:33:45.033057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:49.294 [2024-09-28 01:33:45.033065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:49.294 [2024-09-28 01:33:45.033072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:49.294 [2024-09-28 01:33:45.033080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:49.294 [2024-09-28 01:33:45.033088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:49.294 [2024-09-28 01:33:45.033095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:49.294 [2024-09-28 01:33:45.033103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:49.294 [2024-09-28 01:33:45.033111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:49.294 [2024-09-28 01:33:45.033118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:49.294 [2024-09-28 01:33:45.033126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:49.294 [2024-09-28 01:33:45.033135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:49.294 [2024-09-28 01:33:45.033143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:49.294 [2024-09-28 01:33:45.033150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:49.294 [2024-09-28 01:33:45.033157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:49.294 [2024-09-28 01:33:45.033165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:49.294 [2024-09-28 01:33:45.033172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:49.294 [2024-09-28 01:33:45.033179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:49.294 [2024-09-28 01:33:45.033186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:49.294 [2024-09-28 01:33:45.033208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:49.294 [2024-09-28 01:33:45.033216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:49.294 [2024-09-28 01:33:45.033224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:49.294 [2024-09-28 01:33:45.033232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:49.294 [2024-09-28 01:33:45.033239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:49.294 [2024-09-28 01:33:45.033247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:49.294 [2024-09-28 
01:33:45.033255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:49.294 [2024-09-28 01:33:45.033263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:49.294 [2024-09-28 01:33:45.033271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:49.294 [2024-09-28 01:33:45.033279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:49.294 [2024-09-28 01:33:45.033286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:49.294 [2024-09-28 01:33:45.033293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:49.294 [2024-09-28 01:33:45.033301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:49.294 [2024-09-28 01:33:45.033308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:49.294 [2024-09-28 01:33:45.033316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:49.294 [2024-09-28 01:33:45.033323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:49.294 [2024-09-28 01:33:45.033331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:49.294 [2024-09-28 01:33:45.033338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:49.294 [2024-09-28 01:33:45.033345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:49.294 [2024-09-28 01:33:45.033353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:49.294 [2024-09-28 01:33:45.033360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:49.294 [2024-09-28 01:33:45.033367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:49.294 [2024-09-28 01:33:45.033375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:49.294 [2024-09-28 01:33:45.033382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:49.294 [2024-09-28 01:33:45.033389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:49.294 [2024-09-28 01:33:45.033397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:49.294 [2024-09-28 01:33:45.033404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:49.294 [2024-09-28 01:33:45.033411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:49.294 [2024-09-28 01:33:45.033418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:49.294 [2024-09-28 01:33:45.033425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:49.294 [2024-09-28 01:33:45.033432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 
00:20:49.294 [2024-09-28 01:33:45.033440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:49.294 [2024-09-28 01:33:45.033447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:49.294 [2024-09-28 01:33:45.033454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:49.294 [2024-09-28 01:33:45.033462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:49.294 [2024-09-28 01:33:45.033469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:49.294 [2024-09-28 01:33:45.033477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:49.294 [2024-09-28 01:33:45.033484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:49.294 [2024-09-28 01:33:45.033491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:49.294 [2024-09-28 01:33:45.033498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:49.294 [2024-09-28 01:33:45.033507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:49.294 [2024-09-28 01:33:45.033515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:49.294 [2024-09-28 01:33:45.033522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:49.294 [2024-09-28 01:33:45.033529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:49.294 [2024-09-28 01:33:45.033536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:49.294 [2024-09-28 01:33:45.033543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:49.294 [2024-09-28 01:33:45.033551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:49.294 [2024-09-28 01:33:45.033558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:49.294 [2024-09-28 01:33:45.033565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:49.294 [2024-09-28 01:33:45.033573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:49.294 [2024-09-28 01:33:45.033580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:49.294 [2024-09-28 01:33:45.033587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:49.294 [2024-09-28 01:33:45.033594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:49.294 [2024-09-28 01:33:45.033602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:49.294 [2024-09-28 01:33:45.033609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:49.294 [2024-09-28 01:33:45.033616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 
wr_cnt: 0 state: free 00:20:49.294 [2024-09-28 01:33:45.033623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:49.294 [2024-09-28 01:33:45.033631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:49.294 [2024-09-28 01:33:45.033638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:49.294 [2024-09-28 01:33:45.033645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:49.294 [2024-09-28 01:33:45.033652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:49.294 [2024-09-28 01:33:45.033660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:49.294 [2024-09-28 01:33:45.033668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:49.294 [2024-09-28 01:33:45.033676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:49.294 [2024-09-28 01:33:45.033683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:49.295 [2024-09-28 01:33:45.033691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:49.295 [2024-09-28 01:33:45.033698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:49.295 [2024-09-28 01:33:45.033705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:49.295 [2024-09-28 01:33:45.033713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:49.295 [2024-09-28 01:33:45.033720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:49.295 [2024-09-28 01:33:45.033728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:49.295 [2024-09-28 01:33:45.033735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:49.295 [2024-09-28 01:33:45.033744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:49.295 [2024-09-28 01:33:45.033751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:49.295 [2024-09-28 01:33:45.033759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:49.295 [2024-09-28 01:33:45.033766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:49.295 [2024-09-28 01:33:45.033774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:49.295 [2024-09-28 01:33:45.033781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:49.295 [2024-09-28 01:33:45.033797] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:49.295 [2024-09-28 01:33:45.033804] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0e3f423b-615b-429f-bbea-6f306275c43f 00:20:49.295 [2024-09-28 01:33:45.033811] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 
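The statistics dump that continues below includes a write amplification factor (WAF), which is simply total device writes divided by user writes; the logged figures reproduce it (a one-liner, numbers from the dump):

    # WAF = total writes / user writes
    echo 'scale=4; 136640 / 134656' | bc   # 1.0147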
00:20:49.295 [2024-09-28 01:33:45.033818] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 136640 00:20:49.295 [2024-09-28 01:33:45.033824] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 134656 00:20:49.295 [2024-09-28 01:33:45.033831] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0147 00:20:49.295 [2024-09-28 01:33:45.033838] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:49.295 [2024-09-28 01:33:45.033846] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:49.295 [2024-09-28 01:33:45.033853] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:49.295 [2024-09-28 01:33:45.033859] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:49.295 [2024-09-28 01:33:45.033865] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:49.295 [2024-09-28 01:33:45.033872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.295 [2024-09-28 01:33:45.033879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:49.295 [2024-09-28 01:33:45.033893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.868 ms 00:20:49.295 [2024-09-28 01:33:45.033902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.295 [2024-09-28 01:33:45.046345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.295 [2024-09-28 01:33:45.046378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:49.295 [2024-09-28 01:33:45.046388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.427 ms 00:20:49.295 [2024-09-28 01:33:45.046395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.295 [2024-09-28 01:33:45.046752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.295 [2024-09-28 01:33:45.046771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:49.295 [2024-09-28 01:33:45.046780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.325 ms 00:20:49.295 [2024-09-28 01:33:45.046787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.295 [2024-09-28 01:33:45.074616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:49.295 [2024-09-28 01:33:45.074663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:49.295 [2024-09-28 01:33:45.074673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:49.295 [2024-09-28 01:33:45.074681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.295 [2024-09-28 01:33:45.074747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:49.295 [2024-09-28 01:33:45.074758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:49.295 [2024-09-28 01:33:45.074766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:49.295 [2024-09-28 01:33:45.074773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.295 [2024-09-28 01:33:45.074834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:49.295 [2024-09-28 01:33:45.074843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:49.295 [2024-09-28 01:33:45.074851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:49.295 [2024-09-28 01:33:45.074858] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.295 [2024-09-28 01:33:45.074873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:49.295 [2024-09-28 01:33:45.074880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:49.295 [2024-09-28 01:33:45.074890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:49.295 [2024-09-28 01:33:45.074898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.295 [2024-09-28 01:33:45.150396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:49.295 [2024-09-28 01:33:45.150445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:49.295 [2024-09-28 01:33:45.150456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:49.295 [2024-09-28 01:33:45.150464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.295 [2024-09-28 01:33:45.213215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:49.295 [2024-09-28 01:33:45.213265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:49.295 [2024-09-28 01:33:45.213276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:49.295 [2024-09-28 01:33:45.213283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.295 [2024-09-28 01:33:45.213346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:49.295 [2024-09-28 01:33:45.213355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:49.295 [2024-09-28 01:33:45.213363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:49.295 [2024-09-28 01:33:45.213370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.295 [2024-09-28 01:33:45.213400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:49.295 [2024-09-28 01:33:45.213408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:49.295 [2024-09-28 01:33:45.213416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:49.295 [2024-09-28 01:33:45.213427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.295 [2024-09-28 01:33:45.213512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:49.295 [2024-09-28 01:33:45.213521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:49.295 [2024-09-28 01:33:45.213529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:49.295 [2024-09-28 01:33:45.213536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.295 [2024-09-28 01:33:45.213562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:49.295 [2024-09-28 01:33:45.213571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:49.295 [2024-09-28 01:33:45.213578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:49.295 [2024-09-28 01:33:45.213585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.295 [2024-09-28 01:33:45.213619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:49.295 [2024-09-28 01:33:45.213627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:49.295 [2024-09-28 01:33:45.213635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:20:49.295 [2024-09-28 01:33:45.213642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.295 [2024-09-28 01:33:45.213680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:49.295 [2024-09-28 01:33:45.213689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:49.295 [2024-09-28 01:33:45.213696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:49.295 [2024-09-28 01:33:45.213706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.295 [2024-09-28 01:33:45.213809] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 335.311 ms, result 0 00:20:51.195 00:20:51.195 00:20:51.196 01:33:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:20:53.115 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:20:53.115 01:33:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:53.115 [2024-09-28 01:33:48.643405] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:20:53.115 [2024-09-28 01:33:48.643530] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77151 ] 00:20:53.115 [2024-09-28 01:33:48.786537] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:53.115 [2024-09-28 01:33:48.965292] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:20:53.374 [2024-09-28 01:33:49.215735] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:53.374 [2024-09-28 01:33:49.215797] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:53.634 [2024-09-28 01:33:49.369701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:53.634 [2024-09-28 01:33:49.369746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:53.634 [2024-09-28 01:33:49.369759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:53.634 [2024-09-28 01:33:49.369771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.634 [2024-09-28 01:33:49.369814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:53.634 [2024-09-28 01:33:49.369824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:53.634 [2024-09-28 01:33:49.369832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:20:53.634 [2024-09-28 01:33:49.369840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.634 [2024-09-28 01:33:49.369856] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:53.634 [2024-09-28 01:33:49.370521] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:53.634 [2024-09-28 01:33:49.370543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:53.634 [2024-09-28 01:33:49.370550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Open cache bdev 00:20:53.634 [2024-09-28 01:33:49.370559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.691 ms 00:20:53.634 [2024-09-28 01:33:49.370566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.634 [2024-09-28 01:33:49.371566] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:53.634 [2024-09-28 01:33:49.383673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:53.634 [2024-09-28 01:33:49.383707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:53.634 [2024-09-28 01:33:49.383718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.108 ms 00:20:53.634 [2024-09-28 01:33:49.383726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.634 [2024-09-28 01:33:49.383780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:53.634 [2024-09-28 01:33:49.383790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:53.634 [2024-09-28 01:33:49.383798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:20:53.634 [2024-09-28 01:33:49.383806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.634 [2024-09-28 01:33:49.388386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:53.634 [2024-09-28 01:33:49.388415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:53.634 [2024-09-28 01:33:49.388425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.521 ms 00:20:53.634 [2024-09-28 01:33:49.388433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.634 [2024-09-28 01:33:49.388506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:53.634 [2024-09-28 01:33:49.388515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:53.634 [2024-09-28 01:33:49.388523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:20:53.634 [2024-09-28 01:33:49.388530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.634 [2024-09-28 01:33:49.388571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:53.634 [2024-09-28 01:33:49.388580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:53.634 [2024-09-28 01:33:49.388587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:53.634 [2024-09-28 01:33:49.388594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.634 [2024-09-28 01:33:49.388615] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:53.634 [2024-09-28 01:33:49.391952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:53.634 [2024-09-28 01:33:49.391981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:53.634 [2024-09-28 01:33:49.391990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.342 ms 00:20:53.634 [2024-09-28 01:33:49.391997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.634 [2024-09-28 01:33:49.392024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:53.634 [2024-09-28 01:33:49.392032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:53.634 [2024-09-28 01:33:49.392039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.010 ms 00:20:53.634 [2024-09-28 01:33:49.392047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.634 [2024-09-28 01:33:49.392068] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:53.634 [2024-09-28 01:33:49.392085] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:53.634 [2024-09-28 01:33:49.392119] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:53.634 [2024-09-28 01:33:49.392134] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:53.634 [2024-09-28 01:33:49.392245] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:53.634 [2024-09-28 01:33:49.392261] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:53.634 [2024-09-28 01:33:49.392271] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:53.634 [2024-09-28 01:33:49.392283] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:53.634 [2024-09-28 01:33:49.392292] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:53.634 [2024-09-28 01:33:49.392300] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:53.634 [2024-09-28 01:33:49.392308] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:53.634 [2024-09-28 01:33:49.392315] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:53.634 [2024-09-28 01:33:49.392322] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:53.634 [2024-09-28 01:33:49.392329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:53.634 [2024-09-28 01:33:49.392336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:53.634 [2024-09-28 01:33:49.392344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.263 ms 00:20:53.634 [2024-09-28 01:33:49.392350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.634 [2024-09-28 01:33:49.392432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:53.634 [2024-09-28 01:33:49.392442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:53.634 [2024-09-28 01:33:49.392449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:20:53.634 [2024-09-28 01:33:49.392456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.634 [2024-09-28 01:33:49.392569] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:53.634 [2024-09-28 01:33:49.392586] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:53.634 [2024-09-28 01:33:49.392595] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:53.634 [2024-09-28 01:33:49.392603] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:53.634 [2024-09-28 01:33:49.392610] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:53.634 [2024-09-28 01:33:49.392617] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:53.634 [2024-09-28 
01:33:49.392623] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:53.634 [2024-09-28 01:33:49.392631] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:53.634 [2024-09-28 01:33:49.392638] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:53.634 [2024-09-28 01:33:49.392644] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:53.634 [2024-09-28 01:33:49.392651] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:53.634 [2024-09-28 01:33:49.392657] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:53.634 [2024-09-28 01:33:49.392663] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:53.634 [2024-09-28 01:33:49.392675] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:53.634 [2024-09-28 01:33:49.392682] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:53.634 [2024-09-28 01:33:49.392688] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:53.634 [2024-09-28 01:33:49.392695] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:53.634 [2024-09-28 01:33:49.392702] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:53.634 [2024-09-28 01:33:49.392709] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:53.634 [2024-09-28 01:33:49.392716] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:53.634 [2024-09-28 01:33:49.392723] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:53.634 [2024-09-28 01:33:49.392729] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:53.634 [2024-09-28 01:33:49.392735] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:53.634 [2024-09-28 01:33:49.392742] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:53.634 [2024-09-28 01:33:49.392748] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:53.634 [2024-09-28 01:33:49.392754] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:53.634 [2024-09-28 01:33:49.392761] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:53.634 [2024-09-28 01:33:49.392767] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:53.634 [2024-09-28 01:33:49.392773] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:53.634 [2024-09-28 01:33:49.392780] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:53.634 [2024-09-28 01:33:49.392786] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:53.634 [2024-09-28 01:33:49.392792] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:53.634 [2024-09-28 01:33:49.392799] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:53.634 [2024-09-28 01:33:49.392806] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:53.634 [2024-09-28 01:33:49.392812] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:53.635 [2024-09-28 01:33:49.392826] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:53.635 [2024-09-28 01:33:49.392832] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:53.635 [2024-09-28 01:33:49.392838] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
trim_log 00:20:53.635 [2024-09-28 01:33:49.392844] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:53.635 [2024-09-28 01:33:49.392851] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:53.635 [2024-09-28 01:33:49.392857] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:53.635 [2024-09-28 01:33:49.392864] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:53.635 [2024-09-28 01:33:49.392870] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:53.635 [2024-09-28 01:33:49.392877] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:53.635 [2024-09-28 01:33:49.392884] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:53.635 [2024-09-28 01:33:49.392893] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:53.635 [2024-09-28 01:33:49.392900] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:53.635 [2024-09-28 01:33:49.392907] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:53.635 [2024-09-28 01:33:49.392914] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:53.635 [2024-09-28 01:33:49.392921] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:53.635 [2024-09-28 01:33:49.392929] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:53.635 [2024-09-28 01:33:49.392935] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:53.635 [2024-09-28 01:33:49.392942] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:53.635 [2024-09-28 01:33:49.392950] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:53.635 [2024-09-28 01:33:49.392958] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:53.635 [2024-09-28 01:33:49.392967] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:53.635 [2024-09-28 01:33:49.392974] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:53.635 [2024-09-28 01:33:49.392981] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:53.635 [2024-09-28 01:33:49.392987] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:53.635 [2024-09-28 01:33:49.392994] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:53.635 [2024-09-28 01:33:49.393001] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:53.635 [2024-09-28 01:33:49.393008] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:53.635 [2024-09-28 01:33:49.393014] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:53.635 [2024-09-28 01:33:49.393021] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:53.635 [2024-09-28 01:33:49.393028] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:53.635 [2024-09-28 01:33:49.393035] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:53.635 [2024-09-28 01:33:49.393042] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:53.635 [2024-09-28 01:33:49.393049] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:53.635 [2024-09-28 01:33:49.393055] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:53.635 [2024-09-28 01:33:49.393062] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:53.635 [2024-09-28 01:33:49.393070] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:53.635 [2024-09-28 01:33:49.393078] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:53.635 [2024-09-28 01:33:49.393085] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:53.635 [2024-09-28 01:33:49.393092] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:53.635 [2024-09-28 01:33:49.393099] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:53.635 [2024-09-28 01:33:49.393107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:53.635 [2024-09-28 01:33:49.393114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:53.635 [2024-09-28 01:33:49.393121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.606 ms 00:20:53.635 [2024-09-28 01:33:49.393128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.635 [2024-09-28 01:33:49.427174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:53.635 [2024-09-28 01:33:49.427236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:53.635 [2024-09-28 01:33:49.427251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.003 ms 00:20:53.635 [2024-09-28 01:33:49.427261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.635 [2024-09-28 01:33:49.427373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:53.635 [2024-09-28 01:33:49.427384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:53.635 [2024-09-28 01:33:49.427394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:20:53.635 [2024-09-28 01:33:49.427403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.635 [2024-09-28 01:33:49.457699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:53.635 [2024-09-28 01:33:49.457733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:53.635 
[2024-09-28 01:33:49.457747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.226 ms 00:20:53.635 [2024-09-28 01:33:49.457754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.635 [2024-09-28 01:33:49.457790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:53.635 [2024-09-28 01:33:49.457798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:53.635 [2024-09-28 01:33:49.457806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:53.635 [2024-09-28 01:33:49.457813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.635 [2024-09-28 01:33:49.458155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:53.635 [2024-09-28 01:33:49.458178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:53.635 [2024-09-28 01:33:49.458188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.293 ms 00:20:53.635 [2024-09-28 01:33:49.458210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.635 [2024-09-28 01:33:49.458331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:53.635 [2024-09-28 01:33:49.458340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:53.635 [2024-09-28 01:33:49.458349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:20:53.635 [2024-09-28 01:33:49.458356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.635 [2024-09-28 01:33:49.470482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:53.635 [2024-09-28 01:33:49.470511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:53.635 [2024-09-28 01:33:49.470521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.105 ms 00:20:53.635 [2024-09-28 01:33:49.470529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.635 [2024-09-28 01:33:49.482788] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:53.635 [2024-09-28 01:33:49.482821] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:53.635 [2024-09-28 01:33:49.482833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:53.635 [2024-09-28 01:33:49.482841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:53.635 [2024-09-28 01:33:49.482850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.194 ms 00:20:53.635 [2024-09-28 01:33:49.482858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.635 [2024-09-28 01:33:49.507053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:53.635 [2024-09-28 01:33:49.507087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:53.635 [2024-09-28 01:33:49.507097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.158 ms 00:20:53.635 [2024-09-28 01:33:49.507105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.635 [2024-09-28 01:33:49.518778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:53.635 [2024-09-28 01:33:49.518808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:53.635 [2024-09-28 01:33:49.518818] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 11.647 ms 00:20:53.635 [2024-09-28 01:33:49.518825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.635 [2024-09-28 01:33:49.529978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:53.635 [2024-09-28 01:33:49.530008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:53.635 [2024-09-28 01:33:49.530018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.115 ms 00:20:53.635 [2024-09-28 01:33:49.530026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.635 [2024-09-28 01:33:49.530632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:53.635 [2024-09-28 01:33:49.530657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:53.635 [2024-09-28 01:33:49.530666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.527 ms 00:20:53.635 [2024-09-28 01:33:49.530673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.895 [2024-09-28 01:33:49.584642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:53.895 [2024-09-28 01:33:49.584696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:53.895 [2024-09-28 01:33:49.584708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.951 ms 00:20:53.895 [2024-09-28 01:33:49.584716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.895 [2024-09-28 01:33:49.594997] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:53.895 [2024-09-28 01:33:49.597486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:53.895 [2024-09-28 01:33:49.597517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:53.895 [2024-09-28 01:33:49.597530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.717 ms 00:20:53.895 [2024-09-28 01:33:49.597541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.895 [2024-09-28 01:33:49.597637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:53.895 [2024-09-28 01:33:49.597648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:53.895 [2024-09-28 01:33:49.597657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:53.895 [2024-09-28 01:33:49.597664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.895 [2024-09-28 01:33:49.598222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:53.895 [2024-09-28 01:33:49.598251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:53.895 [2024-09-28 01:33:49.598260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.521 ms 00:20:53.895 [2024-09-28 01:33:49.598267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.895 [2024-09-28 01:33:49.598294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:53.895 [2024-09-28 01:33:49.598303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:53.895 [2024-09-28 01:33:49.598310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:53.895 [2024-09-28 01:33:49.598318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.895 [2024-09-28 01:33:49.598348] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: 
[FTL][ftl0] Self test skipped 00:20:53.895 [2024-09-28 01:33:49.598358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:53.895 [2024-09-28 01:33:49.598366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:53.895 [2024-09-28 01:33:49.598377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:53.895 [2024-09-28 01:33:49.598384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.895 [2024-09-28 01:33:49.621430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:53.895 [2024-09-28 01:33:49.621466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:53.895 [2024-09-28 01:33:49.621477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.030 ms 00:20:53.895 [2024-09-28 01:33:49.621485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.895 [2024-09-28 01:33:49.621555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:53.895 [2024-09-28 01:33:49.621565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:53.895 [2024-09-28 01:33:49.621573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:20:53.895 [2024-09-28 01:33:49.621580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:53.895 [2024-09-28 01:33:49.622801] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 252.682 ms, result 0 00:21:16.077 Copying: 1024/1024 [MB] (average 48 MBps) [2024-09-28 01:34:11.772903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.077 [2024-09-28 01:34:11.772965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:16.077 [2024-09-28 01:34:11.772978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:16.077 [2024-09-28 01:34:11.772990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.077 [2024-09-28 01:34:11.773010] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:16.077 [2024-09-28 01:34:11.775598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.077 [2024-09-28 01:34:11.775625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:16.077 [2024-09-28 01:34:11.775635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.574 ms 00:21:16.077 [2024-09-28 01:34:11.775643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.077 [2024-09-28 01:34:11.775857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.077 [2024-09-28 01:34:11.775874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 
name: Stop core poller 00:21:16.077 [2024-09-28 01:34:11.775882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.195 ms 00:21:16.077 [2024-09-28 01:34:11.775889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.077 [2024-09-28 01:34:11.779332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.077 [2024-09-28 01:34:11.779349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:16.077 [2024-09-28 01:34:11.779357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.427 ms 00:21:16.077 [2024-09-28 01:34:11.779365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.077 [2024-09-28 01:34:11.787024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.077 [2024-09-28 01:34:11.787053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:16.077 [2024-09-28 01:34:11.787063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.644 ms 00:21:16.077 [2024-09-28 01:34:11.787072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.077 [2024-09-28 01:34:11.810615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.077 [2024-09-28 01:34:11.810643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:16.077 [2024-09-28 01:34:11.810654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.479 ms 00:21:16.077 [2024-09-28 01:34:11.810662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.077 [2024-09-28 01:34:11.824221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.077 [2024-09-28 01:34:11.824252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:16.077 [2024-09-28 01:34:11.824263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.541 ms 00:21:16.077 [2024-09-28 01:34:11.824271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.077 [2024-09-28 01:34:11.826171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.077 [2024-09-28 01:34:11.826220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:16.077 [2024-09-28 01:34:11.826232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.876 ms 00:21:16.077 [2024-09-28 01:34:11.826240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.077 [2024-09-28 01:34:11.849094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.077 [2024-09-28 01:34:11.849121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:16.077 [2024-09-28 01:34:11.849131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.839 ms 00:21:16.077 [2024-09-28 01:34:11.849139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.077 [2024-09-28 01:34:11.871950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.077 [2024-09-28 01:34:11.871976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:16.077 [2024-09-28 01:34:11.871985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.794 ms 00:21:16.077 [2024-09-28 01:34:11.871992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.077 [2024-09-28 01:34:11.894168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.077 [2024-09-28 
01:34:11.894207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:16.077 [2024-09-28 01:34:11.894217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.158 ms 00:21:16.077 [2024-09-28 01:34:11.894224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.077 [2024-09-28 01:34:11.916307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.077 [2024-09-28 01:34:11.916331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:16.077 [2024-09-28 01:34:11.916340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.045 ms 00:21:16.077 [2024-09-28 01:34:11.916348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.077 [2024-09-28 01:34:11.916365] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:16.077 [2024-09-28 01:34:11.916377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:21:16.077 [2024-09-28 01:34:11.916387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:21:16.077 [2024-09-28 01:34:11.916396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:16.077 [2024-09-28 01:34:11.916404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:16.077 [2024-09-28 01:34:11.916411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:16.077 [2024-09-28 01:34:11.916419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:16.077 [2024-09-28 01:34:11.916426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:16.077 [2024-09-28 01:34:11.916434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:16.077 [2024-09-28 01:34:11.916441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:16.077 [2024-09-28 01:34:11.916448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:16.077 [2024-09-28 01:34:11.916455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:16.077 [2024-09-28 01:34:11.916464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:16.077 [2024-09-28 01:34:11.916471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:16.077 [2024-09-28 01:34:11.916479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:16.077 [2024-09-28 01:34:11.916486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:16.077 [2024-09-28 01:34:11.916493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:16.077 [2024-09-28 01:34:11.916502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:16.077 [2024-09-28 01:34:11.916509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:16.077 [2024-09-28 01:34:11.916516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:16.077 [2024-09-28 01:34:11.916523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:16.077 [2024-09-28 01:34:11.916530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:16.077 [2024-09-28 01:34:11.916537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:16.077 [2024-09-28 01:34:11.916545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:16.077 [2024-09-28 01:34:11.916552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:16.077 [2024-09-28 01:34:11.916559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:16.077 [2024-09-28 01:34:11.916566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:16.077 [2024-09-28 01:34:11.916573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:16.077 [2024-09-28 01:34:11.916580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:16.077 [2024-09-28 01:34:11.916588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:16.077 [2024-09-28 01:34:11.916595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:16.077 [2024-09-28 01:34:11.916602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:16.077 [2024-09-28 01:34:11.916610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:16.077 [2024-09-28 01:34:11.916617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:16.077 [2024-09-28 01:34:11.916624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:16.077 [2024-09-28 01:34:11.916631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:16.077 [2024-09-28 01:34:11.916638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:16.077 [2024-09-28 01:34:11.916646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:16.077 [2024-09-28 01:34:11.916653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:16.078 [2024-09-28 01:34:11.916660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:16.078 [2024-09-28 01:34:11.916667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:16.078 [2024-09-28 01:34:11.916674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:16.078 [2024-09-28 01:34:11.916681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:16.078 [2024-09-28 01:34:11.916689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:16.078 [2024-09-28 01:34:11.916696] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:16.078 [2024-09-28 01:34:11.916703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:16.078 [2024-09-28 01:34:11.916710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:16.078 [2024-09-28 01:34:11.916717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:16.078 [2024-09-28 01:34:11.916725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:16.078 [2024-09-28 01:34:11.916735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:16.078 [2024-09-28 01:34:11.916743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:16.078 [2024-09-28 01:34:11.916750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:16.078 [2024-09-28 01:34:11.916757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:16.078 [2024-09-28 01:34:11.916764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:16.078 [2024-09-28 01:34:11.916771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:16.078 [2024-09-28 01:34:11.916778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:16.078 [2024-09-28 01:34:11.916785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:16.078 [2024-09-28 01:34:11.916792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:16.078 [2024-09-28 01:34:11.916799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:16.078 [2024-09-28 01:34:11.916806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:16.078 [2024-09-28 01:34:11.916813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:16.078 [2024-09-28 01:34:11.916826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:16.078 [2024-09-28 01:34:11.916834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:16.078 [2024-09-28 01:34:11.916841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:16.078 [2024-09-28 01:34:11.916848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:16.078 [2024-09-28 01:34:11.916856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:16.078 [2024-09-28 01:34:11.916863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:16.078 [2024-09-28 01:34:11.916871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:16.078 [2024-09-28 01:34:11.916878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:16.078 [2024-09-28 
01:34:11.916885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:16.078 [2024-09-28 01:34:11.916892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:16.078 [2024-09-28 01:34:11.916900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:16.078 [2024-09-28 01:34:11.916907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:16.078 [2024-09-28 01:34:11.916914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:16.078 [2024-09-28 01:34:11.916921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:16.078 [2024-09-28 01:34:11.916928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:16.078 [2024-09-28 01:34:11.916935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:16.078 [2024-09-28 01:34:11.916943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:16.078 [2024-09-28 01:34:11.916950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:16.078 [2024-09-28 01:34:11.916957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:16.078 [2024-09-28 01:34:11.916965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:16.078 [2024-09-28 01:34:11.916973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:16.078 [2024-09-28 01:34:11.916981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:16.078 [2024-09-28 01:34:11.916988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:16.078 [2024-09-28 01:34:11.916995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:16.078 [2024-09-28 01:34:11.917002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:16.078 [2024-09-28 01:34:11.917009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:16.078 [2024-09-28 01:34:11.917016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:16.078 [2024-09-28 01:34:11.917024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:16.078 [2024-09-28 01:34:11.917031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:16.078 [2024-09-28 01:34:11.917038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:16.078 [2024-09-28 01:34:11.917045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:16.078 [2024-09-28 01:34:11.917052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:16.078 [2024-09-28 01:34:11.917058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 
00:21:16.078 [2024-09-28 01:34:11.917066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:16.078 [2024-09-28 01:34:11.917073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:16.078 [2024-09-28 01:34:11.917080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:16.078 [2024-09-28 01:34:11.917087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:16.078 [2024-09-28 01:34:11.917094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:16.078 [2024-09-28 01:34:11.917102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:16.078 [2024-09-28 01:34:11.917109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:16.078 [2024-09-28 01:34:11.917124] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:16.078 [2024-09-28 01:34:11.917132] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0e3f423b-615b-429f-bbea-6f306275c43f 00:21:16.078 [2024-09-28 01:34:11.917139] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:21:16.078 [2024-09-28 01:34:11.917146] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:16.078 [2024-09-28 01:34:11.917153] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:16.078 [2024-09-28 01:34:11.917160] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:16.078 [2024-09-28 01:34:11.917167] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:16.078 [2024-09-28 01:34:11.917178] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:16.078 [2024-09-28 01:34:11.917185] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:16.078 [2024-09-28 01:34:11.917200] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:16.078 [2024-09-28 01:34:11.917207] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:16.078 [2024-09-28 01:34:11.917213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.078 [2024-09-28 01:34:11.917228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:16.078 [2024-09-28 01:34:11.917236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.849 ms 00:21:16.078 [2024-09-28 01:34:11.917243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.078 [2024-09-28 01:34:11.929206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.078 [2024-09-28 01:34:11.929223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:16.078 [2024-09-28 01:34:11.929232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.947 ms 00:21:16.078 [2024-09-28 01:34:11.929243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.078 [2024-09-28 01:34:11.929573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.078 [2024-09-28 01:34:11.929588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:16.078 [2024-09-28 01:34:11.929596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.314 ms 00:21:16.078 [2024-09-28 
01:34:11.929604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.078 [2024-09-28 01:34:11.957093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:16.078 [2024-09-28 01:34:11.957127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:16.078 [2024-09-28 01:34:11.957136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:16.078 [2024-09-28 01:34:11.957144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.078 [2024-09-28 01:34:11.957212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:16.078 [2024-09-28 01:34:11.957221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:16.078 [2024-09-28 01:34:11.957229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:16.078 [2024-09-28 01:34:11.957236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.078 [2024-09-28 01:34:11.957296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:16.078 [2024-09-28 01:34:11.957306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:16.078 [2024-09-28 01:34:11.957314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:16.078 [2024-09-28 01:34:11.957323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.079 [2024-09-28 01:34:11.957338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:16.079 [2024-09-28 01:34:11.957345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:16.079 [2024-09-28 01:34:11.957352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:16.079 [2024-09-28 01:34:11.957359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.336 [2024-09-28 01:34:12.034050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:16.337 [2024-09-28 01:34:12.034097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:16.337 [2024-09-28 01:34:12.034112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:16.337 [2024-09-28 01:34:12.034120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.337 [2024-09-28 01:34:12.097070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:16.337 [2024-09-28 01:34:12.097111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:16.337 [2024-09-28 01:34:12.097123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:16.337 [2024-09-28 01:34:12.097131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.337 [2024-09-28 01:34:12.097208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:16.337 [2024-09-28 01:34:12.097218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:16.337 [2024-09-28 01:34:12.097226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:16.337 [2024-09-28 01:34:12.097233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.337 [2024-09-28 01:34:12.097272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:16.337 [2024-09-28 01:34:12.097280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:16.337 [2024-09-28 01:34:12.097288] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:16.337 [2024-09-28 01:34:12.097295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.337 [2024-09-28 01:34:12.097381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:16.337 [2024-09-28 01:34:12.097391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:16.337 [2024-09-28 01:34:12.097399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:16.337 [2024-09-28 01:34:12.097407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.337 [2024-09-28 01:34:12.097436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:16.337 [2024-09-28 01:34:12.097450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:16.337 [2024-09-28 01:34:12.097458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:16.337 [2024-09-28 01:34:12.097465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.337 [2024-09-28 01:34:12.097497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:16.337 [2024-09-28 01:34:12.097505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:16.337 [2024-09-28 01:34:12.097512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:16.337 [2024-09-28 01:34:12.097519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.337 [2024-09-28 01:34:12.097560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:16.337 [2024-09-28 01:34:12.097571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:16.337 [2024-09-28 01:34:12.097578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:16.337 [2024-09-28 01:34:12.097586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.337 [2024-09-28 01:34:12.097692] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 324.764 ms, result 0 00:21:17.271 00:21:17.271 00:21:17.271 01:34:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:21:19.218 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:21:19.218 01:34:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:21:19.218 01:34:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:21:19.218 01:34:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:19.218 01:34:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:21:19.218 01:34:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:21:19.475 01:34:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:21:19.475 01:34:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:21:19.475 01:34:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 75919 00:21:19.475 01:34:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@950 -- # '[' -z 75919 ']' 00:21:19.475 01:34:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # kill -0 75919 00:21:19.475 
/home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (75919) - No such process 00:21:19.475 Process with pid 75919 is not found 00:21:19.475 01:34:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@977 -- # echo 'Process with pid 75919 is not found' 00:21:19.475 01:34:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:21:19.733 01:34:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:21:19.733 Remove shared memory files 00:21:19.733 01:34:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:21:19.733 01:34:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:21:19.733 01:34:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:21:19.733 01:34:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:21:19.733 01:34:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:21:19.733 01:34:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:21:19.733 00:21:19.733 real 2m18.813s 00:21:19.733 user 2m35.837s 00:21:19.733 sys 0m22.666s 00:21:19.733 01:34:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:19.733 01:34:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:19.733 ************************************ 00:21:19.733 END TEST ftl_dirty_shutdown 00:21:19.733 ************************************ 00:21:19.733 01:34:15 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:21:19.733 01:34:15 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:21:19.733 01:34:15 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:19.733 01:34:15 ftl -- common/autotest_common.sh@10 -- # set +x 00:21:19.733 ************************************ 00:21:19.733 START TEST ftl_upgrade_shutdown 00:21:19.733 ************************************ 00:21:19.733 01:34:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:21:19.733 * Looking for test storage... 
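The dirty-shutdown run that finished above follows a simple pattern: write data through the FTL bdev with spdk_dd, record its md5, kill the target while the device is still marked dirty, restart it so FTL recovers from the NV cache, then read the data back and check it with md5sum -c. The sketch below reconstructs that flow from the log alone; only --ib/--of/--count/--skip/--json and the kill -0 probe are confirmed by the log itself, while the --if/--ob direction, the paths, and the svc_pid variable are assumptions for illustration, not the authoritative dirty_shutdown.sh.

#!/usr/bin/env bash
# Schematic reconstruction of the dirty-shutdown verification flow; not the real test script.
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
TESTDIR=/home/vagrant/spdk_repo/spdk/test/ftl
FTL_JSON=$TESTDIR/config/ftl.json

# 1. Write through the FTL bdev, read it back, and record the md5.
"$SPDK_DD" --if=/dev/urandom --ob=ftl0 --count=262144 --json="$FTL_JSON"
"$SPDK_DD" --ib=ftl0 --of="$TESTDIR/testfile" --count=262144 --json="$FTL_JSON"
md5sum "$TESTDIR/testfile" > "$TESTDIR/testfile.md5"

# 2. Kill the target without a clean FTL shutdown (the dirty state stays set),
#    then restart it so FTL recovers from its NV cache. kill -0 only probes
#    whether the pid still exists, the same idiom killprocess() uses above.
kill -9 "$svc_pid"
while kill -0 "$svc_pid" 2>/dev/null; do sleep 0.1; done
# ... restart the target and re-create ftl0 from $FTL_JSON ...

# 3. Read back through the recovered device and verify the checksum.
"$SPDK_DD" --ib=ftl0 --of="$TESTDIR/testfile" --count=262144 --json="$FTL_JSON"
md5sum -c "$TESTDIR/testfile.md5"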
00:21:19.733 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:21:19.733 01:34:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:21:19.733 01:34:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:21:19.733 01:34:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1681 -- # lcov --version 00:21:19.733 01:34:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:21:19.733 01:34:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:19.733 01:34:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:19.733 01:34:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:19.733 01:34:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:21:19.733 01:34:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:21:19.733 01:34:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:21:19.733 01:34:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:21:19.733 01:34:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:21:19.733 01:34:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:21:19.733 01:34:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:21:19.733 01:34:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:19.733 01:34:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:21:19.733 01:34:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:21:19.733 01:34:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:19.733 01:34:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:19.733 01:34:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:21:19.733 01:34:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:21:19.733 01:34:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:19.733 01:34:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:21:19.733 01:34:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:21:19.733 01:34:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:21:19.733 01:34:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:21:19.733 01:34:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:19.733 01:34:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:21:19.733 01:34:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:21:19.733 01:34:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:19.733 01:34:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:19.733 01:34:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:21:19.733 01:34:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:19.733 01:34:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:21:19.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:19.733 --rc genhtml_branch_coverage=1 00:21:19.733 --rc genhtml_function_coverage=1 00:21:19.733 --rc genhtml_legend=1 00:21:19.733 --rc geninfo_all_blocks=1 00:21:19.733 --rc geninfo_unexecuted_blocks=1 00:21:19.733 00:21:19.733 ' 00:21:19.733 01:34:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:21:19.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:19.733 --rc genhtml_branch_coverage=1 00:21:19.733 --rc genhtml_function_coverage=1 00:21:19.733 --rc genhtml_legend=1 00:21:19.733 --rc geninfo_all_blocks=1 00:21:19.733 --rc geninfo_unexecuted_blocks=1 00:21:19.733 00:21:19.733 ' 00:21:19.733 01:34:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:21:19.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:19.734 --rc genhtml_branch_coverage=1 00:21:19.734 --rc genhtml_function_coverage=1 00:21:19.734 --rc genhtml_legend=1 00:21:19.734 --rc geninfo_all_blocks=1 00:21:19.734 --rc geninfo_unexecuted_blocks=1 00:21:19.734 00:21:19.734 ' 00:21:19.734 01:34:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:21:19.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:19.734 --rc genhtml_branch_coverage=1 00:21:19.734 --rc genhtml_function_coverage=1 00:21:19.734 --rc genhtml_legend=1 00:21:19.734 --rc geninfo_all_blocks=1 00:21:19.734 --rc geninfo_unexecuted_blocks=1 00:21:19.734 00:21:19.734 ' 00:21:19.734 01:34:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:21:19.734 01:34:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:21:19.734 01:34:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:21:19.734 01:34:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:21:19.734 01:34:15 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:21:19.734 01:34:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:21:19.734 01:34:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:19.734 01:34:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:21:19.734 01:34:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:21:19.734 01:34:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:19.734 01:34:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:19.734 01:34:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:21:19.734 01:34:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:21:19.734 01:34:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:19.734 01:34:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:19.734 01:34:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:21:19.734 01:34:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:21:19.734 01:34:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:19.734 01:34:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:19.734 01:34:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:21:19.734 01:34:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:21:19.734 01:34:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:19.734 01:34:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:19.734 01:34:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:19.734 01:34:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:19.734 01:34:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:21:19.734 01:34:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:21:19.734 01:34:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:19.734 01:34:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:19.734 01:34:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:19.734 01:34:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:21:19.734 01:34:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:21:19.734 01:34:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:21:19.734 01:34:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:21:19.734 01:34:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:21:19.734 01:34:15 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:21:19.734 01:34:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:21:19.734 01:34:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:21:19.734 01:34:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:21:19.734 01:34:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:21:19.734 01:34:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:21:19.734 01:34:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:21:19.734 01:34:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:21:19.734 01:34:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:21:19.734 01:34:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:21:19.734 01:34:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:21:19.734 01:34:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=77502 00:21:19.734 01:34:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:21:19.734 01:34:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 77502 00:21:19.734 01:34:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 77502 ']' 00:21:19.734 01:34:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:19.734 01:34:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:19.734 01:34:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:21:19.734 01:34:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:19.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:19.734 01:34:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:19.734 01:34:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:19.991 [2024-09-28 01:34:15.734862] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
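Condensed from the exports traced above, the environment for this run amounts to the following (values exactly as exported here; sizes in MiB, the L2P limit matching the "1 (of 2) MiB" resident-size notice later in the log):

  export FTL_BDEV=ftl               # name of the FTL bdev under test
  export FTL_BASE=0000:00:11.0      # base NVMe device
  export FTL_BASE_SIZE=20480        # requested base size, MiB
  export FTL_CACHE=0000:00:10.0     # NV cache NVMe device
  export FTL_CACHE_SIZE=5120        # cache size, MiB
  export FTL_L2P_DRAM_LIMIT=2       # L2P DRAM cap, MiB

  # tcp_target_setup then launches a single-core target and waits on its RPC socket:
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' &
  spdk_tgt_pid=$!                   # pid 77502 in this run
  waitforlisten $spdk_tgt_pid       # blocks until /var/tmp/spdk.sock answers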
00:21:19.992 [2024-09-28 01:34:15.735376] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77502 ] 00:21:19.992 [2024-09-28 01:34:15.878624] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:20.249 [2024-09-28 01:34:16.057143] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:21:20.813 01:34:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:20.813 01:34:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:21:20.813 01:34:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:21:20.813 01:34:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:21:20.813 01:34:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:21:20.813 01:34:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:21:20.813 01:34:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:21:20.813 01:34:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:21:20.813 01:34:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:21:20.813 01:34:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:21:20.813 01:34:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:21:20.813 01:34:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:21:20.813 01:34:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:21:20.813 01:34:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:21:20.813 01:34:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:21:20.813 01:34:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:21:20.813 01:34:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:21:20.813 01:34:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:21:20.813 01:34:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:21:20.813 01:34:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:21:20.813 01:34:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:21:20.813 01:34:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:21:20.813 01:34:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:21:21.072 01:34:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:21:21.072 01:34:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:21:21.072 01:34:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:21:21.072 01:34:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=basen1 00:21:21.072 01:34:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:21:21.072 01:34:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:21:21.072 01:34:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 
-- # local nb 00:21:21.072 01:34:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:21:21.330 01:34:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:21:21.330 { 00:21:21.330 "name": "basen1", 00:21:21.330 "aliases": [ 00:21:21.330 "92567e3e-5053-4f4e-87b2-40aa5bcb0502" 00:21:21.330 ], 00:21:21.330 "product_name": "NVMe disk", 00:21:21.330 "block_size": 4096, 00:21:21.330 "num_blocks": 1310720, 00:21:21.330 "uuid": "92567e3e-5053-4f4e-87b2-40aa5bcb0502", 00:21:21.330 "numa_id": -1, 00:21:21.330 "assigned_rate_limits": { 00:21:21.330 "rw_ios_per_sec": 0, 00:21:21.330 "rw_mbytes_per_sec": 0, 00:21:21.330 "r_mbytes_per_sec": 0, 00:21:21.330 "w_mbytes_per_sec": 0 00:21:21.330 }, 00:21:21.330 "claimed": true, 00:21:21.330 "claim_type": "read_many_write_one", 00:21:21.330 "zoned": false, 00:21:21.330 "supported_io_types": { 00:21:21.330 "read": true, 00:21:21.330 "write": true, 00:21:21.330 "unmap": true, 00:21:21.330 "flush": true, 00:21:21.330 "reset": true, 00:21:21.330 "nvme_admin": true, 00:21:21.330 "nvme_io": true, 00:21:21.330 "nvme_io_md": false, 00:21:21.330 "write_zeroes": true, 00:21:21.330 "zcopy": false, 00:21:21.330 "get_zone_info": false, 00:21:21.330 "zone_management": false, 00:21:21.330 "zone_append": false, 00:21:21.330 "compare": true, 00:21:21.330 "compare_and_write": false, 00:21:21.330 "abort": true, 00:21:21.330 "seek_hole": false, 00:21:21.330 "seek_data": false, 00:21:21.330 "copy": true, 00:21:21.330 "nvme_iov_md": false 00:21:21.330 }, 00:21:21.330 "driver_specific": { 00:21:21.330 "nvme": [ 00:21:21.330 { 00:21:21.330 "pci_address": "0000:00:11.0", 00:21:21.330 "trid": { 00:21:21.330 "trtype": "PCIe", 00:21:21.330 "traddr": "0000:00:11.0" 00:21:21.330 }, 00:21:21.330 "ctrlr_data": { 00:21:21.330 "cntlid": 0, 00:21:21.330 "vendor_id": "0x1b36", 00:21:21.330 "model_number": "QEMU NVMe Ctrl", 00:21:21.330 "serial_number": "12341", 00:21:21.330 "firmware_revision": "8.0.0", 00:21:21.330 "subnqn": "nqn.2019-08.org.qemu:12341", 00:21:21.330 "oacs": { 00:21:21.330 "security": 0, 00:21:21.330 "format": 1, 00:21:21.330 "firmware": 0, 00:21:21.330 "ns_manage": 1 00:21:21.330 }, 00:21:21.330 "multi_ctrlr": false, 00:21:21.330 "ana_reporting": false 00:21:21.330 }, 00:21:21.330 "vs": { 00:21:21.330 "nvme_version": "1.4" 00:21:21.330 }, 00:21:21.330 "ns_data": { 00:21:21.330 "id": 1, 00:21:21.330 "can_share": false 00:21:21.330 } 00:21:21.330 } 00:21:21.330 ], 00:21:21.330 "mp_policy": "active_passive" 00:21:21.330 } 00:21:21.330 } 00:21:21.330 ]' 00:21:21.330 01:34:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:21:21.330 01:34:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:21:21.330 01:34:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:21:21.330 01:34:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # nb=1310720 00:21:21.330 01:34:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:21:21.330 01:34:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # echo 5120 00:21:21.330 01:34:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:21:21.330 01:34:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:21:21.330 01:34:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:21:21.330 01:34:17 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:21:21.330 01:34:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:21.588 01:34:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=21bd9d3b-4dd8-4cdf-ad00-4db8ca068a76 00:21:21.588 01:34:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:21:21.588 01:34:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 21bd9d3b-4dd8-4cdf-ad00-4db8ca068a76 00:21:21.846 01:34:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:21:22.104 01:34:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=90db5943-eb6a-4f09-a3f0-92863c618a7e 00:21:22.104 01:34:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 90db5943-eb6a-4f09-a3f0-92863c618a7e 00:21:22.104 01:34:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=bbb21a89-d330-4e88-9663-11639970549d 00:21:22.104 01:34:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z bbb21a89-d330-4e88-9663-11639970549d ]] 00:21:22.104 01:34:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 bbb21a89-d330-4e88-9663-11639970549d 5120 00:21:22.104 01:34:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:21:22.104 01:34:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:21:22.104 01:34:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=bbb21a89-d330-4e88-9663-11639970549d 00:21:22.104 01:34:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:21:22.104 01:34:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size bbb21a89-d330-4e88-9663-11639970549d 00:21:22.104 01:34:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=bbb21a89-d330-4e88-9663-11639970549d 00:21:22.104 01:34:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:21:22.104 01:34:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:21:22.104 01:34:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:21:22.104 01:34:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b bbb21a89-d330-4e88-9663-11639970549d 00:21:22.361 01:34:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:21:22.361 { 00:21:22.361 "name": "bbb21a89-d330-4e88-9663-11639970549d", 00:21:22.361 "aliases": [ 00:21:22.361 "lvs/basen1p0" 00:21:22.361 ], 00:21:22.361 "product_name": "Logical Volume", 00:21:22.361 "block_size": 4096, 00:21:22.361 "num_blocks": 5242880, 00:21:22.361 "uuid": "bbb21a89-d330-4e88-9663-11639970549d", 00:21:22.361 "assigned_rate_limits": { 00:21:22.361 "rw_ios_per_sec": 0, 00:21:22.361 "rw_mbytes_per_sec": 0, 00:21:22.361 "r_mbytes_per_sec": 0, 00:21:22.361 "w_mbytes_per_sec": 0 00:21:22.361 }, 00:21:22.361 "claimed": false, 00:21:22.361 "zoned": false, 00:21:22.361 "supported_io_types": { 00:21:22.361 "read": true, 00:21:22.361 "write": true, 00:21:22.361 "unmap": true, 00:21:22.361 "flush": false, 00:21:22.361 "reset": true, 00:21:22.361 "nvme_admin": false, 00:21:22.361 "nvme_io": false, 00:21:22.361 "nvme_io_md": false, 00:21:22.361 "write_zeroes": 
true, 00:21:22.361 "zcopy": false, 00:21:22.361 "get_zone_info": false, 00:21:22.361 "zone_management": false, 00:21:22.361 "zone_append": false, 00:21:22.361 "compare": false, 00:21:22.361 "compare_and_write": false, 00:21:22.361 "abort": false, 00:21:22.361 "seek_hole": true, 00:21:22.361 "seek_data": true, 00:21:22.361 "copy": false, 00:21:22.361 "nvme_iov_md": false 00:21:22.361 }, 00:21:22.361 "driver_specific": { 00:21:22.361 "lvol": { 00:21:22.361 "lvol_store_uuid": "90db5943-eb6a-4f09-a3f0-92863c618a7e", 00:21:22.361 "base_bdev": "basen1", 00:21:22.361 "thin_provision": true, 00:21:22.361 "num_allocated_clusters": 0, 00:21:22.362 "snapshot": false, 00:21:22.362 "clone": false, 00:21:22.362 "esnap_clone": false 00:21:22.362 } 00:21:22.362 } 00:21:22.362 } 00:21:22.362 ]' 00:21:22.362 01:34:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:21:22.362 01:34:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:21:22.362 01:34:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:21:22.362 01:34:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # nb=5242880 00:21:22.362 01:34:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=20480 00:21:22.362 01:34:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # echo 20480 00:21:22.362 01:34:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:21:22.362 01:34:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:21:22.362 01:34:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:21:22.620 01:34:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:21:22.620 01:34:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:21:22.620 01:34:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:21:22.878 01:34:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:21:22.878 01:34:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:21:22.878 01:34:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d bbb21a89-d330-4e88-9663-11639970549d -c cachen1p0 --l2p_dram_limit 2 00:21:23.136 [2024-09-28 01:34:18.922277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:23.136 [2024-09-28 01:34:18.922320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:21:23.136 [2024-09-28 01:34:18.922333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:21:23.136 [2024-09-28 01:34:18.922339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:23.136 [2024-09-28 01:34:18.922383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:23.136 [2024-09-28 01:34:18.922391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:21:23.136 [2024-09-28 01:34:18.922402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:21:23.136 [2024-09-28 01:34:18.922408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:23.136 [2024-09-28 01:34:18.922426] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:21:23.136 [2024-09-28 
01:34:18.923031] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:21:23.136 [2024-09-28 01:34:18.923050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:23.136 [2024-09-28 01:34:18.923056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:21:23.136 [2024-09-28 01:34:18.923066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.627 ms 00:21:23.136 [2024-09-28 01:34:18.923073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:23.136 [2024-09-28 01:34:18.923099] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID f9de8392-050f-423e-8d61-7e1879835680 00:21:23.136 [2024-09-28 01:34:18.924054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:23.136 [2024-09-28 01:34:18.924083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:21:23.136 [2024-09-28 01:34:18.924091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:21:23.136 [2024-09-28 01:34:18.924098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:23.136 [2024-09-28 01:34:18.928973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:23.136 [2024-09-28 01:34:18.929091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:21:23.136 [2024-09-28 01:34:18.929104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.817 ms 00:21:23.136 [2024-09-28 01:34:18.929111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:23.136 [2024-09-28 01:34:18.929145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:23.137 [2024-09-28 01:34:18.929155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:21:23.137 [2024-09-28 01:34:18.929162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:21:23.137 [2024-09-28 01:34:18.929172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:23.137 [2024-09-28 01:34:18.929220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:23.137 [2024-09-28 01:34:18.929230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:21:23.137 [2024-09-28 01:34:18.929237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:21:23.137 [2024-09-28 01:34:18.929244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:23.137 [2024-09-28 01:34:18.929261] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:21:23.137 [2024-09-28 01:34:18.932256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:23.137 [2024-09-28 01:34:18.932280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:21:23.137 [2024-09-28 01:34:18.932289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.999 ms 00:21:23.137 [2024-09-28 01:34:18.932295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:23.137 [2024-09-28 01:34:18.932319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:23.137 [2024-09-28 01:34:18.932325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:21:23.137 [2024-09-28 01:34:18.932333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:21:23.137 [2024-09-28 01:34:18.932340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:21:23.137 [2024-09-28 01:34:18.932360] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:21:23.137 [2024-09-28 01:34:18.932464] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:21:23.137 [2024-09-28 01:34:18.932476] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:21:23.137 [2024-09-28 01:34:18.932485] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:21:23.137 [2024-09-28 01:34:18.932496] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:21:23.137 [2024-09-28 01:34:18.932503] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:21:23.137 [2024-09-28 01:34:18.932510] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:21:23.137 [2024-09-28 01:34:18.932515] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:21:23.137 [2024-09-28 01:34:18.932522] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:21:23.137 [2024-09-28 01:34:18.932529] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:21:23.137 [2024-09-28 01:34:18.932536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:23.137 [2024-09-28 01:34:18.932542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:21:23.137 [2024-09-28 01:34:18.932549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.177 ms 00:21:23.137 [2024-09-28 01:34:18.932555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:23.137 [2024-09-28 01:34:18.932619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:23.137 [2024-09-28 01:34:18.932634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:21:23.137 [2024-09-28 01:34:18.932641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.051 ms 00:21:23.137 [2024-09-28 01:34:18.932646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:23.137 [2024-09-28 01:34:18.932719] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:21:23.137 [2024-09-28 01:34:18.932727] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:21:23.137 [2024-09-28 01:34:18.932734] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:21:23.137 [2024-09-28 01:34:18.932740] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:21:23.137 [2024-09-28 01:34:18.932747] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:21:23.137 [2024-09-28 01:34:18.932753] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:21:23.137 [2024-09-28 01:34:18.932760] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:21:23.137 [2024-09-28 01:34:18.932766] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:21:23.137 [2024-09-28 01:34:18.932773] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:21:23.137 [2024-09-28 01:34:18.932778] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:21:23.137 [2024-09-28 01:34:18.932785] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:21:23.137 [2024-09-28 01:34:18.932790] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:21:23.137 [2024-09-28 01:34:18.932802] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:21:23.137 [2024-09-28 01:34:18.932807] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:21:23.137 [2024-09-28 01:34:18.932816] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:21:23.137 [2024-09-28 01:34:18.932828] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:21:23.137 [2024-09-28 01:34:18.932835] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:21:23.137 [2024-09-28 01:34:18.932840] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:21:23.137 [2024-09-28 01:34:18.932847] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:21:23.137 [2024-09-28 01:34:18.932852] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:21:23.137 [2024-09-28 01:34:18.932859] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:21:23.137 [2024-09-28 01:34:18.932864] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:21:23.137 [2024-09-28 01:34:18.932872] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:21:23.137 [2024-09-28 01:34:18.932877] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:21:23.137 [2024-09-28 01:34:18.932883] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:21:23.137 [2024-09-28 01:34:18.932888] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:21:23.137 [2024-09-28 01:34:18.932895] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:21:23.137 [2024-09-28 01:34:18.932900] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:21:23.137 [2024-09-28 01:34:18.932906] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:21:23.137 [2024-09-28 01:34:18.932911] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:21:23.137 [2024-09-28 01:34:18.932917] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:21:23.137 [2024-09-28 01:34:18.932922] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:21:23.137 [2024-09-28 01:34:18.932930] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:21:23.137 [2024-09-28 01:34:18.932935] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:21:23.137 [2024-09-28 01:34:18.932941] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:21:23.137 [2024-09-28 01:34:18.932946] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:21:23.137 [2024-09-28 01:34:18.932952] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:21:23.137 [2024-09-28 01:34:18.932956] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:21:23.137 [2024-09-28 01:34:18.932963] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:21:23.137 [2024-09-28 01:34:18.932968] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:21:23.137 [2024-09-28 01:34:18.932974] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:21:23.137 [2024-09-28 01:34:18.932979] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:21:23.137 [2024-09-28 01:34:18.932986] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:21:23.137 [2024-09-28 01:34:18.932990] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:21:23.137 [2024-09-28 01:34:18.932998] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:21:23.137 [2024-09-28 01:34:18.933004] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:21:23.137 [2024-09-28 01:34:18.933011] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:21:23.137 [2024-09-28 01:34:18.933017] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:21:23.137 [2024-09-28 01:34:18.933026] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:21:23.137 [2024-09-28 01:34:18.933031] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:21:23.137 [2024-09-28 01:34:18.933038] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:21:23.137 [2024-09-28 01:34:18.933042] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:21:23.137 [2024-09-28 01:34:18.933049] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:21:23.138 [2024-09-28 01:34:18.933057] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:21:23.138 [2024-09-28 01:34:18.933066] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:23.138 [2024-09-28 01:34:18.933072] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:21:23.138 [2024-09-28 01:34:18.933079] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:21:23.138 [2024-09-28 01:34:18.933085] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:21:23.138 [2024-09-28 01:34:18.933091] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:21:23.138 [2024-09-28 01:34:18.933096] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:21:23.138 [2024-09-28 01:34:18.933103] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:21:23.138 [2024-09-28 01:34:18.933109] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:21:23.138 [2024-09-28 01:34:18.933115] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:21:23.138 [2024-09-28 01:34:18.933120] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:21:23.138 [2024-09-28 01:34:18.933128] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:21:23.138 [2024-09-28 01:34:18.933134] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:21:23.138 [2024-09-28 01:34:18.933140] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:21:23.138 [2024-09-28 01:34:18.933145] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:21:23.138 [2024-09-28 01:34:18.933152] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:21:23.138 [2024-09-28 01:34:18.933157] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:21:23.138 [2024-09-28 01:34:18.933164] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:23.138 [2024-09-28 01:34:18.933170] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:23.138 [2024-09-28 01:34:18.933178] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:21:23.138 [2024-09-28 01:34:18.933183] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:21:23.138 [2024-09-28 01:34:18.933190] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:21:23.138 [2024-09-28 01:34:18.933205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:23.138 [2024-09-28 01:34:18.933213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:21:23.138 [2024-09-28 01:34:18.933218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.539 ms 00:21:23.138 [2024-09-28 01:34:18.933225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:23.138 [2024-09-28 01:34:18.933268] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
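With the layout verified above, the device assembly is complete. Collapsed into its RPC sequence (commands and UUIDs exactly as traced in this run; a stale lvstore found by clear_lvols was deleted first):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0    # -> basen1 (1310720 x 4 KiB blocks, i.e. 5 GiB)
  $rpc bdev_lvol_create_lvstore basen1 lvs                            # -> lvstore 90db5943-...
  $rpc bdev_lvol_create basen1p0 20480 -t \
      -u 90db5943-eb6a-4f09-a3f0-92863c618a7e                         # thin 20 GiB lvol (-t makes this legal on a 5 GiB store) -> bbb21a89-...
  $rpc bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0   # -> cachen1
  $rpc bdev_split_create cachen1 -s 5120 1                            # one 5 GiB split -> cachen1p0
  $rpc -t 60 bdev_ftl_create -b ftl \
      -d bbb21a89-d330-4e88-9663-11639970549d -c cachen1p0 \
      --l2p_dram_limit 2                                              # FTL bdev 'ftl', UUID f9de8392-...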
00:21:23.138 [2024-09-28 01:34:18.933279] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:21:25.035 [2024-09-28 01:34:20.911917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:25.035 [2024-09-28 01:34:20.911973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:21:25.035 [2024-09-28 01:34:20.911989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1978.641 ms 00:21:25.035 [2024-09-28 01:34:20.911999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:25.035 [2024-09-28 01:34:20.936872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:25.035 [2024-09-28 01:34:20.936915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:21:25.035 [2024-09-28 01:34:20.936927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.651 ms 00:21:25.035 [2024-09-28 01:34:20.936937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:25.035 [2024-09-28 01:34:20.937011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:25.035 [2024-09-28 01:34:20.937023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:21:25.035 [2024-09-28 01:34:20.937032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:21:25.035 [2024-09-28 01:34:20.937046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:25.294 [2024-09-28 01:34:20.979730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:25.294 [2024-09-28 01:34:20.979776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:21:25.294 [2024-09-28 01:34:20.979791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 42.648 ms 00:21:25.294 [2024-09-28 01:34:20.979803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:25.294 [2024-09-28 01:34:20.979841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:25.294 [2024-09-28 01:34:20.979851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:21:25.294 [2024-09-28 01:34:20.979860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:21:25.294 [2024-09-28 01:34:20.979869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:25.294 [2024-09-28 01:34:20.980223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:25.294 [2024-09-28 01:34:20.980249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:21:25.294 [2024-09-28 01:34:20.980264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.301 ms 00:21:25.294 [2024-09-28 01:34:20.980276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:25.294 [2024-09-28 01:34:20.980314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:25.294 [2024-09-28 01:34:20.980324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:21:25.294 [2024-09-28 01:34:20.980331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:21:25.294 [2024-09-28 01:34:20.980342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:25.294 [2024-09-28 01:34:20.997638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:25.294 [2024-09-28 01:34:20.997669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:21:25.294 [2024-09-28 01:34:20.997679] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.279 ms 00:21:25.294 [2024-09-28 01:34:20.997688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:25.294 [2024-09-28 01:34:21.008864] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:21:25.294 [2024-09-28 01:34:21.009767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:25.294 [2024-09-28 01:34:21.009796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:21:25.294 [2024-09-28 01:34:21.009810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.990 ms 00:21:25.294 [2024-09-28 01:34:21.009818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:25.294 [2024-09-28 01:34:21.035684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:25.294 [2024-09-28 01:34:21.035831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:21:25.294 [2024-09-28 01:34:21.035857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.835 ms 00:21:25.294 [2024-09-28 01:34:21.035866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:25.294 [2024-09-28 01:34:21.036244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:25.294 [2024-09-28 01:34:21.036272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:21:25.294 [2024-09-28 01:34:21.036287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.088 ms 00:21:25.294 [2024-09-28 01:34:21.036296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:25.294 [2024-09-28 01:34:21.059016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:25.294 [2024-09-28 01:34:21.059175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:21:25.294 [2024-09-28 01:34:21.059209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 22.663 ms 00:21:25.294 [2024-09-28 01:34:21.059218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:25.294 [2024-09-28 01:34:21.081684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:25.294 [2024-09-28 01:34:21.081803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:21:25.294 [2024-09-28 01:34:21.081821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 22.428 ms 00:21:25.294 [2024-09-28 01:34:21.081829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:25.294 [2024-09-28 01:34:21.082400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:25.294 [2024-09-28 01:34:21.082417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:21:25.294 [2024-09-28 01:34:21.082427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.541 ms 00:21:25.294 [2024-09-28 01:34:21.082435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:25.295 [2024-09-28 01:34:21.148719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:25.295 [2024-09-28 01:34:21.148900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:21:25.295 [2024-09-28 01:34:21.148923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 66.249 ms 00:21:25.295 [2024-09-28 01:34:21.148934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:25.295 [2024-09-28 01:34:21.173050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:21:25.295 [2024-09-28 01:34:21.173084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:21:25.295 [2024-09-28 01:34:21.173097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.043 ms 00:21:25.295 [2024-09-28 01:34:21.173105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:25.295 [2024-09-28 01:34:21.196217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:25.295 [2024-09-28 01:34:21.196249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:21:25.295 [2024-09-28 01:34:21.196262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.075 ms 00:21:25.295 [2024-09-28 01:34:21.196269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:25.295 [2024-09-28 01:34:21.219472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:25.295 [2024-09-28 01:34:21.219597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:21:25.295 [2024-09-28 01:34:21.219616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.164 ms 00:21:25.295 [2024-09-28 01:34:21.219624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:25.295 [2024-09-28 01:34:21.219660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:25.295 [2024-09-28 01:34:21.219669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:21:25.295 [2024-09-28 01:34:21.219684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:21:25.295 [2024-09-28 01:34:21.219691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:25.295 [2024-09-28 01:34:21.219769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:25.295 [2024-09-28 01:34:21.219778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:21:25.295 [2024-09-28 01:34:21.219788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms 00:21:25.295 [2024-09-28 01:34:21.219795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:25.295 [2024-09-28 01:34:21.220625] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 2297.925 ms, result 0 00:21:25.554 { 00:21:25.554 "name": "ftl", 00:21:25.554 "uuid": "f9de8392-050f-423e-8d61-7e1879835680" 00:21:25.554 } 00:21:25.554 01:34:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:21:25.554 [2024-09-28 01:34:21.424053] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:25.554 01:34:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:21:25.812 01:34:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:21:26.070 [2024-09-28 01:34:21.832522] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:21:26.070 01:34:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:21:26.329 [2024-09-28 01:34:22.028880] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:21:26.329 01:34:22 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:21:26.587 Fill FTL, iteration 1 00:21:26.587 01:34:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:21:26.587 01:34:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:21:26.587 01:34:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:21:26.587 01:34:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:21:26.587 01:34:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:21:26.587 01:34:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:21:26.587 01:34:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:21:26.587 01:34:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:21:26.587 01:34:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:21:26.587 01:34:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:21:26.587 01:34:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:21:26.587 01:34:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:21:26.587 01:34:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:21:26.587 01:34:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:21:26.587 01:34:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:21:26.587 01:34:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:21:26.587 01:34:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=77613 00:21:26.587 01:34:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:21:26.587 01:34:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 77613 /var/tmp/spdk.tgt.sock 00:21:26.587 01:34:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 77613 ']' 00:21:26.587 01:34:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:21:26.587 01:34:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:21:26.587 01:34:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:26.587 01:34:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:21:26.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:21:26.587 01:34:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:26.587 01:34:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:26.587 [2024-09-28 01:34:22.459528] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
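Just before this initiator process comes up, the target side finished exporting the FTL bdev over NVMe/TCP; the RPCs traced above reduce to:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport --trtype TCP
  $rpc nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1   # allow any host, max 1 namespace
  $rpc nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl       # namespace backed by the FTL bdev
  $rpc nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 \
      -t TCP -f ipv4 -s 4420 -a 127.0.0.1                         # listen on 127.0.0.1:4420
  $rpc save_config                                                # persist the target configuration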
00:21:26.587 [2024-09-28 01:34:22.459770] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77613 ] 00:21:26.846 [2024-09-28 01:34:22.602643] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:27.104 [2024-09-28 01:34:22.781631] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:21:27.671 01:34:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:27.671 01:34:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:21:27.671 01:34:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:21:27.929 ftln1 00:21:27.929 01:34:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:21:27.929 01:34:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:21:27.929 01:34:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:21:27.929 01:34:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 77613 00:21:27.929 01:34:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 77613 ']' 00:21:27.929 01:34:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 77613 00:21:27.929 01:34:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:21:27.929 01:34:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:27.929 01:34:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77613 00:21:27.929 killing process with pid 77613 00:21:27.929 01:34:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:27.929 01:34:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:27.929 01:34:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77613' 00:21:27.929 01:34:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@969 -- # kill 77613 00:21:27.929 01:34:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 77613 00:21:29.303 01:34:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:21:29.303 01:34:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:21:29.563 [2024-09-28 01:34:25.253236] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
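The tcp_dd helper traced here runs in two phases: a short-lived initiator target generates a bdev config once, then spdk_dd (the process now starting) replays that config for the actual I/O. Roughly, with paths abbreviated to the repo root (a sketch of the traced flow, not the helper verbatim):

  ini_rpc='scripts/rpc.py -s /var/tmp/spdk.tgt.sock'
  build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock &   # pid 77613 in this run
  $ini_rpc bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 \
      -f ipv4 -n nqn.2018-09.io.spdk:cnode0                                  # -> ftln1
  { echo '{"subsystems": ['
    $ini_rpc save_subsystem_config -n bdev
    echo ']}'; } > test/ftl/config/ini.json                                  # capture the bdev subsystem only
  kill $spdk_ini_pid                                                         # config captured; initiator no longer needed

  # spdk_dd consumes the JSON config directly, so no long-lived initiator is kept around:
  build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock \
      --json=test/ftl/config/ini.json \
      --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0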
00:21:29.563 [2024-09-28 01:34:25.253355] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77656 ] 00:21:29.563 [2024-09-28 01:34:25.402864] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:29.821 [2024-09-28 01:34:25.548073] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:21:34.573  Copying: 261/1024 [MB] (261 MBps) Copying: 536/1024 [MB] (275 MBps) Copying: 809/1024 [MB] (273 MBps) Copying: 1024/1024 [MB] (average 269 MBps) 00:21:34.573 00:21:34.573 Calculate MD5 checksum, iteration 1 00:21:34.573 01:34:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:21:34.573 01:34:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:21:34.573 01:34:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:21:34.573 01:34:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:21:34.573 01:34:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:21:34.573 01:34:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:21:34.573 01:34:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:21:34.573 01:34:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:21:34.573 [2024-09-28 01:34:30.353270] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
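The copy-rate lines above cover exactly one iteration's worth of data; the arithmetic behind the seek/skip bookkeeping is:

  # size  = 1073741824 B (1 GiB written per iteration)
  # bs    = 1048576 B    (1 MiB per I/O)
  # count = size / bs = 1024 blocks, issued at qd=2
  # iteration 1: fill with --seek=0,    read back with --skip=0
  # iteration 2: fill with --seek=1024, read back with --skip=1024
  seek=$(( seek + count ))   # upgrade_shutdown.sh@41: seek=1024 after the first fill
  skip=$(( skip + count ))   # upgrade_shutdown.sh@45: skip=1024 after the first read-back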
00:21:34.573 [2024-09-28 01:34:30.353361] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77709 ] 00:21:34.573 [2024-09-28 01:34:30.498006] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:34.831 [2024-09-28 01:34:30.673519] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:21:37.339  Copying: 687/1024 [MB] (687 MBps) Copying: 1024/1024 [MB] (average 682 MBps) 00:21:37.339 00:21:37.339 01:34:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:21:37.339 01:34:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:21:39.238 01:34:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:21:39.238 01:34:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=1361753c6b6330eaa5102e090bdb69c8 00:21:39.238 01:34:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:21:39.238 01:34:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:21:39.238 01:34:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:21:39.238 Fill FTL, iteration 2 00:21:39.238 01:34:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:21:39.238 01:34:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:21:39.238 01:34:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:21:39.238 01:34:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:21:39.238 01:34:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:21:39.238 01:34:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:21:39.238 [2024-09-28 01:34:34.748281] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
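The per-iteration bookkeeping in upgrade_shutdown.sh reduces to advancing seek/skip by the window size and recording one md5 per window: the trace shows seek stepping 0 -> 1024 -> 2048 and 1361753c... recorded for the first window. A compressed sketch of the loop, not the script verbatim; tcp_dd is the wrapper actually traced above, testfile stands in for the test/ftl/file path:

iterations=2
testfile=/home/vagrant/spdk_repo/spdk/test/ftl/file
for ((i = 0; i < iterations; i++)); do
    seek=$((i * 1024))   # 1 MiB blocks, so each window is 1 GiB
    echo "Fill FTL, iteration $((i + 1))"
    tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek="$seek"
    echo "Calculate MD5 checksum, iteration $((i + 1))"
    tcp_dd --ib=ftln1 --of="$testfile" --bs=1048576 --count=1024 --qd=2 --skip="$seek"
    sums[i]=$(md5sum "$testfile" | cut -f1 -d' ')
done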
00:21:39.238 [2024-09-28 01:34:34.748404] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77761 ] 00:21:39.238 [2024-09-28 01:34:34.895690] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:39.238 [2024-09-28 01:34:35.038131] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:21:44.054  Copying: 268/1024 [MB] (268 MBps) Copying: 546/1024 [MB] (278 MBps) Copying: 828/1024 [MB] (282 MBps) Copying: 1024/1024 [MB] (average 276 MBps) 00:21:44.054 00:21:44.054 Calculate MD5 checksum, iteration 2 00:21:44.054 01:34:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:21:44.054 01:34:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:21:44.054 01:34:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:21:44.054 01:34:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:21:44.054 01:34:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:21:44.054 01:34:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:21:44.054 01:34:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:21:44.054 01:34:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:21:44.054 [2024-09-28 01:34:39.753978] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
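Once this second read-back is summed, the script sanity-checks that the fills actually reached the NV cache before arming the upgrade path: as the trace below shows, it re-reads the FTL properties and requires at least one cache chunk with non-zero utilization. The jq filter is copied from the upgrade_shutdown.sh@63 trace; the failure branch here is illustrative:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
used=$($rpc bdev_ftl_get_properties -b ftl |
    jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length')
if [[ $used -eq 0 ]]; then
    echo "no NV cache chunks in use" >&2
    exit 1
fi
# the trace below reports used=3: two CLOSED chunks at utilization 1.0 plus one OPEN chunk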
00:21:44.054 [2024-09-28 01:34:39.754094] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77819 ] 00:21:44.054 [2024-09-28 01:34:39.903012] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:44.312 [2024-09-28 01:34:40.048713] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:21:50.440  Copying: 670/1024 [MB] (670 MBps) Copying: 1024/1024 [MB] (average 661 MBps) 00:21:50.440 00:21:50.440 01:34:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:21:50.440 01:34:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:21:51.814 01:34:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:21:51.814 01:34:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=ba5bf52c52e14a8ba08a3df713ac5a25 00:21:51.814 01:34:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:21:51.814 01:34:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:21:51.814 01:34:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:21:51.814 [2024-09-28 01:34:47.724413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:51.814 [2024-09-28 01:34:47.724463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:21:51.814 [2024-09-28 01:34:47.724474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:21:51.814 [2024-09-28 01:34:47.724485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:51.814 [2024-09-28 01:34:47.724503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:51.814 [2024-09-28 01:34:47.724510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:21:51.814 [2024-09-28 01:34:47.724517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:21:51.814 [2024-09-28 01:34:47.724523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:51.814 [2024-09-28 01:34:47.724539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:51.814 [2024-09-28 01:34:47.724545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:21:51.814 [2024-09-28 01:34:47.724551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:21:51.814 [2024-09-28 01:34:47.724557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:51.814 [2024-09-28 01:34:47.724607] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.186 ms, result 0 00:21:51.814 true 00:21:51.814 01:34:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:21:52.073 { 00:21:52.073 "name": "ftl", 00:21:52.073 "properties": [ 00:21:52.073 { 00:21:52.073 "name": "superblock_version", 00:21:52.073 "value": 5, 00:21:52.073 "read-only": true 00:21:52.073 }, 00:21:52.073 { 00:21:52.073 "name": "base_device", 00:21:52.073 "bands": [ 00:21:52.073 { 00:21:52.073 "id": 0, 00:21:52.073 "state": "FREE", 00:21:52.073 "validity": 0.0 00:21:52.073 }, 00:21:52.073 { 00:21:52.073 "id": 1, 
00:21:52.073 "state": "FREE", 00:21:52.073 "validity": 0.0 00:21:52.073 }, 00:21:52.073 { 00:21:52.073 "id": 2, 00:21:52.073 "state": "FREE", 00:21:52.073 "validity": 0.0 00:21:52.073 }, 00:21:52.073 { 00:21:52.073 "id": 3, 00:21:52.073 "state": "FREE", 00:21:52.073 "validity": 0.0 00:21:52.073 }, 00:21:52.073 { 00:21:52.073 "id": 4, 00:21:52.073 "state": "FREE", 00:21:52.073 "validity": 0.0 00:21:52.073 }, 00:21:52.073 { 00:21:52.073 "id": 5, 00:21:52.073 "state": "FREE", 00:21:52.073 "validity": 0.0 00:21:52.073 }, 00:21:52.073 { 00:21:52.073 "id": 6, 00:21:52.073 "state": "FREE", 00:21:52.073 "validity": 0.0 00:21:52.073 }, 00:21:52.073 { 00:21:52.073 "id": 7, 00:21:52.073 "state": "FREE", 00:21:52.073 "validity": 0.0 00:21:52.073 }, 00:21:52.073 { 00:21:52.073 "id": 8, 00:21:52.073 "state": "FREE", 00:21:52.073 "validity": 0.0 00:21:52.073 }, 00:21:52.073 { 00:21:52.073 "id": 9, 00:21:52.073 "state": "FREE", 00:21:52.073 "validity": 0.0 00:21:52.073 }, 00:21:52.073 { 00:21:52.073 "id": 10, 00:21:52.073 "state": "FREE", 00:21:52.073 "validity": 0.0 00:21:52.073 }, 00:21:52.073 { 00:21:52.073 "id": 11, 00:21:52.073 "state": "FREE", 00:21:52.073 "validity": 0.0 00:21:52.073 }, 00:21:52.073 { 00:21:52.073 "id": 12, 00:21:52.073 "state": "FREE", 00:21:52.073 "validity": 0.0 00:21:52.073 }, 00:21:52.073 { 00:21:52.073 "id": 13, 00:21:52.073 "state": "FREE", 00:21:52.073 "validity": 0.0 00:21:52.073 }, 00:21:52.073 { 00:21:52.073 "id": 14, 00:21:52.073 "state": "FREE", 00:21:52.073 "validity": 0.0 00:21:52.073 }, 00:21:52.073 { 00:21:52.073 "id": 15, 00:21:52.073 "state": "FREE", 00:21:52.073 "validity": 0.0 00:21:52.073 }, 00:21:52.073 { 00:21:52.073 "id": 16, 00:21:52.073 "state": "FREE", 00:21:52.073 "validity": 0.0 00:21:52.073 }, 00:21:52.073 { 00:21:52.073 "id": 17, 00:21:52.073 "state": "FREE", 00:21:52.073 "validity": 0.0 00:21:52.073 } 00:21:52.073 ], 00:21:52.073 "read-only": true 00:21:52.073 }, 00:21:52.073 { 00:21:52.073 "name": "cache_device", 00:21:52.073 "type": "bdev", 00:21:52.073 "chunks": [ 00:21:52.073 { 00:21:52.073 "id": 0, 00:21:52.073 "state": "INACTIVE", 00:21:52.073 "utilization": 0.0 00:21:52.073 }, 00:21:52.073 { 00:21:52.073 "id": 1, 00:21:52.073 "state": "CLOSED", 00:21:52.073 "utilization": 1.0 00:21:52.073 }, 00:21:52.073 { 00:21:52.073 "id": 2, 00:21:52.073 "state": "CLOSED", 00:21:52.073 "utilization": 1.0 00:21:52.073 }, 00:21:52.073 { 00:21:52.073 "id": 3, 00:21:52.073 "state": "OPEN", 00:21:52.073 "utilization": 0.001953125 00:21:52.073 }, 00:21:52.073 { 00:21:52.073 "id": 4, 00:21:52.073 "state": "OPEN", 00:21:52.073 "utilization": 0.0 00:21:52.074 } 00:21:52.074 ], 00:21:52.074 "read-only": true 00:21:52.074 }, 00:21:52.074 { 00:21:52.074 "name": "verbose_mode", 00:21:52.074 "value": true, 00:21:52.074 "unit": "", 00:21:52.074 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:21:52.074 }, 00:21:52.074 { 00:21:52.074 "name": "prep_upgrade_on_shutdown", 00:21:52.074 "value": false, 00:21:52.074 "unit": "", 00:21:52.074 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:21:52.074 } 00:21:52.074 ] 00:21:52.074 } 00:21:52.074 01:34:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:21:52.333 [2024-09-28 01:34:48.132750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:52.333 [2024-09-28 01:34:48.132788] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:21:52.333 [2024-09-28 01:34:48.132798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:21:52.333 [2024-09-28 01:34:48.132803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:52.333 [2024-09-28 01:34:48.132820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:52.333 [2024-09-28 01:34:48.132834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:21:52.333 [2024-09-28 01:34:48.132840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:21:52.333 [2024-09-28 01:34:48.132845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:52.333 [2024-09-28 01:34:48.132860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:52.333 [2024-09-28 01:34:48.132866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:21:52.333 [2024-09-28 01:34:48.132872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:21:52.333 [2024-09-28 01:34:48.132877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:52.333 [2024-09-28 01:34:48.132922] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.163 ms, result 0 00:21:52.333 true 00:21:52.333 01:34:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:21:52.333 01:34:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:21:52.333 01:34:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:21:52.592 01:34:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:21:52.592 01:34:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:21:52.592 01:34:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:21:52.592 [2024-09-28 01:34:48.493089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:52.592 [2024-09-28 01:34:48.493288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:21:52.592 [2024-09-28 01:34:48.493334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:21:52.592 [2024-09-28 01:34:48.493352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:52.592 [2024-09-28 01:34:48.493387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:52.592 [2024-09-28 01:34:48.493403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:21:52.592 [2024-09-28 01:34:48.493418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:21:52.592 [2024-09-28 01:34:48.493432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:52.592 [2024-09-28 01:34:48.493455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:52.592 [2024-09-28 01:34:48.493503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:21:52.592 [2024-09-28 01:34:48.493521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:21:52.592 [2024-09-28 01:34:48.493535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:52.592 [2024-09-28 01:34:48.493594] 
mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.492 ms, result 0 00:21:52.592 true 00:21:52.592 01:34:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:21:52.851 { 00:21:52.851 "name": "ftl", 00:21:52.851 "properties": [ 00:21:52.851 { 00:21:52.851 "name": "superblock_version", 00:21:52.851 "value": 5, 00:21:52.851 "read-only": true 00:21:52.851 }, 00:21:52.851 { 00:21:52.851 "name": "base_device", 00:21:52.851 "bands": [ 00:21:52.851 { 00:21:52.851 "id": 0, 00:21:52.851 "state": "FREE", 00:21:52.851 "validity": 0.0 00:21:52.851 }, 00:21:52.851 { 00:21:52.851 "id": 1, 00:21:52.851 "state": "FREE", 00:21:52.851 "validity": 0.0 00:21:52.851 }, 00:21:52.851 { 00:21:52.851 "id": 2, 00:21:52.851 "state": "FREE", 00:21:52.851 "validity": 0.0 00:21:52.851 }, 00:21:52.851 { 00:21:52.851 "id": 3, 00:21:52.851 "state": "FREE", 00:21:52.851 "validity": 0.0 00:21:52.851 }, 00:21:52.851 { 00:21:52.851 "id": 4, 00:21:52.851 "state": "FREE", 00:21:52.851 "validity": 0.0 00:21:52.851 }, 00:21:52.851 { 00:21:52.851 "id": 5, 00:21:52.851 "state": "FREE", 00:21:52.851 "validity": 0.0 00:21:52.851 }, 00:21:52.851 { 00:21:52.851 "id": 6, 00:21:52.851 "state": "FREE", 00:21:52.851 "validity": 0.0 00:21:52.851 }, 00:21:52.851 { 00:21:52.851 "id": 7, 00:21:52.851 "state": "FREE", 00:21:52.851 "validity": 0.0 00:21:52.851 }, 00:21:52.851 { 00:21:52.851 "id": 8, 00:21:52.851 "state": "FREE", 00:21:52.851 "validity": 0.0 00:21:52.851 }, 00:21:52.851 { 00:21:52.851 "id": 9, 00:21:52.851 "state": "FREE", 00:21:52.851 "validity": 0.0 00:21:52.851 }, 00:21:52.851 { 00:21:52.851 "id": 10, 00:21:52.851 "state": "FREE", 00:21:52.851 "validity": 0.0 00:21:52.851 }, 00:21:52.851 { 00:21:52.851 "id": 11, 00:21:52.851 "state": "FREE", 00:21:52.851 "validity": 0.0 00:21:52.851 }, 00:21:52.851 { 00:21:52.851 "id": 12, 00:21:52.851 "state": "FREE", 00:21:52.851 "validity": 0.0 00:21:52.851 }, 00:21:52.851 { 00:21:52.851 "id": 13, 00:21:52.851 "state": "FREE", 00:21:52.851 "validity": 0.0 00:21:52.851 }, 00:21:52.851 { 00:21:52.851 "id": 14, 00:21:52.851 "state": "FREE", 00:21:52.851 "validity": 0.0 00:21:52.851 }, 00:21:52.851 { 00:21:52.851 "id": 15, 00:21:52.851 "state": "FREE", 00:21:52.851 "validity": 0.0 00:21:52.851 }, 00:21:52.851 { 00:21:52.851 "id": 16, 00:21:52.851 "state": "FREE", 00:21:52.851 "validity": 0.0 00:21:52.851 }, 00:21:52.851 { 00:21:52.851 "id": 17, 00:21:52.851 "state": "FREE", 00:21:52.851 "validity": 0.0 00:21:52.851 } 00:21:52.851 ], 00:21:52.851 "read-only": true 00:21:52.851 }, 00:21:52.851 { 00:21:52.851 "name": "cache_device", 00:21:52.851 "type": "bdev", 00:21:52.851 "chunks": [ 00:21:52.851 { 00:21:52.851 "id": 0, 00:21:52.851 "state": "INACTIVE", 00:21:52.851 "utilization": 0.0 00:21:52.851 }, 00:21:52.851 { 00:21:52.851 "id": 1, 00:21:52.851 "state": "CLOSED", 00:21:52.851 "utilization": 1.0 00:21:52.851 }, 00:21:52.851 { 00:21:52.851 "id": 2, 00:21:52.851 "state": "CLOSED", 00:21:52.851 "utilization": 1.0 00:21:52.851 }, 00:21:52.851 { 00:21:52.851 "id": 3, 00:21:52.851 "state": "OPEN", 00:21:52.851 "utilization": 0.001953125 00:21:52.851 }, 00:21:52.851 { 00:21:52.851 "id": 4, 00:21:52.851 "state": "OPEN", 00:21:52.851 "utilization": 0.0 00:21:52.851 } 00:21:52.851 ], 00:21:52.851 "read-only": true 00:21:52.851 }, 00:21:52.851 { 00:21:52.851 "name": "verbose_mode", 00:21:52.851 "value": true, 00:21:52.851 "unit": "", 00:21:52.851 
"desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:21:52.851 }, 00:21:52.851 { 00:21:52.851 "name": "prep_upgrade_on_shutdown", 00:21:52.851 "value": true, 00:21:52.851 "unit": "", 00:21:52.851 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:21:52.851 } 00:21:52.851 ] 00:21:52.851 } 00:21:52.851 01:34:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:21:52.852 01:34:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 77502 ]] 00:21:52.852 01:34:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 77502 00:21:52.852 01:34:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 77502 ']' 00:21:52.852 01:34:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 77502 00:21:52.852 01:34:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:21:52.852 01:34:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:52.852 01:34:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77502 00:21:52.852 killing process with pid 77502 00:21:52.852 01:34:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:52.852 01:34:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:52.852 01:34:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77502' 00:21:52.852 01:34:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@969 -- # kill 77502 00:21:52.852 01:34:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 77502 00:21:53.419 [2024-09-28 01:34:49.269985] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:21:53.419 [2024-09-28 01:34:49.282507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:53.419 [2024-09-28 01:34:49.282546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:21:53.419 [2024-09-28 01:34:49.282556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:21:53.419 [2024-09-28 01:34:49.282562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:21:53.419 [2024-09-28 01:34:49.282579] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:21:53.419 [2024-09-28 01:34:49.284648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:21:53.419 [2024-09-28 01:34:49.284677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:21:53.419 [2024-09-28 01:34:49.284685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.058 ms 00:21:53.419 [2024-09-28 01:34:49.284691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:01.531 [2024-09-28 01:34:56.990943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:01.531 [2024-09-28 01:34:56.990994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:22:01.531 [2024-09-28 01:34:56.991005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7706.207 ms 00:22:01.531 [2024-09-28 01:34:56.991011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:01.531 [2024-09-28 01:34:56.991918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:01.531 [2024-09-28 01:34:56.991935] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:22:01.531 [2024-09-28 01:34:56.991942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.896 ms 00:22:01.531 [2024-09-28 01:34:56.991949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:01.531 [2024-09-28 01:34:56.992873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:01.531 [2024-09-28 01:34:56.992893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:22:01.531 [2024-09-28 01:34:56.992900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.900 ms 00:22:01.531 [2024-09-28 01:34:56.992906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:01.531 [2024-09-28 01:34:57.000302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:01.531 [2024-09-28 01:34:57.000330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:22:01.531 [2024-09-28 01:34:57.000337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.362 ms 00:22:01.531 [2024-09-28 01:34:57.000343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:01.531 [2024-09-28 01:34:57.005317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:01.531 [2024-09-28 01:34:57.005345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:22:01.531 [2024-09-28 01:34:57.005353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.949 ms 00:22:01.531 [2024-09-28 01:34:57.005363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:01.531 [2024-09-28 01:34:57.005418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:01.531 [2024-09-28 01:34:57.005426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:22:01.531 [2024-09-28 01:34:57.005432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:22:01.531 [2024-09-28 01:34:57.005438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:01.531 [2024-09-28 01:34:57.012362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:01.531 [2024-09-28 01:34:57.012385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:22:01.531 [2024-09-28 01:34:57.012392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.913 ms 00:22:01.531 [2024-09-28 01:34:57.012398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:01.531 [2024-09-28 01:34:57.019426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:01.531 [2024-09-28 01:34:57.019540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:22:01.531 [2024-09-28 01:34:57.019553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.004 ms 00:22:01.531 [2024-09-28 01:34:57.019558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:01.531 [2024-09-28 01:34:57.026375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:01.531 [2024-09-28 01:34:57.026469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:22:01.531 [2024-09-28 01:34:57.026480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.792 ms 00:22:01.532 [2024-09-28 01:34:57.026486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:01.532 [2024-09-28 01:34:57.033627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:01.532 
[2024-09-28 01:34:57.033722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:22:01.532 [2024-09-28 01:34:57.033732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.079 ms 00:22:01.532 [2024-09-28 01:34:57.033737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:01.532 [2024-09-28 01:34:57.033759] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:22:01.532 [2024-09-28 01:34:57.033770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:22:01.532 [2024-09-28 01:34:57.033778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:22:01.532 [2024-09-28 01:34:57.033784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:22:01.532 [2024-09-28 01:34:57.033790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:01.532 [2024-09-28 01:34:57.033796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:01.532 [2024-09-28 01:34:57.033802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:01.532 [2024-09-28 01:34:57.033815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:01.532 [2024-09-28 01:34:57.033820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:01.532 [2024-09-28 01:34:57.033826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:01.532 [2024-09-28 01:34:57.033832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:01.532 [2024-09-28 01:34:57.033838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:01.532 [2024-09-28 01:34:57.033844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:01.532 [2024-09-28 01:34:57.033850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:01.532 [2024-09-28 01:34:57.033856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:01.532 [2024-09-28 01:34:57.033861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:01.532 [2024-09-28 01:34:57.033868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:01.532 [2024-09-28 01:34:57.033873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:01.532 [2024-09-28 01:34:57.033879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:01.532 [2024-09-28 01:34:57.033887] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:22:01.532 [2024-09-28 01:34:57.033893] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: f9de8392-050f-423e-8d61-7e1879835680 00:22:01.532 [2024-09-28 01:34:57.033898] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:22:01.532 [2024-09-28 01:34:57.033906] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 786752 00:22:01.532 [2024-09-28 01:34:57.033911] 
ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:22:01.532 [2024-09-28 01:34:57.033920] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:22:01.532 [2024-09-28 01:34:57.033925] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:22:01.532 [2024-09-28 01:34:57.033930] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:22:01.532 [2024-09-28 01:34:57.033936] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:22:01.532 [2024-09-28 01:34:57.033941] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:22:01.532 [2024-09-28 01:34:57.033945] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:22:01.532 [2024-09-28 01:34:57.033952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:01.532 [2024-09-28 01:34:57.033958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:22:01.532 [2024-09-28 01:34:57.033965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.193 ms 00:22:01.532 [2024-09-28 01:34:57.033971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:01.532 [2024-09-28 01:34:57.043623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:01.532 [2024-09-28 01:34:57.043648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:22:01.532 [2024-09-28 01:34:57.043655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.631 ms 00:22:01.532 [2024-09-28 01:34:57.043661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:01.532 [2024-09-28 01:34:57.043925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:01.532 [2024-09-28 01:34:57.043936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:22:01.532 [2024-09-28 01:34:57.043942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.250 ms 00:22:01.532 [2024-09-28 01:34:57.043948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:01.532 [2024-09-28 01:34:57.073001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:22:01.532 [2024-09-28 01:34:57.073031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:22:01.532 [2024-09-28 01:34:57.073039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:22:01.532 [2024-09-28 01:34:57.073045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:01.532 [2024-09-28 01:34:57.073069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:22:01.532 [2024-09-28 01:34:57.073075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:22:01.532 [2024-09-28 01:34:57.073081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:22:01.532 [2024-09-28 01:34:57.073087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:01.532 [2024-09-28 01:34:57.073148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:22:01.532 [2024-09-28 01:34:57.073156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:22:01.532 [2024-09-28 01:34:57.073162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:22:01.532 [2024-09-28 01:34:57.073168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:01.532 [2024-09-28 01:34:57.073181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:22:01.532 [2024-09-28 
01:34:57.073187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:22:01.532 [2024-09-28 01:34:57.073206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:22:01.532 [2024-09-28 01:34:57.073212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:01.532 [2024-09-28 01:34:57.131521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:22:01.532 [2024-09-28 01:34:57.131688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:22:01.532 [2024-09-28 01:34:57.131702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:22:01.532 [2024-09-28 01:34:57.131708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:01.532 [2024-09-28 01:34:57.179556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:22:01.532 [2024-09-28 01:34:57.179592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:22:01.532 [2024-09-28 01:34:57.179600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:22:01.532 [2024-09-28 01:34:57.179607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:01.532 [2024-09-28 01:34:57.179675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:22:01.532 [2024-09-28 01:34:57.179686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:22:01.532 [2024-09-28 01:34:57.179693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:22:01.532 [2024-09-28 01:34:57.179699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:01.532 [2024-09-28 01:34:57.179730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:22:01.532 [2024-09-28 01:34:57.179738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:22:01.532 [2024-09-28 01:34:57.179743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:22:01.532 [2024-09-28 01:34:57.179750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:01.532 [2024-09-28 01:34:57.179819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:22:01.532 [2024-09-28 01:34:57.179828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:22:01.532 [2024-09-28 01:34:57.179834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:22:01.532 [2024-09-28 01:34:57.179840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:01.532 [2024-09-28 01:34:57.179863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:22:01.532 [2024-09-28 01:34:57.179869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:22:01.532 [2024-09-28 01:34:57.179875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:22:01.532 [2024-09-28 01:34:57.179881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:01.532 [2024-09-28 01:34:57.179910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:22:01.532 [2024-09-28 01:34:57.179916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:22:01.532 [2024-09-28 01:34:57.179925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:22:01.532 [2024-09-28 01:34:57.179930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:01.532 [2024-09-28 01:34:57.179965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl] Rollback 00:22:01.532 [2024-09-28 01:34:57.179973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:22:01.532 [2024-09-28 01:34:57.179978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:22:01.532 [2024-09-28 01:34:57.179984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:01.532 [2024-09-28 01:34:57.180075] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 7897.525 ms, result 0 00:22:05.717 01:35:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:22:05.717 01:35:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:22:05.717 01:35:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:22:05.717 01:35:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:22:05.717 01:35:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:22:05.717 01:35:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=78032 00:22:05.717 01:35:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:22:05.717 01:35:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:05.717 01:35:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 78032 00:22:05.717 01:35:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 78032 ']' 00:22:05.717 01:35:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:05.717 01:35:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:05.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:05.717 01:35:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:05.717 01:35:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:05.717 01:35:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:05.717 [2024-09-28 01:35:01.165280] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
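With the 'FTL shutdown' management process retired (7897.525 ms, dominated by the 7.7 s core-poller drain above), the second half of the test restarts the target from the saved tgt.json and lets FTL come back up from the persisted state. A sketch of the restart step, matching the spdk_tgt invocation and the waitforlisten call in the trace (waitforlisten is the autotest_common.sh helper seen there):

spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
$spdk_tgt '--cpumask=[0]' \
    --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json &
spdk_tgt_pid=$!   # trace: 78032
# block until the new target answers RPCs on the default /var/tmp/spdk.sock
waitforlisten "$spdk_tgt_pid"

On this boot, with prep_upgrade_on_shutdown having been armed, the startup sequence below runs a layout pass (FTL layout setup mode 0, then a 1.712 ms 'Layout upgrade' step), scrubs the five NV cache chunks, and replays the Restore* steps against the metadata persisted at shutdown.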
00:22:05.717 [2024-09-28 01:35:01.165539] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78032 ] 00:22:05.717 [2024-09-28 01:35:01.307704] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:05.717 [2024-09-28 01:35:01.452423] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:22:06.285 [2024-09-28 01:35:02.030878] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:22:06.285 [2024-09-28 01:35:02.030929] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:22:06.285 [2024-09-28 01:35:02.174142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:06.285 [2024-09-28 01:35:02.174328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:22:06.285 [2024-09-28 01:35:02.174346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:22:06.285 [2024-09-28 01:35:02.174356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:06.285 [2024-09-28 01:35:02.174409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:06.285 [2024-09-28 01:35:02.174420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:22:06.285 [2024-09-28 01:35:02.174428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:22:06.285 [2024-09-28 01:35:02.174435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:06.285 [2024-09-28 01:35:02.174461] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:22:06.285 [2024-09-28 01:35:02.175110] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:22:06.285 [2024-09-28 01:35:02.175126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:06.285 [2024-09-28 01:35:02.175133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:22:06.285 [2024-09-28 01:35:02.175141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.675 ms 00:22:06.285 [2024-09-28 01:35:02.175151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:06.285 [2024-09-28 01:35:02.176205] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:22:06.285 [2024-09-28 01:35:02.188125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:06.285 [2024-09-28 01:35:02.188154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:22:06.285 [2024-09-28 01:35:02.188165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.933 ms 00:22:06.285 [2024-09-28 01:35:02.188173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:06.285 [2024-09-28 01:35:02.188239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:06.285 [2024-09-28 01:35:02.188249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:22:06.285 [2024-09-28 01:35:02.188257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:22:06.285 [2024-09-28 01:35:02.188265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:06.285 [2024-09-28 01:35:02.192795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:06.285 [2024-09-28 
01:35:02.192829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:22:06.285 [2024-09-28 01:35:02.192839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.468 ms 00:22:06.285 [2024-09-28 01:35:02.192846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:06.285 [2024-09-28 01:35:02.192900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:06.285 [2024-09-28 01:35:02.192909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:22:06.285 [2024-09-28 01:35:02.192920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms 00:22:06.285 [2024-09-28 01:35:02.192927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:06.285 [2024-09-28 01:35:02.192972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:06.285 [2024-09-28 01:35:02.192981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:22:06.285 [2024-09-28 01:35:02.192989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:22:06.285 [2024-09-28 01:35:02.192996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:06.285 [2024-09-28 01:35:02.193017] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:22:06.285 [2024-09-28 01:35:02.196308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:06.285 [2024-09-28 01:35:02.196332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:22:06.285 [2024-09-28 01:35:02.196341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.295 ms 00:22:06.285 [2024-09-28 01:35:02.196348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:06.285 [2024-09-28 01:35:02.196375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:06.285 [2024-09-28 01:35:02.196383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:22:06.285 [2024-09-28 01:35:02.196394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:22:06.285 [2024-09-28 01:35:02.196401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:06.285 [2024-09-28 01:35:02.196422] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:22:06.285 [2024-09-28 01:35:02.196438] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:22:06.285 [2024-09-28 01:35:02.196472] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:22:06.285 [2024-09-28 01:35:02.196486] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:22:06.285 [2024-09-28 01:35:02.196587] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:22:06.285 [2024-09-28 01:35:02.196600] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:22:06.285 [2024-09-28 01:35:02.196610] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:22:06.285 [2024-09-28 01:35:02.196620] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:22:06.285 [2024-09-28 01:35:02.196628] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:22:06.285 [2024-09-28 01:35:02.196636] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:22:06.285 [2024-09-28 01:35:02.196643] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:22:06.285 [2024-09-28 01:35:02.196650] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:22:06.285 [2024-09-28 01:35:02.196661] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:22:06.285 [2024-09-28 01:35:02.196669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:06.285 [2024-09-28 01:35:02.196676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:22:06.285 [2024-09-28 01:35:02.196684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.248 ms 00:22:06.285 [2024-09-28 01:35:02.196693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:06.285 [2024-09-28 01:35:02.196777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:06.285 [2024-09-28 01:35:02.196785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:22:06.285 [2024-09-28 01:35:02.196792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.069 ms 00:22:06.285 [2024-09-28 01:35:02.196799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:06.285 [2024-09-28 01:35:02.196905] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:22:06.285 [2024-09-28 01:35:02.196915] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:22:06.285 [2024-09-28 01:35:02.196922] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:22:06.285 [2024-09-28 01:35:02.196930] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:06.285 [2024-09-28 01:35:02.196940] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:22:06.285 [2024-09-28 01:35:02.196947] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:22:06.285 [2024-09-28 01:35:02.196953] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:22:06.285 [2024-09-28 01:35:02.196960] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:22:06.285 [2024-09-28 01:35:02.196967] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:22:06.285 [2024-09-28 01:35:02.196974] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:06.285 [2024-09-28 01:35:02.196981] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:22:06.286 [2024-09-28 01:35:02.196987] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:22:06.286 [2024-09-28 01:35:02.196994] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:06.286 [2024-09-28 01:35:02.197001] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:22:06.286 [2024-09-28 01:35:02.197010] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:22:06.286 [2024-09-28 01:35:02.197017] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:06.286 [2024-09-28 01:35:02.197023] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:22:06.286 [2024-09-28 01:35:02.197029] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:22:06.286 [2024-09-28 01:35:02.197036] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:06.286 [2024-09-28 01:35:02.197042] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:22:06.286 [2024-09-28 01:35:02.197049] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:22:06.286 [2024-09-28 01:35:02.197055] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:22:06.286 [2024-09-28 01:35:02.197062] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:22:06.286 [2024-09-28 01:35:02.197068] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:22:06.286 [2024-09-28 01:35:02.197127] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:22:06.286 [2024-09-28 01:35:02.197134] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:22:06.286 [2024-09-28 01:35:02.197140] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:22:06.286 [2024-09-28 01:35:02.197146] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:22:06.286 [2024-09-28 01:35:02.197153] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:22:06.286 [2024-09-28 01:35:02.197159] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:22:06.286 [2024-09-28 01:35:02.197165] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:22:06.286 [2024-09-28 01:35:02.197172] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:22:06.286 [2024-09-28 01:35:02.197178] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:22:06.286 [2024-09-28 01:35:02.197184] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:06.286 [2024-09-28 01:35:02.197339] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:22:06.286 [2024-09-28 01:35:02.197372] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:22:06.286 [2024-09-28 01:35:02.197392] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:06.286 [2024-09-28 01:35:02.197411] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:22:06.286 [2024-09-28 01:35:02.197429] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:22:06.286 [2024-09-28 01:35:02.197496] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:06.286 [2024-09-28 01:35:02.197519] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:22:06.286 [2024-09-28 01:35:02.197538] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:22:06.286 [2024-09-28 01:35:02.197556] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:06.286 [2024-09-28 01:35:02.197573] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:22:06.286 [2024-09-28 01:35:02.197621] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:22:06.286 [2024-09-28 01:35:02.197643] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:22:06.286 [2024-09-28 01:35:02.197695] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:06.286 [2024-09-28 01:35:02.197717] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:22:06.286 [2024-09-28 01:35:02.197736] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:22:06.286 [2024-09-28 01:35:02.197781] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:22:06.286 [2024-09-28 01:35:02.197802] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:22:06.286 [2024-09-28 01:35:02.197850] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:22:06.286 [2024-09-28 01:35:02.197872] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:22:06.286 [2024-09-28 01:35:02.197917] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:22:06.286 [2024-09-28 01:35:02.197952] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:06.286 [2024-09-28 01:35:02.197981] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:22:06.286 [2024-09-28 01:35:02.198120] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:22:06.286 [2024-09-28 01:35:02.198149] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:22:06.286 [2024-09-28 01:35:02.198177] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:22:06.286 [2024-09-28 01:35:02.198217] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:22:06.286 [2024-09-28 01:35:02.198286] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:22:06.286 [2024-09-28 01:35:02.198316] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:22:06.286 [2024-09-28 01:35:02.198344] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:22:06.286 [2024-09-28 01:35:02.198372] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:22:06.286 [2024-09-28 01:35:02.198423] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:22:06.286 [2024-09-28 01:35:02.198452] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:22:06.286 [2024-09-28 01:35:02.198480] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:22:06.286 [2024-09-28 01:35:02.198489] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:22:06.286 [2024-09-28 01:35:02.198497] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:22:06.286 [2024-09-28 01:35:02.198504] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:22:06.286 [2024-09-28 01:35:02.198512] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:06.286 [2024-09-28 01:35:02.198520] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:06.286 [2024-09-28 01:35:02.198528] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:22:06.286 [2024-09-28 01:35:02.198534] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:22:06.286 [2024-09-28 01:35:02.198542] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:22:06.286 [2024-09-28 01:35:02.198550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:06.286 [2024-09-28 01:35:02.198557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:22:06.286 [2024-09-28 01:35:02.198570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.712 ms 00:22:06.286 [2024-09-28 01:35:02.198577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:06.286 [2024-09-28 01:35:02.198643] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:22:06.286 [2024-09-28 01:35:02.198655] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:22:08.185 [2024-09-28 01:35:04.103536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:08.185 [2024-09-28 01:35:04.103736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:22:08.185 [2024-09-28 01:35:04.103758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1904.884 ms 00:22:08.185 [2024-09-28 01:35:04.103775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:08.443 [2024-09-28 01:35:04.128372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:08.443 [2024-09-28 01:35:04.128416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:22:08.443 [2024-09-28 01:35:04.128429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.395 ms 00:22:08.443 [2024-09-28 01:35:04.128436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:08.443 [2024-09-28 01:35:04.128514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:08.443 [2024-09-28 01:35:04.128524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:22:08.443 [2024-09-28 01:35:04.128533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:22:08.443 [2024-09-28 01:35:04.128540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:08.443 [2024-09-28 01:35:04.178411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:08.443 [2024-09-28 01:35:04.178450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:22:08.443 [2024-09-28 01:35:04.178463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 49.834 ms 00:22:08.443 [2024-09-28 01:35:04.178471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:08.443 [2024-09-28 01:35:04.178507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:08.443 [2024-09-28 01:35:04.178516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:22:08.443 [2024-09-28 01:35:04.178525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:22:08.443 [2024-09-28 01:35:04.178532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:08.443 [2024-09-28 01:35:04.178885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:08.443 [2024-09-28 01:35:04.178901] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:22:08.443 [2024-09-28 01:35:04.178910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.284 ms 00:22:08.443 [2024-09-28 01:35:04.178917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:08.443 [2024-09-28 01:35:04.178954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:08.443 [2024-09-28 01:35:04.178963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:22:08.443 [2024-09-28 01:35:04.178970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:22:08.443 [2024-09-28 01:35:04.178978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:08.443 [2024-09-28 01:35:04.192085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:08.443 [2024-09-28 01:35:04.192117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:22:08.443 [2024-09-28 01:35:04.192126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.086 ms 00:22:08.443 [2024-09-28 01:35:04.192133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:08.443 [2024-09-28 01:35:04.204127] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:22:08.443 [2024-09-28 01:35:04.204161] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:22:08.443 [2024-09-28 01:35:04.204175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:08.443 [2024-09-28 01:35:04.204183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:22:08.443 [2024-09-28 01:35:04.204204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.923 ms 00:22:08.443 [2024-09-28 01:35:04.204212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:08.443 [2024-09-28 01:35:04.217880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:08.443 [2024-09-28 01:35:04.218006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:22:08.443 [2024-09-28 01:35:04.218022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.630 ms 00:22:08.443 [2024-09-28 01:35:04.218029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:08.443 [2024-09-28 01:35:04.229077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:08.443 [2024-09-28 01:35:04.229190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:22:08.443 [2024-09-28 01:35:04.229213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.008 ms 00:22:08.443 [2024-09-28 01:35:04.229220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:08.443 [2024-09-28 01:35:04.240235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:08.443 [2024-09-28 01:35:04.240263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:22:08.443 [2024-09-28 01:35:04.240272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.985 ms 00:22:08.443 [2024-09-28 01:35:04.240279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:08.443 [2024-09-28 01:35:04.240885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:08.443 [2024-09-28 01:35:04.240910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:22:08.443 [2024-09-28 
01:35:04.240919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.519 ms 00:22:08.443 [2024-09-28 01:35:04.240927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:08.443 [2024-09-28 01:35:04.294326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:08.443 [2024-09-28 01:35:04.294371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:22:08.443 [2024-09-28 01:35:04.294383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 53.380 ms 00:22:08.443 [2024-09-28 01:35:04.294391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:08.443 [2024-09-28 01:35:04.304665] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:22:08.443 [2024-09-28 01:35:04.305372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:08.443 [2024-09-28 01:35:04.305399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:22:08.443 [2024-09-28 01:35:04.305413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.937 ms 00:22:08.443 [2024-09-28 01:35:04.305420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:08.443 [2024-09-28 01:35:04.305503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:08.443 [2024-09-28 01:35:04.305514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:22:08.443 [2024-09-28 01:35:04.305523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:22:08.444 [2024-09-28 01:35:04.305530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:08.444 [2024-09-28 01:35:04.305583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:08.444 [2024-09-28 01:35:04.305593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:22:08.444 [2024-09-28 01:35:04.305602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:22:08.444 [2024-09-28 01:35:04.305612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:08.444 [2024-09-28 01:35:04.305631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:08.444 [2024-09-28 01:35:04.305639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:22:08.444 [2024-09-28 01:35:04.305647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:22:08.444 [2024-09-28 01:35:04.305654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:08.444 [2024-09-28 01:35:04.305684] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:22:08.444 [2024-09-28 01:35:04.305693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:08.444 [2024-09-28 01:35:04.305700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:22:08.444 [2024-09-28 01:35:04.305708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:22:08.444 [2024-09-28 01:35:04.305716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:08.444 [2024-09-28 01:35:04.328110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:08.444 [2024-09-28 01:35:04.328254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:22:08.444 [2024-09-28 01:35:04.328271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 22.369 ms 00:22:08.444 [2024-09-28 01:35:04.328281] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:08.444 [2024-09-28 01:35:04.328407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:08.444 [2024-09-28 01:35:04.328424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:22:08.444 [2024-09-28 01:35:04.328433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:22:08.444 [2024-09-28 01:35:04.328443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:08.444 [2024-09-28 01:35:04.329713] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 2155.167 ms, result 0 00:22:08.444 [2024-09-28 01:35:04.344626] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:08.444 [2024-09-28 01:35:04.360605] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:22:08.444 [2024-09-28 01:35:04.368714] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:22:08.703 01:35:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:08.703 01:35:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:22:08.703 01:35:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:22:08.703 01:35:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:22:08.703 01:35:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:22:08.703 [2024-09-28 01:35:04.592847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:08.703 [2024-09-28 01:35:04.592892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:22:08.703 [2024-09-28 01:35:04.592905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:22:08.703 [2024-09-28 01:35:04.592913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:08.703 [2024-09-28 01:35:04.592936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:08.703 [2024-09-28 01:35:04.592945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:22:08.703 [2024-09-28 01:35:04.592952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:22:08.703 [2024-09-28 01:35:04.592960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:08.703 [2024-09-28 01:35:04.592979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:08.703 [2024-09-28 01:35:04.592990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:22:08.703 [2024-09-28 01:35:04.592998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:22:08.703 [2024-09-28 01:35:04.593005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:08.703 [2024-09-28 01:35:04.593061] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.214 ms, result 0 00:22:08.703 true 00:22:08.703 01:35:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl
{
  "name": "ftl",
  "properties": [
    {
      "name": "superblock_version",
      "value": 5,
      "read-only": true
    },
    {
      "name": "base_device",
      "bands": [
        { "id": 0, "state": "CLOSED", "validity": 1.0 },
        { "id": 1, "state": "CLOSED", "validity": 1.0 },
        { "id": 2, "state": "CLOSED", "validity": 0.007843137254901933 },
        { "id": 3, "state": "FREE", "validity": 0.0 },
        { "id": 4, "state": "FREE", "validity": 0.0 },
        { "id": 5, "state": "FREE", "validity": 0.0 },
        { "id": 6, "state": "FREE", "validity": 0.0 },
        { "id": 7, "state": "FREE", "validity": 0.0 },
        { "id": 8, "state": "FREE", "validity": 0.0 },
        { "id": 9, "state": "FREE", "validity": 0.0 },
        { "id": 10, "state": "FREE", "validity": 0.0 },
        { "id": 11, "state": "FREE", "validity": 0.0 },
        { "id": 12, "state": "FREE", "validity": 0.0 },
        { "id": 13, "state": "FREE", "validity": 0.0 },
        { "id": 14, "state": "FREE", "validity": 0.0 },
        { "id": 15, "state": "FREE", "validity": 0.0 },
        { "id": 16, "state": "FREE", "validity": 0.0 },
        { "id": 17, "state": "FREE", "validity": 0.0 }
      ],
      "read-only": true
    },
    {
      "name": "cache_device",
      "type": "bdev",
      "chunks": [
        { "id": 0, "state": "INACTIVE", "utilization": 0.0 },
        { "id": 1, "state": "OPEN", "utilization": 0.0 },
        { "id": 2, "state": "OPEN", "utilization": 0.0 },
        { "id": 3, "state": "FREE", "utilization": 0.0 },
        { "id": 4, "state": "FREE", "utilization": 0.0 }
      ],
      "read-only": true
    },
    {
      "name": "verbose_mode",
      "value": true,
      "unit": "",
      "desc": "In verbose mode, user is able to get access to additional advanced FTL properties"
    },
    {
      "name": "prep_upgrade_on_shutdown",
      "value": false,
      "unit": "",
      "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version"
    }
  ]
}
00:22:08.962 01:35:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:22:08.962 01:35:04
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:22:08.962 01:35:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:22:09.221 01:35:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:22:09.221 01:35:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:22:09.221 01:35:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:22:09.221 01:35:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:22:09.221 01:35:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:22:09.480 Validate MD5 checksum, iteration 1 00:22:09.480 01:35:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:22:09.480 01:35:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:22:09.480 01:35:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:22:09.480 01:35:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:22:09.480 01:35:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:22:09.480 01:35:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:22:09.480 01:35:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:22:09.480 01:35:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:22:09.480 01:35:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:22:09.480 01:35:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:22:09.480 01:35:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:22:09.480 01:35:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:22:09.480 01:35:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:22:09.480 [2024-09-28 01:35:05.284842] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
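The two jq filters in the trace above are the harness's idle check before it exercises the dirty-shutdown path: zero NV cache chunks with non-zero utilization and zero OPENED bands mean no data is in flight. A minimal offline sketch of the same check, assuming the properties dump above has been saved to props.json (a hypothetical filename; both filters are copied verbatim from the trace):

    # Sketch: re-run the idle check against a saved bdev_ftl_get_properties dump.
    # props.json is a hypothetical local copy of the JSON printed above.
    props=props.json
    # Count NV cache chunks that already hold data.
    used=$(jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' "$props")
    # Count bands still in OPENED state.
    opened=$(jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' "$props")
    # The test proceeds only when both counts are zero (used=0, opened=0 above).
    (( used == 0 && opened == 0 )) && echo "idle" || echo "busy: used=$used opened=$opened"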
00:22:09.480 [2024-09-28 01:35:05.285116] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78093 ] 00:22:09.738 [2024-09-28 01:35:05.434142] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:09.738 [2024-09-28 01:35:05.610888] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:22:12.911  Copying: 694/1024 [MB] (694 MBps) Copying: 1024/1024 [MB] (average 693 MBps) 00:22:12.911 00:22:12.911 01:35:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:22:12.911 01:35:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:22:14.813 01:35:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:22:14.813 Validate MD5 checksum, iteration 2 00:22:14.813 01:35:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=1361753c6b6330eaa5102e090bdb69c8 00:22:14.813 01:35:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 1361753c6b6330eaa5102e090bdb69c8 != \1\3\6\1\7\5\3\c\6\b\6\3\3\0\e\a\a\5\1\0\2\e\0\9\0\b\d\b\6\9\c\8 ]] 00:22:14.813 01:35:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:22:14.813 01:35:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:22:14.813 01:35:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:22:14.813 01:35:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:22:14.813 01:35:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:22:14.813 01:35:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:22:14.813 01:35:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:22:14.813 01:35:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:22:14.813 01:35:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:22:14.813 [2024-09-28 01:35:10.314227] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
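Each "Validate MD5 checksum" iteration above reads the next 1024 MiB window from the ftln1 initiator bdev (note --skip stepping 0, 1024, 2048 across iterations) and compares the md5 of the readback against the sum recorded when the test pattern was written, before this excerpt. A condensed sketch of that loop; tcp_dd comes from test/ftl/common.sh, and iterations and ref_md5[] are assumed stand-ins for bookkeeping the harness sets up earlier:

    # Condensed sketch of the validate loop as traced above (paths as in the log).
    # Requires test/ftl/common.sh for tcp_dd; iterations and ref_md5[] are assumptions.
    file=/home/vagrant/spdk_repo/spdk/test/ftl/file
    iterations=${iterations:-2}
    skip=0
    for ((i = 0; i < iterations; i++)); do
        echo "Validate MD5 checksum, iteration $((i + 1))"
        # 1024 x 1 MiB reads over NVMe/TCP, offset past the blocks already checked.
        tcp_dd --ib=ftln1 --of="$file" --bs=1048576 --count=1024 --qd=2 --skip=$skip
        skip=$((skip + 1024))
        sum=$(md5sum "$file" | cut -f1 -d' ')
        [[ $sum == "${ref_md5[i]}" ]] || exit 1
    done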
00:22:14.813 [2024-09-28 01:35:10.314342] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78149 ] 00:22:14.813 [2024-09-28 01:35:10.454935] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:14.813 [2024-09-28 01:35:10.599978] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:22:17.383  Copying: 790/1024 [MB] (790 MBps) Copying: 1024/1024 [MB] (average 768 MBps) 00:22:17.383 00:22:17.383 01:35:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:22:17.383 01:35:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:22:19.913 01:35:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:22:19.913 01:35:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=ba5bf52c52e14a8ba08a3df713ac5a25 00:22:19.913 01:35:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ ba5bf52c52e14a8ba08a3df713ac5a25 != \b\a\5\b\f\5\2\c\5\2\e\1\4\a\8\b\a\0\8\a\3\d\f\7\1\3\a\c\5\a\2\5 ]] 00:22:19.913 01:35:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:22:19.913 01:35:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:22:19.913 01:35:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:22:19.913 01:35:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 78032 ]] 00:22:19.913 01:35:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 78032 00:22:19.913 01:35:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:22:19.913 01:35:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:22:19.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:19.913 01:35:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:22:19.913 01:35:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:22:19.913 01:35:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:22:19.913 01:35:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=78209 00:22:19.913 01:35:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:22:19.913 01:35:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 78209 00:22:19.913 01:35:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 78209 ']' 00:22:19.913 01:35:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:19.913 01:35:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:19.913 01:35:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
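The kill -9 above is the point of the test: the target dies without running the FTL shutdown path, so the superblock stays dirty and the restart that follows has to recover chunk state and P2L checkpoints instead of loading clean metadata. Roughly, the shutdown/restart pair behaves as sketched below; the function bodies are paraphrased from the xtrace, not quoted from ftl/common.sh:

    # Paraphrase of the dirty-shutdown step visible in the xtrace above.
    tcp_target_shutdown_dirty() {
        [[ -n $spdk_tgt_pid ]] || return 0
        kill -9 $spdk_tgt_pid         # SIGKILL: no JSON-RPC shutdown, FTL superblock stays dirty
        unset spdk_tgt_pid
    }

    tcp_target_setup() {
        # Restart from the same saved target config so FTL reattaches to the
        # cache and base bdevs and takes the recovery path on load.
        "$spdk_tgt_bin" "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" &
        spdk_tgt_pid=$!
        waitforlisten $spdk_tgt_pid   # blocks until /var/tmp/spdk.sock is up
    }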
00:22:19.913 01:35:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:19.913 01:35:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:19.913 01:35:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:19.913 [2024-09-28 01:35:15.388319] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:22:19.913 [2024-09-28 01:35:15.388913] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78209 ] 00:22:19.913 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 830: 78032 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:22:19.913 [2024-09-28 01:35:15.537887] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:19.913 [2024-09-28 01:35:15.682317] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:22:20.478 [2024-09-28 01:35:16.259544] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:22:20.478 [2024-09-28 01:35:16.259597] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:22:20.478 [2024-09-28 01:35:16.402765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:20.478 [2024-09-28 01:35:16.402805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:22:20.478 [2024-09-28 01:35:16.402816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:22:20.478 [2024-09-28 01:35:16.402823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:20.478 [2024-09-28 01:35:16.402862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:20.478 [2024-09-28 01:35:16.402870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:22:20.478 [2024-09-28 01:35:16.402877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:22:20.478 [2024-09-28 01:35:16.402883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:20.478 [2024-09-28 01:35:16.402902] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:22:20.478 [2024-09-28 01:35:16.403519] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:22:20.478 [2024-09-28 01:35:16.403533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:20.478 [2024-09-28 01:35:16.403540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:22:20.478 [2024-09-28 01:35:16.403547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.638 ms 00:22:20.478 [2024-09-28 01:35:16.403555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:20.478 [2024-09-28 01:35:16.403788] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:22:20.738 [2024-09-28 01:35:16.416183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:20.738 [2024-09-28 01:35:16.416220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:22:20.738 [2024-09-28 01:35:16.416235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.395 ms 
00:22:20.738 [2024-09-28 01:35:16.416241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:20.738 [2024-09-28 01:35:16.423138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:20.738 [2024-09-28 01:35:16.423164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:22:20.738 [2024-09-28 01:35:16.423172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:22:20.738 [2024-09-28 01:35:16.423178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:20.738 [2024-09-28 01:35:16.423440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:20.738 [2024-09-28 01:35:16.423450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:22:20.738 [2024-09-28 01:35:16.423458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.188 ms 00:22:20.738 [2024-09-28 01:35:16.423463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:20.738 [2024-09-28 01:35:16.423501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:20.738 [2024-09-28 01:35:16.423508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:22:20.738 [2024-09-28 01:35:16.423515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.026 ms 00:22:20.738 [2024-09-28 01:35:16.423521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:20.738 [2024-09-28 01:35:16.423543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:20.738 [2024-09-28 01:35:16.423550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:22:20.738 [2024-09-28 01:35:16.423558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:22:20.738 [2024-09-28 01:35:16.423564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:20.738 [2024-09-28 01:35:16.423580] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:22:20.738 [2024-09-28 01:35:16.425855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:20.738 [2024-09-28 01:35:16.425877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:22:20.738 [2024-09-28 01:35:16.425885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.279 ms 00:22:20.738 [2024-09-28 01:35:16.425891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:20.738 [2024-09-28 01:35:16.425914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:20.738 [2024-09-28 01:35:16.425921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:22:20.738 [2024-09-28 01:35:16.425927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:22:20.738 [2024-09-28 01:35:16.425933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:20.738 [2024-09-28 01:35:16.425949] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:22:20.738 [2024-09-28 01:35:16.425962] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:22:20.738 [2024-09-28 01:35:16.425991] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:22:20.738 [2024-09-28 01:35:16.426003] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:22:20.738 [2024-09-28 
01:35:16.426082] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:22:20.738 [2024-09-28 01:35:16.426091] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:22:20.738 [2024-09-28 01:35:16.426099] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:22:20.738 [2024-09-28 01:35:16.426107] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:22:20.738 [2024-09-28 01:35:16.426114] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:22:20.738 [2024-09-28 01:35:16.426120] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:22:20.738 [2024-09-28 01:35:16.426128] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:22:20.738 [2024-09-28 01:35:16.426134] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:22:20.738 [2024-09-28 01:35:16.426139] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:22:20.738 [2024-09-28 01:35:16.426145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:20.738 [2024-09-28 01:35:16.426151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:22:20.738 [2024-09-28 01:35:16.426157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.198 ms 00:22:20.738 [2024-09-28 01:35:16.426163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:20.738 [2024-09-28 01:35:16.426241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:20.738 [2024-09-28 01:35:16.426249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:22:20.738 [2024-09-28 01:35:16.426255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.066 ms 00:22:20.738 [2024-09-28 01:35:16.426263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:20.738 [2024-09-28 01:35:16.426343] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:22:20.739 [2024-09-28 01:35:16.426351] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:22:20.739 [2024-09-28 01:35:16.426358] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:22:20.739 [2024-09-28 01:35:16.426364] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:20.739 [2024-09-28 01:35:16.426370] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:22:20.739 [2024-09-28 01:35:16.426375] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:22:20.739 [2024-09-28 01:35:16.426381] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:22:20.739 [2024-09-28 01:35:16.426387] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:22:20.739 [2024-09-28 01:35:16.426392] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:22:20.739 [2024-09-28 01:35:16.426397] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:20.739 [2024-09-28 01:35:16.426402] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:22:20.739 [2024-09-28 01:35:16.426407] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:22:20.739 [2024-09-28 01:35:16.426413] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:20.739 [2024-09-28 
01:35:16.426418] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:22:20.739 [2024-09-28 01:35:16.426423] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:22:20.739 [2024-09-28 01:35:16.426429] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:20.739 [2024-09-28 01:35:16.426435] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:22:20.739 [2024-09-28 01:35:16.426440] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:22:20.739 [2024-09-28 01:35:16.426444] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:20.739 [2024-09-28 01:35:16.426450] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:22:20.739 [2024-09-28 01:35:16.426455] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:22:20.739 [2024-09-28 01:35:16.426460] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:22:20.739 [2024-09-28 01:35:16.426470] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:22:20.739 [2024-09-28 01:35:16.426475] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:22:20.739 [2024-09-28 01:35:16.426480] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:22:20.739 [2024-09-28 01:35:16.426485] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:22:20.739 [2024-09-28 01:35:16.426489] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:22:20.739 [2024-09-28 01:35:16.426495] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:22:20.739 [2024-09-28 01:35:16.426500] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:22:20.739 [2024-09-28 01:35:16.426505] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:22:20.739 [2024-09-28 01:35:16.426510] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:22:20.739 [2024-09-28 01:35:16.426515] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:22:20.739 [2024-09-28 01:35:16.426520] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:22:20.739 [2024-09-28 01:35:16.426525] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:20.739 [2024-09-28 01:35:16.426530] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:22:20.739 [2024-09-28 01:35:16.426535] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:22:20.739 [2024-09-28 01:35:16.426540] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:20.739 [2024-09-28 01:35:16.426545] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:22:20.739 [2024-09-28 01:35:16.426550] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:22:20.739 [2024-09-28 01:35:16.426555] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:20.739 [2024-09-28 01:35:16.426561] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:22:20.739 [2024-09-28 01:35:16.426566] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:22:20.739 [2024-09-28 01:35:16.426571] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:20.739 [2024-09-28 01:35:16.426576] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:22:20.739 [2024-09-28 01:35:16.426583] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:22:20.739 
[2024-09-28 01:35:16.426589] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:22:20.739 [2024-09-28 01:35:16.426594] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:20.739 [2024-09-28 01:35:16.426602] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:22:20.739 [2024-09-28 01:35:16.426607] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:22:20.739 [2024-09-28 01:35:16.426613] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:22:20.739 [2024-09-28 01:35:16.426618] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:22:20.739 [2024-09-28 01:35:16.426624] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:22:20.739 [2024-09-28 01:35:16.426629] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:22:20.739 [2024-09-28 01:35:16.426635] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:22:20.739 [2024-09-28 01:35:16.426642] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:20.739 [2024-09-28 01:35:16.426648] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:22:20.739 [2024-09-28 01:35:16.426654] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:22:20.739 [2024-09-28 01:35:16.426659] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:22:20.739 [2024-09-28 01:35:16.426664] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:22:20.739 [2024-09-28 01:35:16.426670] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:22:20.739 [2024-09-28 01:35:16.426675] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:22:20.739 [2024-09-28 01:35:16.426681] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:22:20.739 [2024-09-28 01:35:16.426686] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:22:20.739 [2024-09-28 01:35:16.426692] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:22:20.739 [2024-09-28 01:35:16.426698] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:22:20.739 [2024-09-28 01:35:16.426703] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:22:20.739 [2024-09-28 01:35:16.426709] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:22:20.739 [2024-09-28 01:35:16.426714] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:22:20.739 [2024-09-28 01:35:16.426720] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] 
Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:22:20.739 [2024-09-28 01:35:16.426726] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:22:20.739 [2024-09-28 01:35:16.426732] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:20.739 [2024-09-28 01:35:16.426738] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:20.739 [2024-09-28 01:35:16.426745] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:22:20.739 [2024-09-28 01:35:16.426750] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:22:20.739 [2024-09-28 01:35:16.426756] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:22:20.739 [2024-09-28 01:35:16.426761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:20.739 [2024-09-28 01:35:16.426772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:22:20.739 [2024-09-28 01:35:16.426778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.472 ms 00:22:20.739 [2024-09-28 01:35:16.426783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:20.739 [2024-09-28 01:35:16.445885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:20.739 [2024-09-28 01:35:16.445912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:22:20.739 [2024-09-28 01:35:16.445922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.063 ms 00:22:20.739 [2024-09-28 01:35:16.445928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:20.739 [2024-09-28 01:35:16.445958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:20.739 [2024-09-28 01:35:16.445964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:22:20.739 [2024-09-28 01:35:16.445973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:22:20.739 [2024-09-28 01:35:16.445979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:20.739 [2024-09-28 01:35:16.486467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:20.739 [2024-09-28 01:35:16.486501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:22:20.739 [2024-09-28 01:35:16.486510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 40.441 ms 00:22:20.739 [2024-09-28 01:35:16.486517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:20.739 [2024-09-28 01:35:16.486549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:20.739 [2024-09-28 01:35:16.486556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:22:20.739 [2024-09-28 01:35:16.486563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:22:20.739 [2024-09-28 01:35:16.486569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:20.739 [2024-09-28 01:35:16.486651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:20.739 [2024-09-28 01:35:16.486659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 
00:22:20.739 [2024-09-28 01:35:16.486666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms 00:22:20.739 [2024-09-28 01:35:16.486673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:20.739 [2024-09-28 01:35:16.486704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:20.739 [2024-09-28 01:35:16.486714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:22:20.739 [2024-09-28 01:35:16.486720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:22:20.739 [2024-09-28 01:35:16.486727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:20.740 [2024-09-28 01:35:16.497530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:20.740 [2024-09-28 01:35:16.497556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:22:20.740 [2024-09-28 01:35:16.497563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.787 ms 00:22:20.740 [2024-09-28 01:35:16.497569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:20.740 [2024-09-28 01:35:16.497658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:20.740 [2024-09-28 01:35:16.497667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:22:20.740 [2024-09-28 01:35:16.497673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:22:20.740 [2024-09-28 01:35:16.497679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:20.740 [2024-09-28 01:35:16.509916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:20.740 [2024-09-28 01:35:16.509943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:22:20.740 [2024-09-28 01:35:16.509951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.220 ms 00:22:20.740 [2024-09-28 01:35:16.509960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:20.740 [2024-09-28 01:35:16.517003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:20.740 [2024-09-28 01:35:16.517028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:22:20.740 [2024-09-28 01:35:16.517035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.411 ms 00:22:20.740 [2024-09-28 01:35:16.517041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:20.740 [2024-09-28 01:35:16.559416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:20.740 [2024-09-28 01:35:16.559463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:22:20.740 [2024-09-28 01:35:16.559474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 42.333 ms 00:22:20.740 [2024-09-28 01:35:16.559480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:20.740 [2024-09-28 01:35:16.559592] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:22:20.740 [2024-09-28 01:35:16.559669] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:22:20.740 [2024-09-28 01:35:16.559739] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:22:20.740 [2024-09-28 01:35:16.559808] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:22:20.740 [2024-09-28 01:35:16.559819] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:20.740 [2024-09-28 01:35:16.559826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:22:20.740 [2024-09-28 01:35:16.559836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.300 ms 00:22:20.740 [2024-09-28 01:35:16.559842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:20.740 [2024-09-28 01:35:16.559885] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:22:20.740 [2024-09-28 01:35:16.559894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:20.740 [2024-09-28 01:35:16.559900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:22:20.740 [2024-09-28 01:35:16.559907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:22:20.740 [2024-09-28 01:35:16.559913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:20.740 [2024-09-28 01:35:16.570894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:20.740 [2024-09-28 01:35:16.570921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:22:20.740 [2024-09-28 01:35:16.570930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.965 ms 00:22:20.740 [2024-09-28 01:35:16.570936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:20.740 [2024-09-28 01:35:16.577357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:20.740 [2024-09-28 01:35:16.577382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:22:20.740 [2024-09-28 01:35:16.577390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:22:20.740 [2024-09-28 01:35:16.577399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:20.740 [2024-09-28 01:35:16.577459] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:22:20.740 [2024-09-28 01:35:16.577572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:20.740 [2024-09-28 01:35:16.577580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:22:20.740 [2024-09-28 01:35:16.577587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.114 ms 00:22:20.740 [2024-09-28 01:35:16.577593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:21.306 [2024-09-28 01:35:16.972114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:21.306 [2024-09-28 01:35:16.972173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:22:21.306 [2024-09-28 01:35:16.972186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 393.844 ms 00:22:21.306 [2024-09-28 01:35:16.972205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:21.306 [2024-09-28 01:35:16.975629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:21.306 [2024-09-28 01:35:16.975665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:22:21.306 [2024-09-28 01:35:16.975674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.869 ms 00:22:21.306 [2024-09-28 01:35:16.975680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:21.306 [2024-09-28 01:35:16.975988] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered 
chunk, offset = 262144, seq id 14 00:22:21.306 [2024-09-28 01:35:16.976012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:21.306 [2024-09-28 01:35:16.976019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:22:21.306 [2024-09-28 01:35:16.976026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.317 ms 00:22:21.306 [2024-09-28 01:35:16.976032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:21.306 [2024-09-28 01:35:16.976060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:21.306 [2024-09-28 01:35:16.976071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:22:21.306 [2024-09-28 01:35:16.976078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:22:21.306 [2024-09-28 01:35:16.976084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:21.306 [2024-09-28 01:35:16.976111] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 398.651 ms, result 0 00:22:21.306 [2024-09-28 01:35:16.976141] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:22:21.306 [2024-09-28 01:35:16.976244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:21.306 [2024-09-28 01:35:16.976254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:22:21.306 [2024-09-28 01:35:16.976261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.103 ms 00:22:21.306 [2024-09-28 01:35:16.976266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:21.565 [2024-09-28 01:35:17.380479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:21.565 [2024-09-28 01:35:17.380542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:22:21.565 [2024-09-28 01:35:17.380555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 403.377 ms 00:22:21.565 [2024-09-28 01:35:17.380562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:21.565 [2024-09-28 01:35:17.384608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:21.565 [2024-09-28 01:35:17.384646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:22:21.565 [2024-09-28 01:35:17.384657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.910 ms 00:22:21.565 [2024-09-28 01:35:17.384665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:21.566 [2024-09-28 01:35:17.385020] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:22:21.566 [2024-09-28 01:35:17.385063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:21.566 [2024-09-28 01:35:17.385071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:22:21.566 [2024-09-28 01:35:17.385079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.373 ms 00:22:21.566 [2024-09-28 01:35:17.385086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:21.566 [2024-09-28 01:35:17.385116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:21.566 [2024-09-28 01:35:17.385125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:22:21.566 [2024-09-28 01:35:17.385133] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:22:21.566 [2024-09-28 01:35:17.385140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:21.566 [2024-09-28 01:35:17.385173] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 409.025 ms, result 0 00:22:21.566 [2024-09-28 01:35:17.385227] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:22:21.566 [2024-09-28 01:35:17.385238] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:22:21.566 [2024-09-28 01:35:17.385247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:21.566 [2024-09-28 01:35:17.385255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:22:21.566 [2024-09-28 01:35:17.385266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 807.800 ms 00:22:21.566 [2024-09-28 01:35:17.385274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:21.566 [2024-09-28 01:35:17.385302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:21.566 [2024-09-28 01:35:17.385311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:22:21.566 [2024-09-28 01:35:17.385319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:22:21.566 [2024-09-28 01:35:17.385327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:21.566 [2024-09-28 01:35:17.400776] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:22:21.566 [2024-09-28 01:35:17.400886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:21.566 [2024-09-28 01:35:17.400897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:22:21.566 [2024-09-28 01:35:17.400907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.545 ms 00:22:21.566 [2024-09-28 01:35:17.400914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:21.566 [2024-09-28 01:35:17.401614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:21.566 [2024-09-28 01:35:17.401639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:22:21.566 [2024-09-28 01:35:17.401649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.620 ms 00:22:21.566 [2024-09-28 01:35:17.401656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:21.566 [2024-09-28 01:35:17.403908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:21.566 [2024-09-28 01:35:17.403931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:22:21.566 [2024-09-28 01:35:17.403941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.235 ms 00:22:21.566 [2024-09-28 01:35:17.403949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:21.566 [2024-09-28 01:35:17.403990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:21.566 [2024-09-28 01:35:17.403999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:22:21.566 [2024-09-28 01:35:17.404007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:22:21.566 [2024-09-28 01:35:17.404014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:21.566 [2024-09-28 01:35:17.404116] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:21.566 [2024-09-28 01:35:17.404125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:22:21.566 [2024-09-28 01:35:17.404133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:22:21.566 [2024-09-28 01:35:17.404141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:21.566 [2024-09-28 01:35:17.404160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:21.566 [2024-09-28 01:35:17.404170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:22:21.566 [2024-09-28 01:35:17.404177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:22:21.566 [2024-09-28 01:35:17.404185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:21.566 [2024-09-28 01:35:17.404223] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:22:21.566 [2024-09-28 01:35:17.404233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:21.566 [2024-09-28 01:35:17.404240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:22:21.566 [2024-09-28 01:35:17.404248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:22:21.566 [2024-09-28 01:35:17.404255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:21.566 [2024-09-28 01:35:17.404304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:21.566 [2024-09-28 01:35:17.404315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:22:21.566 [2024-09-28 01:35:17.404325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:22:21.566 [2024-09-28 01:35:17.404331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:21.566 [2024-09-28 01:35:17.405222] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1001.995 ms, result 0 00:22:21.566 [2024-09-28 01:35:17.417580] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:21.566 [2024-09-28 01:35:17.433565] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:22:21.566 [2024-09-28 01:35:17.441670] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:22:22.133 Validate MD5 checksum, iteration 1 00:22:22.133 01:35:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:22.133 01:35:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:22:22.133 01:35:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:22:22.133 01:35:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:22:22.133 01:35:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:22:22.133 01:35:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:22:22.133 01:35:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:22:22.133 01:35:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:22:22.133 01:35:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:22:22.133 01:35:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:22:22.133 01:35:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:22:22.133 01:35:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:22:22.133 01:35:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:22:22.133 01:35:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:22:22.133 01:35:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:22:22.133 [2024-09-28 01:35:17.886057] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:22:22.133 [2024-09-28 01:35:17.886372] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78245 ] 00:22:22.133 [2024-09-28 01:35:18.036942] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:22.391 [2024-09-28 01:35:18.214677] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:22:25.223  Copying: 741/1024 [MB] (741 MBps) Copying: 1024/1024 [MB] (average 720 MBps) 00:22:25.223 00:22:25.223 01:35:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:22:25.223 01:35:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:22:27.750 01:35:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:22:27.750 01:35:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=1361753c6b6330eaa5102e090bdb69c8 00:22:27.750 01:35:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 1361753c6b6330eaa5102e090bdb69c8 != \1\3\6\1\7\5\3\c\6\b\6\3\3\0\e\a\a\5\1\0\2\e\0\9\0\b\d\b\6\9\c\8 ]] 00:22:27.750 01:35:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:22:27.750 01:35:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:22:27.750 01:35:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:22:27.750 Validate MD5 checksum, iteration 2 00:22:27.750 01:35:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:22:27.750 01:35:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:22:27.750 01:35:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:22:27.750 01:35:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:22:27.750 01:35:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:22:27.750 01:35:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:22:27.750 [2024-09-28 01:35:23.245405] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:22:27.750 [2024-09-28 01:35:23.245519] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78305 ] 00:22:27.750 [2024-09-28 01:35:23.391884] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:27.750 [2024-09-28 01:35:23.535929] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:22:30.624  Copying: 739/1024 [MB] (739 MBps) Copying: 1024/1024 [MB] (average 722 MBps) 00:22:30.624 00:22:30.624 01:35:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:22:30.624 01:35:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:22:32.528 01:35:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:22:32.528 01:35:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=ba5bf52c52e14a8ba08a3df713ac5a25 00:22:32.528 01:35:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ ba5bf52c52e14a8ba08a3df713ac5a25 != \b\a\5\b\f\5\2\c\5\2\e\1\4\a\8\b\a\0\8\a\3\d\f\7\1\3\a\c\5\a\2\5 ]] 00:22:32.528 01:35:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:22:32.528 01:35:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:22:32.528 01:35:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:22:32.528 01:35:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:22:32.528 01:35:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:22:32.528 01:35:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:22:32.528 01:35:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:22:32.528 01:35:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:22:32.528 01:35:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:22:32.528 01:35:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:22:32.528 01:35:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 78209 ]] 00:22:32.528 01:35:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 78209 00:22:32.528 01:35:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 78209 ']' 00:22:32.528 01:35:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 78209 00:22:32.528 01:35:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:22:32.528 01:35:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:32.528 01:35:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78209 00:22:32.528 killing process with pid 78209 00:22:32.528 01:35:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:32.528 01:35:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:32.528 01:35:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 
-- # echo 'killing process with pid 78209' 00:22:32.528 01:35:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@969 -- # kill 78209 00:22:32.528 01:35:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 78209 00:22:33.095 [2024-09-28 01:35:28.961718] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:22:33.095 [2024-09-28 01:35:28.972485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:33.095 [2024-09-28 01:35:28.972517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:22:33.095 [2024-09-28 01:35:28.972527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:22:33.095 [2024-09-28 01:35:28.972536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:33.096 [2024-09-28 01:35:28.972553] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:22:33.096 [2024-09-28 01:35:28.974609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:33.096 [2024-09-28 01:35:28.974631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:22:33.096 [2024-09-28 01:35:28.974640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.045 ms 00:22:33.096 [2024-09-28 01:35:28.974647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:33.096 [2024-09-28 01:35:28.974840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:33.096 [2024-09-28 01:35:28.974848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:22:33.096 [2024-09-28 01:35:28.974855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.176 ms 00:22:33.096 [2024-09-28 01:35:28.974860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:33.096 [2024-09-28 01:35:28.975851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:33.096 [2024-09-28 01:35:28.975957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:22:33.096 [2024-09-28 01:35:28.975969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.979 ms 00:22:33.096 [2024-09-28 01:35:28.975975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:33.096 [2024-09-28 01:35:28.976859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:33.096 [2024-09-28 01:35:28.976875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:22:33.096 [2024-09-28 01:35:28.976883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.860 ms 00:22:33.096 [2024-09-28 01:35:28.976889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:33.096 [2024-09-28 01:35:28.983805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:33.096 [2024-09-28 01:35:28.983831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:22:33.096 [2024-09-28 01:35:28.983839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.883 ms 00:22:33.096 [2024-09-28 01:35:28.983845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:33.096 [2024-09-28 01:35:28.987809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:33.096 [2024-09-28 01:35:28.987835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:22:33.096 [2024-09-28 01:35:28.987843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 
duration: 3.936 ms 00:22:33.096 [2024-09-28 01:35:28.987849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:33.096 [2024-09-28 01:35:28.987913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:33.096 [2024-09-28 01:35:28.987925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:22:33.096 [2024-09-28 01:35:28.987932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.036 ms 00:22:33.096 [2024-09-28 01:35:28.987937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:33.096 [2024-09-28 01:35:28.995202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:33.096 [2024-09-28 01:35:28.995226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:22:33.096 [2024-09-28 01:35:28.995234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.252 ms 00:22:33.096 [2024-09-28 01:35:28.995239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:33.096 [2024-09-28 01:35:29.002467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:33.096 [2024-09-28 01:35:29.002599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:22:33.096 [2024-09-28 01:35:29.002610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.203 ms 00:22:33.096 [2024-09-28 01:35:29.002616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:33.096 [2024-09-28 01:35:29.009646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:33.096 [2024-09-28 01:35:29.009734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:22:33.096 [2024-09-28 01:35:29.009745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.007 ms 00:22:33.096 [2024-09-28 01:35:29.009751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:33.096 [2024-09-28 01:35:29.016492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:33.096 [2024-09-28 01:35:29.016584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:22:33.096 [2024-09-28 01:35:29.016595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.690 ms 00:22:33.096 [2024-09-28 01:35:29.016601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:33.096 [2024-09-28 01:35:29.016624] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:22:33.096 [2024-09-28 01:35:29.016634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:22:33.096 [2024-09-28 01:35:29.016642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:22:33.096 [2024-09-28 01:35:29.016648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:22:33.096 [2024-09-28 01:35:29.016655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:33.096 [2024-09-28 01:35:29.016661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:33.096 [2024-09-28 01:35:29.016667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:33.096 [2024-09-28 01:35:29.016672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:33.096 [2024-09-28 01:35:29.016678] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:33.096 [2024-09-28 01:35:29.016684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:33.096 [2024-09-28 01:35:29.016691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:33.096 [2024-09-28 01:35:29.016696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:33.096 [2024-09-28 01:35:29.016702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:33.096 [2024-09-28 01:35:29.016707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:33.096 [2024-09-28 01:35:29.016713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:33.096 [2024-09-28 01:35:29.016719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:33.096 [2024-09-28 01:35:29.016724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:33.096 [2024-09-28 01:35:29.016730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:33.096 [2024-09-28 01:35:29.016735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:33.096 [2024-09-28 01:35:29.016742] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:22:33.096 [2024-09-28 01:35:29.016748] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: f9de8392-050f-423e-8d61-7e1879835680 00:22:33.096 [2024-09-28 01:35:29.016755] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:22:33.096 [2024-09-28 01:35:29.016760] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:22:33.096 [2024-09-28 01:35:29.016765] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:22:33.096 [2024-09-28 01:35:29.016774] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:22:33.096 [2024-09-28 01:35:29.016780] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:22:33.096 [2024-09-28 01:35:29.016786] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:22:33.096 [2024-09-28 01:35:29.016791] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:22:33.096 [2024-09-28 01:35:29.016796] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:22:33.096 [2024-09-28 01:35:29.016801] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:22:33.096 [2024-09-28 01:35:29.016806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:33.096 [2024-09-28 01:35:29.016816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:22:33.096 [2024-09-28 01:35:29.016832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.183 ms 00:22:33.096 [2024-09-28 01:35:29.016838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:33.096 [2024-09-28 01:35:29.026279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:33.096 [2024-09-28 01:35:29.026305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:22:33.096 [2024-09-28 01:35:29.026313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.427 ms 00:22:33.096 
[2024-09-28 01:35:29.026319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:33.096 [2024-09-28 01:35:29.026584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:33.096 [2024-09-28 01:35:29.026591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:22:33.096 [2024-09-28 01:35:29.026597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.249 ms 00:22:33.096 [2024-09-28 01:35:29.026603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:33.355 [2024-09-28 01:35:29.055617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:22:33.355 [2024-09-28 01:35:29.055728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:22:33.355 [2024-09-28 01:35:29.055740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:22:33.355 [2024-09-28 01:35:29.055746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:33.355 [2024-09-28 01:35:29.055772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:22:33.355 [2024-09-28 01:35:29.055778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:22:33.355 [2024-09-28 01:35:29.055784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:22:33.355 [2024-09-28 01:35:29.055790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:33.355 [2024-09-28 01:35:29.055843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:22:33.355 [2024-09-28 01:35:29.055851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:22:33.355 [2024-09-28 01:35:29.055860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:22:33.355 [2024-09-28 01:35:29.055866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:33.355 [2024-09-28 01:35:29.055880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:22:33.355 [2024-09-28 01:35:29.055885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:22:33.355 [2024-09-28 01:35:29.055891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:22:33.355 [2024-09-28 01:35:29.055897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:33.355 [2024-09-28 01:35:29.114323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:22:33.355 [2024-09-28 01:35:29.114464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:22:33.355 [2024-09-28 01:35:29.114478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:22:33.355 [2024-09-28 01:35:29.114484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:33.355 [2024-09-28 01:35:29.162581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:22:33.355 [2024-09-28 01:35:29.162620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:22:33.355 [2024-09-28 01:35:29.162629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:22:33.355 [2024-09-28 01:35:29.162636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:33.355 [2024-09-28 01:35:29.162704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:22:33.355 [2024-09-28 01:35:29.162712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:22:33.355 [2024-09-28 01:35:29.162718] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl] duration: 0.000 ms 00:22:33.355 [2024-09-28 01:35:29.162728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:33.355 [2024-09-28 01:35:29.162760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:22:33.355 [2024-09-28 01:35:29.162767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:22:33.355 [2024-09-28 01:35:29.162776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:22:33.355 [2024-09-28 01:35:29.162782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:33.355 [2024-09-28 01:35:29.162849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:22:33.355 [2024-09-28 01:35:29.162856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:22:33.355 [2024-09-28 01:35:29.162863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:22:33.355 [2024-09-28 01:35:29.162868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:33.355 [2024-09-28 01:35:29.162899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:22:33.355 [2024-09-28 01:35:29.162906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:22:33.356 [2024-09-28 01:35:29.162912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:22:33.356 [2024-09-28 01:35:29.162918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:33.356 [2024-09-28 01:35:29.162946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:22:33.356 [2024-09-28 01:35:29.162952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:22:33.356 [2024-09-28 01:35:29.162958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:22:33.356 [2024-09-28 01:35:29.162964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:33.356 [2024-09-28 01:35:29.162998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:22:33.356 [2024-09-28 01:35:29.163005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:22:33.356 [2024-09-28 01:35:29.163011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:22:33.356 [2024-09-28 01:35:29.163017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:33.356 [2024-09-28 01:35:29.163106] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 190.599 ms, result 0 00:22:34.292 01:35:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:22:34.292 01:35:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:34.292 01:35:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:22:34.292 01:35:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:22:34.292 01:35:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:22:34.292 01:35:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:34.292 Remove shared memory files 00:22:34.292 01:35:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:22:34.292 01:35:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:22:34.292 01:35:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:22:34.292 01:35:29 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:22:34.292 01:35:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid78032 00:22:34.292 01:35:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:22:34.292 01:35:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:22:34.292 ************************************ 00:22:34.292 END TEST ftl_upgrade_shutdown 00:22:34.292 ************************************ 00:22:34.292 00:22:34.292 real 1m14.385s 00:22:34.292 user 1m44.489s 00:22:34.292 sys 0m16.714s 00:22:34.292 01:35:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:34.292 01:35:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:34.292 01:35:29 ftl -- ftl/ftl.sh@80 -- # [[ 1 -eq 1 ]] 00:22:34.292 01:35:29 ftl -- ftl/ftl.sh@81 -- # run_test ftl_restore_fast /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:22:34.292 01:35:29 ftl -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:22:34.292 01:35:29 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:34.292 01:35:29 ftl -- common/autotest_common.sh@10 -- # set +x 00:22:34.292 ************************************ 00:22:34.292 START TEST ftl_restore_fast 00:22:34.292 ************************************ 00:22:34.292 01:35:29 ftl.ftl_restore_fast -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:22:34.292 * Looking for test storage... 00:22:34.292 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:22:34.292 01:35:29 ftl.ftl_restore_fast -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:22:34.292 01:35:29 ftl.ftl_restore_fast -- common/autotest_common.sh@1681 -- # lcov --version 00:22:34.292 01:35:29 ftl.ftl_restore_fast -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:22:34.292 01:35:30 ftl.ftl_restore_fast -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:22:34.292 01:35:30 ftl.ftl_restore_fast -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:34.292 01:35:30 ftl.ftl_restore_fast -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:34.292 01:35:30 ftl.ftl_restore_fast -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:34.292 01:35:30 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # IFS=.-: 00:22:34.292 01:35:30 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # read -ra ver1 00:22:34.292 01:35:30 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # IFS=.-: 00:22:34.292 01:35:30 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # read -ra ver2 00:22:34.292 01:35:30 ftl.ftl_restore_fast -- scripts/common.sh@338 -- # local 'op=<' 00:22:34.292 01:35:30 ftl.ftl_restore_fast -- scripts/common.sh@340 -- # ver1_l=2 00:22:34.292 01:35:30 ftl.ftl_restore_fast -- scripts/common.sh@341 -- # ver2_l=1 00:22:34.292 01:35:30 ftl.ftl_restore_fast -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:34.292 01:35:30 ftl.ftl_restore_fast -- scripts/common.sh@344 -- # case "$op" in 00:22:34.292 01:35:30 ftl.ftl_restore_fast -- scripts/common.sh@345 -- # : 1 00:22:34.292 01:35:30 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:34.292 01:35:30 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:34.292 01:35:30 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # decimal 1 00:22:34.292 01:35:30 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=1 00:22:34.292 01:35:30 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:34.292 01:35:30 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 1 00:22:34.292 01:35:30 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # ver1[v]=1 00:22:34.292 01:35:30 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # decimal 2 00:22:34.292 01:35:30 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=2 00:22:34.292 01:35:30 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:34.292 01:35:30 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 2 00:22:34.292 01:35:30 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # ver2[v]=2 00:22:34.292 01:35:30 ftl.ftl_restore_fast -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:34.292 01:35:30 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:34.292 01:35:30 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # return 0 00:22:34.292 01:35:30 ftl.ftl_restore_fast -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:34.292 01:35:30 ftl.ftl_restore_fast -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:22:34.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:34.292 --rc genhtml_branch_coverage=1 00:22:34.292 --rc genhtml_function_coverage=1 00:22:34.292 --rc genhtml_legend=1 00:22:34.292 --rc geninfo_all_blocks=1 00:22:34.292 --rc geninfo_unexecuted_blocks=1 00:22:34.292 00:22:34.292 ' 00:22:34.292 01:35:30 ftl.ftl_restore_fast -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:22:34.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:34.292 --rc genhtml_branch_coverage=1 00:22:34.292 --rc genhtml_function_coverage=1 00:22:34.292 --rc genhtml_legend=1 00:22:34.292 --rc geninfo_all_blocks=1 00:22:34.292 --rc geninfo_unexecuted_blocks=1 00:22:34.292 00:22:34.292 ' 00:22:34.292 01:35:30 ftl.ftl_restore_fast -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:22:34.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:34.292 --rc genhtml_branch_coverage=1 00:22:34.292 --rc genhtml_function_coverage=1 00:22:34.292 --rc genhtml_legend=1 00:22:34.292 --rc geninfo_all_blocks=1 00:22:34.292 --rc geninfo_unexecuted_blocks=1 00:22:34.292 00:22:34.292 ' 00:22:34.292 01:35:30 ftl.ftl_restore_fast -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:22:34.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:34.292 --rc genhtml_branch_coverage=1 00:22:34.292 --rc genhtml_function_coverage=1 00:22:34.292 --rc genhtml_legend=1 00:22:34.292 --rc geninfo_all_blocks=1 00:22:34.292 --rc geninfo_unexecuted_blocks=1 00:22:34.292 00:22:34.292 ' 00:22:34.292 01:35:30 ftl.ftl_restore_fast -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:22:34.292 01:35:30 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:22:34.292 01:35:30 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:22:34.292 01:35:30 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:22:34.292 01:35:30 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:22:34.292 01:35:30 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:22:34.292 01:35:30 ftl.ftl_restore_fast -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:34.293 01:35:30 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:22:34.293 01:35:30 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:22:34.293 01:35:30 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:34.293 01:35:30 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:34.293 01:35:30 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:22:34.293 01:35:30 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:22:34.293 01:35:30 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:34.293 01:35:30 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:34.293 01:35:30 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:22:34.293 01:35:30 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:22:34.293 01:35:30 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:34.293 01:35:30 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:34.293 01:35:30 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:22:34.293 01:35:30 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:22:34.293 01:35:30 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:34.293 01:35:30 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:34.293 01:35:30 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:34.293 01:35:30 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:34.293 01:35:30 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:22:34.293 01:35:30 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # spdk_ini_pid= 00:22:34.293 01:35:30 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:34.293 01:35:30 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:34.293 01:35:30 ftl.ftl_restore_fast -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:34.293 01:35:30 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mktemp -d 00:22:34.293 01:35:30 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.z0503LzseE 00:22:34.293 01:35:30 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:22:34.293 01:35:30 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:22:34.293 01:35:30 ftl.ftl_restore_fast -- ftl/restore.sh@19 -- # fast_shutdown=1 00:22:34.293 01:35:30 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:22:34.293 01:35:30 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:22:34.293 01:35:30 ftl.ftl_restore_fast -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:22:34.293 01:35:30 ftl.ftl_restore_fast 
-- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:22:34.293 01:35:30 ftl.ftl_restore_fast -- ftl/restore.sh@23 -- # shift 3 00:22:34.293 01:35:30 ftl.ftl_restore_fast -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:22:34.293 01:35:30 ftl.ftl_restore_fast -- ftl/restore.sh@25 -- # timeout=240 00:22:34.293 01:35:30 ftl.ftl_restore_fast -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:22:34.293 01:35:30 ftl.ftl_restore_fast -- ftl/restore.sh@39 -- # svcpid=78455 00:22:34.293 01:35:30 ftl.ftl_restore_fast -- ftl/restore.sh@41 -- # waitforlisten 78455 00:22:34.293 01:35:30 ftl.ftl_restore_fast -- common/autotest_common.sh@831 -- # '[' -z 78455 ']' 00:22:34.293 01:35:30 ftl.ftl_restore_fast -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:34.293 01:35:30 ftl.ftl_restore_fast -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:34.293 01:35:30 ftl.ftl_restore_fast -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:34.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:34.293 01:35:30 ftl.ftl_restore_fast -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:34.293 01:35:30 ftl.ftl_restore_fast -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:34.293 01:35:30 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:22:34.293 [2024-09-28 01:35:30.155800] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:22:34.293 [2024-09-28 01:35:30.155985] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78455 ] 00:22:34.552 [2024-09-28 01:35:30.298573] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:34.552 [2024-09-28 01:35:30.440087] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:22:35.117 01:35:30 ftl.ftl_restore_fast -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:35.117 01:35:30 ftl.ftl_restore_fast -- common/autotest_common.sh@864 -- # return 0 00:22:35.117 01:35:30 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:22:35.117 01:35:30 ftl.ftl_restore_fast -- ftl/common.sh@54 -- # local name=nvme0 00:22:35.117 01:35:30 ftl.ftl_restore_fast -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:22:35.117 01:35:30 ftl.ftl_restore_fast -- ftl/common.sh@56 -- # local size=103424 00:22:35.117 01:35:30 ftl.ftl_restore_fast -- ftl/common.sh@59 -- # local base_bdev 00:22:35.117 01:35:30 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:22:35.376 01:35:31 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:22:35.376 01:35:31 ftl.ftl_restore_fast -- ftl/common.sh@62 -- # local base_size 00:22:35.376 01:35:31 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:22:35.376 01:35:31 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:22:35.376 01:35:31 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:22:35.376 01:35:31 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:22:35.376 01:35:31 ftl.ftl_restore_fast -- 
common/autotest_common.sh@1381 -- # local nb 00:22:35.376 01:35:31 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:22:35.634 01:35:31 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:22:35.634 { 00:22:35.634 "name": "nvme0n1", 00:22:35.634 "aliases": [ 00:22:35.634 "b7da6e97-a16a-413d-ad97-73c50d258652" 00:22:35.634 ], 00:22:35.634 "product_name": "NVMe disk", 00:22:35.634 "block_size": 4096, 00:22:35.634 "num_blocks": 1310720, 00:22:35.634 "uuid": "b7da6e97-a16a-413d-ad97-73c50d258652", 00:22:35.634 "numa_id": -1, 00:22:35.634 "assigned_rate_limits": { 00:22:35.634 "rw_ios_per_sec": 0, 00:22:35.634 "rw_mbytes_per_sec": 0, 00:22:35.634 "r_mbytes_per_sec": 0, 00:22:35.634 "w_mbytes_per_sec": 0 00:22:35.634 }, 00:22:35.634 "claimed": true, 00:22:35.634 "claim_type": "read_many_write_one", 00:22:35.634 "zoned": false, 00:22:35.634 "supported_io_types": { 00:22:35.634 "read": true, 00:22:35.634 "write": true, 00:22:35.634 "unmap": true, 00:22:35.634 "flush": true, 00:22:35.634 "reset": true, 00:22:35.634 "nvme_admin": true, 00:22:35.634 "nvme_io": true, 00:22:35.634 "nvme_io_md": false, 00:22:35.634 "write_zeroes": true, 00:22:35.634 "zcopy": false, 00:22:35.634 "get_zone_info": false, 00:22:35.634 "zone_management": false, 00:22:35.634 "zone_append": false, 00:22:35.634 "compare": true, 00:22:35.634 "compare_and_write": false, 00:22:35.634 "abort": true, 00:22:35.634 "seek_hole": false, 00:22:35.634 "seek_data": false, 00:22:35.634 "copy": true, 00:22:35.634 "nvme_iov_md": false 00:22:35.634 }, 00:22:35.634 "driver_specific": { 00:22:35.634 "nvme": [ 00:22:35.634 { 00:22:35.634 "pci_address": "0000:00:11.0", 00:22:35.634 "trid": { 00:22:35.634 "trtype": "PCIe", 00:22:35.634 "traddr": "0000:00:11.0" 00:22:35.634 }, 00:22:35.634 "ctrlr_data": { 00:22:35.634 "cntlid": 0, 00:22:35.634 "vendor_id": "0x1b36", 00:22:35.634 "model_number": "QEMU NVMe Ctrl", 00:22:35.634 "serial_number": "12341", 00:22:35.634 "firmware_revision": "8.0.0", 00:22:35.634 "subnqn": "nqn.2019-08.org.qemu:12341", 00:22:35.634 "oacs": { 00:22:35.634 "security": 0, 00:22:35.634 "format": 1, 00:22:35.634 "firmware": 0, 00:22:35.634 "ns_manage": 1 00:22:35.634 }, 00:22:35.634 "multi_ctrlr": false, 00:22:35.634 "ana_reporting": false 00:22:35.634 }, 00:22:35.634 "vs": { 00:22:35.634 "nvme_version": "1.4" 00:22:35.634 }, 00:22:35.634 "ns_data": { 00:22:35.634 "id": 1, 00:22:35.634 "can_share": false 00:22:35.634 } 00:22:35.634 } 00:22:35.634 ], 00:22:35.634 "mp_policy": "active_passive" 00:22:35.634 } 00:22:35.634 } 00:22:35.634 ]' 00:22:35.634 01:35:31 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:22:35.634 01:35:31 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:22:35.634 01:35:31 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:22:35.634 01:35:31 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=1310720 00:22:35.634 01:35:31 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:22:35.634 01:35:31 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 5120 00:22:35.634 01:35:31 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # base_size=5120 00:22:35.634 01:35:31 ftl.ftl_restore_fast -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:22:35.634 01:35:31 ftl.ftl_restore_fast -- ftl/common.sh@67 -- # clear_lvols 00:22:35.634 01:35:31 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:22:35.634 01:35:31 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:22:35.894 01:35:31 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # stores=90db5943-eb6a-4f09-a3f0-92863c618a7e 00:22:35.894 01:35:31 ftl.ftl_restore_fast -- ftl/common.sh@29 -- # for lvs in $stores 00:22:35.894 01:35:31 ftl.ftl_restore_fast -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 90db5943-eb6a-4f09-a3f0-92863c618a7e 00:22:36.155 01:35:31 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:22:36.413 01:35:32 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # lvs=cfbf7c05-ab70-41fc-9b4c-8633e89a3b56 00:22:36.413 01:35:32 ftl.ftl_restore_fast -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u cfbf7c05-ab70-41fc-9b4c-8633e89a3b56 00:22:36.413 01:35:32 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # split_bdev=84af244f-3f0e-4b66-9fff-34d11b73376c 00:22:36.413 01:35:32 ftl.ftl_restore_fast -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:22:36.413 01:35:32 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 84af244f-3f0e-4b66-9fff-34d11b73376c 00:22:36.413 01:35:32 ftl.ftl_restore_fast -- ftl/common.sh@35 -- # local name=nvc0 00:22:36.413 01:35:32 ftl.ftl_restore_fast -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:22:36.413 01:35:32 ftl.ftl_restore_fast -- ftl/common.sh@37 -- # local base_bdev=84af244f-3f0e-4b66-9fff-34d11b73376c 00:22:36.413 01:35:32 ftl.ftl_restore_fast -- ftl/common.sh@38 -- # local cache_size= 00:22:36.413 01:35:32 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # get_bdev_size 84af244f-3f0e-4b66-9fff-34d11b73376c 00:22:36.413 01:35:32 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=84af244f-3f0e-4b66-9fff-34d11b73376c 00:22:36.413 01:35:32 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:22:36.413 01:35:32 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:22:36.413 01:35:32 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local nb 00:22:36.413 01:35:32 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 84af244f-3f0e-4b66-9fff-34d11b73376c 00:22:36.672 01:35:32 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:22:36.672 { 00:22:36.672 "name": "84af244f-3f0e-4b66-9fff-34d11b73376c", 00:22:36.672 "aliases": [ 00:22:36.672 "lvs/nvme0n1p0" 00:22:36.672 ], 00:22:36.672 "product_name": "Logical Volume", 00:22:36.672 "block_size": 4096, 00:22:36.672 "num_blocks": 26476544, 00:22:36.672 "uuid": "84af244f-3f0e-4b66-9fff-34d11b73376c", 00:22:36.672 "assigned_rate_limits": { 00:22:36.672 "rw_ios_per_sec": 0, 00:22:36.672 "rw_mbytes_per_sec": 0, 00:22:36.672 "r_mbytes_per_sec": 0, 00:22:36.672 "w_mbytes_per_sec": 0 00:22:36.672 }, 00:22:36.672 "claimed": false, 00:22:36.672 "zoned": false, 00:22:36.672 "supported_io_types": { 00:22:36.672 "read": true, 00:22:36.672 "write": true, 00:22:36.672 "unmap": true, 00:22:36.672 "flush": false, 00:22:36.672 "reset": true, 00:22:36.672 "nvme_admin": false, 00:22:36.672 "nvme_io": false, 00:22:36.672 "nvme_io_md": false, 00:22:36.672 "write_zeroes": true, 00:22:36.672 "zcopy": false, 00:22:36.672 "get_zone_info": false, 00:22:36.672 "zone_management": false, 00:22:36.672 
"zone_append": false, 00:22:36.672 "compare": false, 00:22:36.672 "compare_and_write": false, 00:22:36.672 "abort": false, 00:22:36.672 "seek_hole": true, 00:22:36.672 "seek_data": true, 00:22:36.672 "copy": false, 00:22:36.672 "nvme_iov_md": false 00:22:36.672 }, 00:22:36.672 "driver_specific": { 00:22:36.672 "lvol": { 00:22:36.672 "lvol_store_uuid": "cfbf7c05-ab70-41fc-9b4c-8633e89a3b56", 00:22:36.672 "base_bdev": "nvme0n1", 00:22:36.672 "thin_provision": true, 00:22:36.672 "num_allocated_clusters": 0, 00:22:36.672 "snapshot": false, 00:22:36.672 "clone": false, 00:22:36.672 "esnap_clone": false 00:22:36.672 } 00:22:36.672 } 00:22:36.672 } 00:22:36.672 ]' 00:22:36.672 01:35:32 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:22:36.672 01:35:32 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:22:36.672 01:35:32 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:22:36.672 01:35:32 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=26476544 00:22:36.672 01:35:32 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:22:36.672 01:35:32 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 103424 00:22:36.672 01:35:32 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # local base_size=5171 00:22:36.672 01:35:32 ftl.ftl_restore_fast -- ftl/common.sh@44 -- # local nvc_bdev 00:22:36.672 01:35:32 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:22:36.931 01:35:32 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:22:36.931 01:35:32 ftl.ftl_restore_fast -- ftl/common.sh@47 -- # [[ -z '' ]] 00:22:36.931 01:35:32 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # get_bdev_size 84af244f-3f0e-4b66-9fff-34d11b73376c 00:22:36.931 01:35:32 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=84af244f-3f0e-4b66-9fff-34d11b73376c 00:22:36.931 01:35:32 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:22:36.931 01:35:32 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:22:36.931 01:35:32 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local nb 00:22:36.931 01:35:32 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 84af244f-3f0e-4b66-9fff-34d11b73376c 00:22:37.189 01:35:32 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:22:37.189 { 00:22:37.189 "name": "84af244f-3f0e-4b66-9fff-34d11b73376c", 00:22:37.189 "aliases": [ 00:22:37.189 "lvs/nvme0n1p0" 00:22:37.189 ], 00:22:37.189 "product_name": "Logical Volume", 00:22:37.189 "block_size": 4096, 00:22:37.189 "num_blocks": 26476544, 00:22:37.189 "uuid": "84af244f-3f0e-4b66-9fff-34d11b73376c", 00:22:37.189 "assigned_rate_limits": { 00:22:37.189 "rw_ios_per_sec": 0, 00:22:37.189 "rw_mbytes_per_sec": 0, 00:22:37.189 "r_mbytes_per_sec": 0, 00:22:37.189 "w_mbytes_per_sec": 0 00:22:37.189 }, 00:22:37.189 "claimed": false, 00:22:37.189 "zoned": false, 00:22:37.189 "supported_io_types": { 00:22:37.189 "read": true, 00:22:37.189 "write": true, 00:22:37.189 "unmap": true, 00:22:37.189 "flush": false, 00:22:37.189 "reset": true, 00:22:37.189 "nvme_admin": false, 00:22:37.189 "nvme_io": false, 00:22:37.189 "nvme_io_md": false, 00:22:37.189 "write_zeroes": true, 00:22:37.189 "zcopy": false, 00:22:37.189 "get_zone_info": false, 00:22:37.189 
"zone_management": false, 00:22:37.189 "zone_append": false, 00:22:37.189 "compare": false, 00:22:37.189 "compare_and_write": false, 00:22:37.189 "abort": false, 00:22:37.189 "seek_hole": true, 00:22:37.189 "seek_data": true, 00:22:37.189 "copy": false, 00:22:37.189 "nvme_iov_md": false 00:22:37.189 }, 00:22:37.189 "driver_specific": { 00:22:37.189 "lvol": { 00:22:37.189 "lvol_store_uuid": "cfbf7c05-ab70-41fc-9b4c-8633e89a3b56", 00:22:37.189 "base_bdev": "nvme0n1", 00:22:37.189 "thin_provision": true, 00:22:37.189 "num_allocated_clusters": 0, 00:22:37.189 "snapshot": false, 00:22:37.189 "clone": false, 00:22:37.189 "esnap_clone": false 00:22:37.189 } 00:22:37.189 } 00:22:37.189 } 00:22:37.189 ]' 00:22:37.189 01:35:32 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:22:37.189 01:35:32 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:22:37.189 01:35:32 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:22:37.189 01:35:33 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=26476544 00:22:37.189 01:35:33 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:22:37.189 01:35:33 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 103424 00:22:37.189 01:35:33 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # cache_size=5171 00:22:37.189 01:35:33 ftl.ftl_restore_fast -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:22:37.448 01:35:33 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:22:37.448 01:35:33 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # get_bdev_size 84af244f-3f0e-4b66-9fff-34d11b73376c 00:22:37.448 01:35:33 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=84af244f-3f0e-4b66-9fff-34d11b73376c 00:22:37.448 01:35:33 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:22:37.448 01:35:33 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:22:37.448 01:35:33 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local nb 00:22:37.448 01:35:33 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 84af244f-3f0e-4b66-9fff-34d11b73376c 00:22:37.706 01:35:33 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:22:37.706 { 00:22:37.706 "name": "84af244f-3f0e-4b66-9fff-34d11b73376c", 00:22:37.706 "aliases": [ 00:22:37.706 "lvs/nvme0n1p0" 00:22:37.706 ], 00:22:37.706 "product_name": "Logical Volume", 00:22:37.706 "block_size": 4096, 00:22:37.706 "num_blocks": 26476544, 00:22:37.706 "uuid": "84af244f-3f0e-4b66-9fff-34d11b73376c", 00:22:37.706 "assigned_rate_limits": { 00:22:37.706 "rw_ios_per_sec": 0, 00:22:37.706 "rw_mbytes_per_sec": 0, 00:22:37.706 "r_mbytes_per_sec": 0, 00:22:37.706 "w_mbytes_per_sec": 0 00:22:37.706 }, 00:22:37.706 "claimed": false, 00:22:37.706 "zoned": false, 00:22:37.706 "supported_io_types": { 00:22:37.706 "read": true, 00:22:37.706 "write": true, 00:22:37.706 "unmap": true, 00:22:37.706 "flush": false, 00:22:37.706 "reset": true, 00:22:37.706 "nvme_admin": false, 00:22:37.706 "nvme_io": false, 00:22:37.706 "nvme_io_md": false, 00:22:37.706 "write_zeroes": true, 00:22:37.706 "zcopy": false, 00:22:37.706 "get_zone_info": false, 00:22:37.706 "zone_management": false, 00:22:37.706 "zone_append": false, 00:22:37.706 "compare": false, 00:22:37.706 "compare_and_write": false, 00:22:37.706 "abort": false, 
00:22:37.706 "seek_hole": true, 00:22:37.706 "seek_data": true, 00:22:37.706 "copy": false, 00:22:37.706 "nvme_iov_md": false 00:22:37.706 }, 00:22:37.706 "driver_specific": { 00:22:37.706 "lvol": { 00:22:37.706 "lvol_store_uuid": "cfbf7c05-ab70-41fc-9b4c-8633e89a3b56", 00:22:37.706 "base_bdev": "nvme0n1", 00:22:37.706 "thin_provision": true, 00:22:37.706 "num_allocated_clusters": 0, 00:22:37.706 "snapshot": false, 00:22:37.706 "clone": false, 00:22:37.706 "esnap_clone": false 00:22:37.706 } 00:22:37.706 } 00:22:37.706 } 00:22:37.706 ]' 00:22:37.706 01:35:33 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:22:37.706 01:35:33 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:22:37.706 01:35:33 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:22:37.706 01:35:33 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=26476544 00:22:37.706 01:35:33 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:22:37.706 01:35:33 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 103424 00:22:37.706 01:35:33 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:22:37.706 01:35:33 ftl.ftl_restore_fast -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 84af244f-3f0e-4b66-9fff-34d11b73376c --l2p_dram_limit 10' 00:22:37.706 01:35:33 ftl.ftl_restore_fast -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:22:37.706 01:35:33 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:22:37.706 01:35:33 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:22:37.706 01:35:33 ftl.ftl_restore_fast -- ftl/restore.sh@54 -- # '[' 1 -eq 1 ']' 00:22:37.706 01:35:33 ftl.ftl_restore_fast -- ftl/restore.sh@55 -- # ftl_construct_args+=' --fast-shutdown' 00:22:37.706 01:35:33 ftl.ftl_restore_fast -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 84af244f-3f0e-4b66-9fff-34d11b73376c --l2p_dram_limit 10 -c nvc0n1p0 --fast-shutdown 00:22:37.966 [2024-09-28 01:35:33.674637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:37.966 [2024-09-28 01:35:33.674681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:37.966 [2024-09-28 01:35:33.674694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:37.966 [2024-09-28 01:35:33.674701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.966 [2024-09-28 01:35:33.674746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:37.966 [2024-09-28 01:35:33.674754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:37.966 [2024-09-28 01:35:33.674762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:22:37.966 [2024-09-28 01:35:33.674771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.966 [2024-09-28 01:35:33.674793] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:37.966 [2024-09-28 01:35:33.675391] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:37.966 [2024-09-28 01:35:33.675409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:37.966 [2024-09-28 01:35:33.675416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:37.966 [2024-09-28 01:35:33.675424] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.622 ms 00:22:37.966 [2024-09-28 01:35:33.675431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.966 [2024-09-28 01:35:33.675457] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID a25e5fd9-192f-4882-b55b-cf92b921114e 00:22:37.966 [2024-09-28 01:35:33.676405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:37.966 [2024-09-28 01:35:33.676429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:22:37.966 [2024-09-28 01:35:33.676436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:22:37.966 [2024-09-28 01:35:33.676444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.966 [2024-09-28 01:35:33.681100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:37.966 [2024-09-28 01:35:33.681130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:37.966 [2024-09-28 01:35:33.681138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.598 ms 00:22:37.966 [2024-09-28 01:35:33.681144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.966 [2024-09-28 01:35:33.681225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:37.966 [2024-09-28 01:35:33.681234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:37.966 [2024-09-28 01:35:33.681254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:22:37.966 [2024-09-28 01:35:33.681266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.966 [2024-09-28 01:35:33.681297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:37.966 [2024-09-28 01:35:33.681307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:37.966 [2024-09-28 01:35:33.681313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:22:37.966 [2024-09-28 01:35:33.681319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.966 [2024-09-28 01:35:33.681336] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:37.966 [2024-09-28 01:35:33.684167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:37.966 [2024-09-28 01:35:33.684197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:37.966 [2024-09-28 01:35:33.684207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.833 ms 00:22:37.966 [2024-09-28 01:35:33.684213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.966 [2024-09-28 01:35:33.684243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:37.966 [2024-09-28 01:35:33.684249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:37.966 [2024-09-28 01:35:33.684258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:22:37.966 [2024-09-28 01:35:33.684265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.966 [2024-09-28 01:35:33.684290] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:22:37.966 [2024-09-28 01:35:33.684396] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:37.966 [2024-09-28 01:35:33.684408] 
upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:37.966 [2024-09-28 01:35:33.684417] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:37.966 [2024-09-28 01:35:33.684428] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:37.966 [2024-09-28 01:35:33.684435] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:37.966 [2024-09-28 01:35:33.684443] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:37.966 [2024-09-28 01:35:33.684449] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:37.966 [2024-09-28 01:35:33.684456] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:37.966 [2024-09-28 01:35:33.684461] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:37.966 [2024-09-28 01:35:33.684469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:37.966 [2024-09-28 01:35:33.684479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:37.966 [2024-09-28 01:35:33.684486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.180 ms 00:22:37.966 [2024-09-28 01:35:33.684492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.966 [2024-09-28 01:35:33.684557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:37.966 [2024-09-28 01:35:33.684566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:37.966 [2024-09-28 01:35:33.684573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:22:37.966 [2024-09-28 01:35:33.684578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.966 [2024-09-28 01:35:33.684652] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:37.966 [2024-09-28 01:35:33.684659] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:37.966 [2024-09-28 01:35:33.684666] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:37.966 [2024-09-28 01:35:33.684672] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:37.966 [2024-09-28 01:35:33.684679] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:37.966 [2024-09-28 01:35:33.684684] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:37.966 [2024-09-28 01:35:33.684690] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:37.966 [2024-09-28 01:35:33.684696] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:37.966 [2024-09-28 01:35:33.684702] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:37.966 [2024-09-28 01:35:33.684707] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:37.966 [2024-09-28 01:35:33.684713] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:37.966 [2024-09-28 01:35:33.684719] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:37.966 [2024-09-28 01:35:33.684725] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:37.966 [2024-09-28 01:35:33.684730] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:37.966 [2024-09-28 01:35:33.684737] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:37.966 [2024-09-28 01:35:33.684742] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:37.966 [2024-09-28 01:35:33.684749] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:37.966 [2024-09-28 01:35:33.684754] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:37.966 [2024-09-28 01:35:33.684761] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:37.966 [2024-09-28 01:35:33.684766] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:37.966 [2024-09-28 01:35:33.684772] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:37.966 [2024-09-28 01:35:33.684777] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:37.966 [2024-09-28 01:35:33.684785] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:37.966 [2024-09-28 01:35:33.684790] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:37.966 [2024-09-28 01:35:33.684797] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:37.966 [2024-09-28 01:35:33.684806] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:37.966 [2024-09-28 01:35:33.684813] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:37.966 [2024-09-28 01:35:33.684818] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:37.966 [2024-09-28 01:35:33.684833] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:37.966 [2024-09-28 01:35:33.684839] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:37.966 [2024-09-28 01:35:33.684845] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:37.966 [2024-09-28 01:35:33.684850] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:37.966 [2024-09-28 01:35:33.684858] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:37.966 [2024-09-28 01:35:33.684863] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:37.966 [2024-09-28 01:35:33.684870] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:37.966 [2024-09-28 01:35:33.684875] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:37.966 [2024-09-28 01:35:33.684881] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:37.966 [2024-09-28 01:35:33.684886] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:37.966 [2024-09-28 01:35:33.684893] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:37.966 [2024-09-28 01:35:33.684898] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:37.966 [2024-09-28 01:35:33.684904] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:37.966 [2024-09-28 01:35:33.684909] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:37.966 [2024-09-28 01:35:33.684915] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:37.966 [2024-09-28 01:35:33.684920] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:37.966 [2024-09-28 01:35:33.684928] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:37.966 [2024-09-28 01:35:33.684935] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 
00:22:37.966 [2024-09-28 01:35:33.684942] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:37.966 [2024-09-28 01:35:33.684948] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:37.966 [2024-09-28 01:35:33.684957] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:37.966 [2024-09-28 01:35:33.684962] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:37.966 [2024-09-28 01:35:33.684969] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:37.966 [2024-09-28 01:35:33.684974] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:37.966 [2024-09-28 01:35:33.684981] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:37.966 [2024-09-28 01:35:33.684989] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:37.966 [2024-09-28 01:35:33.684997] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:37.966 [2024-09-28 01:35:33.685003] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:37.966 [2024-09-28 01:35:33.685010] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:37.966 [2024-09-28 01:35:33.685017] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:37.966 [2024-09-28 01:35:33.685024] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:37.966 [2024-09-28 01:35:33.685029] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:37.966 [2024-09-28 01:35:33.685036] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:37.966 [2024-09-28 01:35:33.685042] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:37.966 [2024-09-28 01:35:33.685048] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:37.966 [2024-09-28 01:35:33.685054] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:37.966 [2024-09-28 01:35:33.685062] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:37.966 [2024-09-28 01:35:33.685068] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:37.966 [2024-09-28 01:35:33.685074] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:37.966 [2024-09-28 01:35:33.685080] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:37.966 [2024-09-28 01:35:33.685088] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 
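Note: the blk_offs/blk_sz values in the superblock metadata dump above are in FTL blocks, while the layout dump earlier reports the same regions in MiB. A quick cross-check, assuming the 4096-byte block size reported for the underlying bdev is also the FTL block size (the numbers bear this out):

    # Region type:0x2 is the L2P: 0x5000 blocks of 4 KiB = 80 MiB,
    # matching "Region l2p ... blocks: 80.00 MiB" in the layout dump.
    echo $(( 0x5000 * 4096 / 1024 / 1024 ))     # -> 80
    # The same 80 MiB follows from the parameters logged by ftl_layout_setup:
    # 20971520 L2P entries x 4-byte address size.
    echo $(( 20971520 * 4 / 1024 / 1024 ))      # -> 80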
00:22:37.966 [2024-09-28 01:35:33.685094] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:37.967 [2024-09-28 01:35:33.685101] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:37.967 [2024-09-28 01:35:33.685108] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:37.967 [2024-09-28 01:35:33.685115] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:37.967 [2024-09-28 01:35:33.685121] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:37.967 [2024-09-28 01:35:33.685128] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:37.967 [2024-09-28 01:35:33.685134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:37.967 [2024-09-28 01:35:33.685141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:37.967 [2024-09-28 01:35:33.685147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.536 ms 00:22:37.967 [2024-09-28 01:35:33.685153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.967 [2024-09-28 01:35:33.685355] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:22:37.967 [2024-09-28 01:35:33.685399] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:22:39.887 [2024-09-28 01:35:35.731448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.887 [2024-09-28 01:35:35.731681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:22:39.887 [2024-09-28 01:35:35.731749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2046.086 ms 00:22:39.887 [2024-09-28 01:35:35.731776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.887 [2024-09-28 01:35:35.757020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.887 [2024-09-28 01:35:35.757235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:39.887 [2024-09-28 01:35:35.757361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.025 ms 00:22:39.887 [2024-09-28 01:35:35.757404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.887 [2024-09-28 01:35:35.757647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.887 [2024-09-28 01:35:35.757768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:39.887 [2024-09-28 01:35:35.757847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:22:39.887 [2024-09-28 01:35:35.757936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.176 [2024-09-28 01:35:35.806594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.176 [2024-09-28 01:35:35.806845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:40.176 [2024-09-28 01:35:35.806884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.546 ms 00:22:40.176 [2024-09-28 01:35:35.806909] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.176 [2024-09-28 01:35:35.806976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.176 [2024-09-28 01:35:35.806999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:40.176 [2024-09-28 01:35:35.807017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:40.176 [2024-09-28 01:35:35.807046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.176 [2024-09-28 01:35:35.807586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.176 [2024-09-28 01:35:35.807623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:40.176 [2024-09-28 01:35:35.807641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.437 ms 00:22:40.176 [2024-09-28 01:35:35.807664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.176 [2024-09-28 01:35:35.807875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.176 [2024-09-28 01:35:35.807896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:40.176 [2024-09-28 01:35:35.807912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.172 ms 00:22:40.176 [2024-09-28 01:35:35.807933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.176 [2024-09-28 01:35:35.822018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.176 [2024-09-28 01:35:35.822056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:40.177 [2024-09-28 01:35:35.822067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.052 ms 00:22:40.177 [2024-09-28 01:35:35.822076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.177 [2024-09-28 01:35:35.833520] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:40.177 [2024-09-28 01:35:35.836141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.177 [2024-09-28 01:35:35.836171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:40.177 [2024-09-28 01:35:35.836187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.984 ms 00:22:40.177 [2024-09-28 01:35:35.836209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.177 [2024-09-28 01:35:35.942564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.177 [2024-09-28 01:35:35.942629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:22:40.177 [2024-09-28 01:35:35.942648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 106.313 ms 00:22:40.177 [2024-09-28 01:35:35.942657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.177 [2024-09-28 01:35:35.942821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.177 [2024-09-28 01:35:35.942831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:40.177 [2024-09-28 01:35:35.942843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.138 ms 00:22:40.177 [2024-09-28 01:35:35.942851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.177 [2024-09-28 01:35:35.965990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.177 [2024-09-28 01:35:35.966028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save 
initial band info metadata 00:22:40.177 [2024-09-28 01:35:35.966041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.093 ms 00:22:40.177 [2024-09-28 01:35:35.966049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.177 [2024-09-28 01:35:35.988713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.177 [2024-09-28 01:35:35.988859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:22:40.177 [2024-09-28 01:35:35.988881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.638 ms 00:22:40.177 [2024-09-28 01:35:35.988888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.177 [2024-09-28 01:35:35.989463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.177 [2024-09-28 01:35:35.989480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:40.177 [2024-09-28 01:35:35.989490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.551 ms 00:22:40.177 [2024-09-28 01:35:35.989498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.177 [2024-09-28 01:35:36.056245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.177 [2024-09-28 01:35:36.056283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:22:40.177 [2024-09-28 01:35:36.056299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 66.710 ms 00:22:40.177 [2024-09-28 01:35:36.056309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.177 [2024-09-28 01:35:36.080475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.177 [2024-09-28 01:35:36.080508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:22:40.177 [2024-09-28 01:35:36.080522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.094 ms 00:22:40.177 [2024-09-28 01:35:36.080530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.177 [2024-09-28 01:35:36.103278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.455 [2024-09-28 01:35:36.103398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:22:40.455 [2024-09-28 01:35:36.103418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.710 ms 00:22:40.455 [2024-09-28 01:35:36.103426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.456 [2024-09-28 01:35:36.126535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.456 [2024-09-28 01:35:36.126659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:40.456 [2024-09-28 01:35:36.126679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.075 ms 00:22:40.456 [2024-09-28 01:35:36.126686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.456 [2024-09-28 01:35:36.126722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.456 [2024-09-28 01:35:36.126732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:40.456 [2024-09-28 01:35:36.126746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:40.456 [2024-09-28 01:35:36.126755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.456 [2024-09-28 01:35:36.126830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.456 [2024-09-28 
01:35:36.126840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:40.456 [2024-09-28 01:35:36.126849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:22:40.456 [2024-09-28 01:35:36.126857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.456 [2024-09-28 01:35:36.127696] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2452.651 ms, result 0 00:22:40.456 { 00:22:40.456 "name": "ftl0", 00:22:40.456 "uuid": "a25e5fd9-192f-4882-b55b-cf92b921114e" 00:22:40.456 } 00:22:40.456 01:35:36 ftl.ftl_restore_fast -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:22:40.456 01:35:36 ftl.ftl_restore_fast -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:22:40.456 01:35:36 ftl.ftl_restore_fast -- ftl/restore.sh@63 -- # echo ']}' 00:22:40.456 01:35:36 ftl.ftl_restore_fast -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:22:40.738 [2024-09-28 01:35:36.543410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.738 [2024-09-28 01:35:36.543462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:40.738 [2024-09-28 01:35:36.543476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:40.738 [2024-09-28 01:35:36.543486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.738 [2024-09-28 01:35:36.543509] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:40.738 [2024-09-28 01:35:36.546123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.738 [2024-09-28 01:35:36.546297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:40.738 [2024-09-28 01:35:36.546326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.596 ms 00:22:40.738 [2024-09-28 01:35:36.546334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.738 [2024-09-28 01:35:36.546595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.738 [2024-09-28 01:35:36.546604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:40.738 [2024-09-28 01:35:36.546614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.230 ms 00:22:40.738 [2024-09-28 01:35:36.546622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.738 [2024-09-28 01:35:36.549858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.738 [2024-09-28 01:35:36.549950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:40.738 [2024-09-28 01:35:36.549966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.219 ms 00:22:40.738 [2024-09-28 01:35:36.549975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.738 [2024-09-28 01:35:36.556297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.738 [2024-09-28 01:35:36.556386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:40.738 [2024-09-28 01:35:36.556458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.298 ms 00:22:40.738 [2024-09-28 01:35:36.556481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.738 [2024-09-28 01:35:36.580101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:22:40.738 [2024-09-28 01:35:36.580233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:40.738 [2024-09-28 01:35:36.580293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.537 ms 00:22:40.738 [2024-09-28 01:35:36.580316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.738 [2024-09-28 01:35:36.594684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.738 [2024-09-28 01:35:36.594793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:40.738 [2024-09-28 01:35:36.594847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.271 ms 00:22:40.738 [2024-09-28 01:35:36.594870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.738 [2024-09-28 01:35:36.595025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.738 [2024-09-28 01:35:36.595103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:40.738 [2024-09-28 01:35:36.595125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:22:40.739 [2024-09-28 01:35:36.595176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.739 [2024-09-28 01:35:36.617708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.739 [2024-09-28 01:35:36.617819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:40.739 [2024-09-28 01:35:36.617870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.481 ms 00:22:40.739 [2024-09-28 01:35:36.617892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.739 [2024-09-28 01:35:36.640575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.739 [2024-09-28 01:35:36.640677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:40.998 [2024-09-28 01:35:36.640727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.618 ms 00:22:40.998 [2024-09-28 01:35:36.640749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.998 [2024-09-28 01:35:36.663104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.998 [2024-09-28 01:35:36.663211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:40.998 [2024-09-28 01:35:36.663262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.306 ms 00:22:40.998 [2024-09-28 01:35:36.663284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.998 [2024-09-28 01:35:36.685494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.998 [2024-09-28 01:35:36.685604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:40.998 [2024-09-28 01:35:36.685661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.130 ms 00:22:40.998 [2024-09-28 01:35:36.685683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.998 [2024-09-28 01:35:36.685737] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:40.998 [2024-09-28 01:35:36.685767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:40.998 [2024-09-28 01:35:36.685800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:40.998 [2024-09-28 01:35:36.685871] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:40.998 [2024-09-28 01:35:36.685906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:40.998 [2024-09-28 01:35:36.685936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:40.998 [2024-09-28 01:35:36.685966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:40.998 [2024-09-28 01:35:36.686047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:40.998 [2024-09-28 01:35:36.686085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:40.998 [2024-09-28 01:35:36.686114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:40.998 [2024-09-28 01:35:36.686144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:40.998 [2024-09-28 01:35:36.686223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:40.998 [2024-09-28 01:35:36.686257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:40.998 [2024-09-28 01:35:36.686353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:40.998 [2024-09-28 01:35:36.686384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:40.998 [2024-09-28 01:35:36.686442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:40.998 [2024-09-28 01:35:36.686526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:40.998 [2024-09-28 01:35:36.686586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:40.998 [2024-09-28 01:35:36.686649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:40.998 [2024-09-28 01:35:36.686681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:40.998 [2024-09-28 01:35:36.686740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:40.998 [2024-09-28 01:35:36.686823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:40.998 [2024-09-28 01:35:36.686857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:40.998 [2024-09-28 01:35:36.686887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:40.998 [2024-09-28 01:35:36.686970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:40.998 [2024-09-28 01:35:36.687000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:40.998 [2024-09-28 01:35:36.687030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:40.998 [2024-09-28 01:35:36.687088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:40.998 [2024-09-28 01:35:36.687132] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:40.998 [2024-09-28 01:35:36.687161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:40.998 [2024-09-28 01:35:36.687228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:40.999 [2024-09-28 01:35:36.687260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:40.999 [2024-09-28 01:35:36.687291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:40.999 [2024-09-28 01:35:36.687319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:40.999 [2024-09-28 01:35:36.687372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:40.999 [2024-09-28 01:35:36.687403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:40.999 [2024-09-28 01:35:36.687433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:40.999 [2024-09-28 01:35:36.687462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:40.999 [2024-09-28 01:35:36.687506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:40.999 [2024-09-28 01:35:36.687537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:40.999 [2024-09-28 01:35:36.687569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:40.999 [2024-09-28 01:35:36.687598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:40.999 [2024-09-28 01:35:36.687641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:40.999 [2024-09-28 01:35:36.687670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:40.999 [2024-09-28 01:35:36.687701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:40.999 [2024-09-28 01:35:36.687729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:40.999 [2024-09-28 01:35:36.687774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:40.999 [2024-09-28 01:35:36.687805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:40.999 [2024-09-28 01:35:36.687836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:40.999 [2024-09-28 01:35:36.687866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:40.999 [2024-09-28 01:35:36.687914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:40.999 [2024-09-28 01:35:36.687945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:40.999 [2024-09-28 01:35:36.687975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:40.999 
[2024-09-28 01:35:36.688004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:40.999 [2024-09-28 01:35:36.688046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:40.999 [2024-09-28 01:35:36.688076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:40.999 [2024-09-28 01:35:36.688107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:40.999 [2024-09-28 01:35:36.688136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:40.999 [2024-09-28 01:35:36.688178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:40.999 [2024-09-28 01:35:36.688217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:40.999 [2024-09-28 01:35:36.688247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:40.999 [2024-09-28 01:35:36.688298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:40.999 [2024-09-28 01:35:36.688347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:40.999 [2024-09-28 01:35:36.688376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:40.999 [2024-09-28 01:35:36.688406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:40.999 [2024-09-28 01:35:36.688469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:40.999 [2024-09-28 01:35:36.688507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:40.999 [2024-09-28 01:35:36.688536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:40.999 [2024-09-28 01:35:36.688567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:40.999 [2024-09-28 01:35:36.688603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:40.999 [2024-09-28 01:35:36.688642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:40.999 [2024-09-28 01:35:36.688671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:40.999 [2024-09-28 01:35:36.688705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:40.999 [2024-09-28 01:35:36.688787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:40.999 [2024-09-28 01:35:36.688831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:40.999 [2024-09-28 01:35:36.688861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:40.999 [2024-09-28 01:35:36.688872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:40.999 [2024-09-28 01:35:36.688881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 
state: free 00:22:40.999 [2024-09-28 01:35:36.688891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:40.999 [2024-09-28 01:35:36.688898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:40.999 [2024-09-28 01:35:36.688908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:40.999 [2024-09-28 01:35:36.688915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:40.999 [2024-09-28 01:35:36.688924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:40.999 [2024-09-28 01:35:36.688932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:40.999 [2024-09-28 01:35:36.688941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:40.999 [2024-09-28 01:35:36.688949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:40.999 [2024-09-28 01:35:36.688958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:40.999 [2024-09-28 01:35:36.688965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:40.999 [2024-09-28 01:35:36.688976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:40.999 [2024-09-28 01:35:36.688984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:40.999 [2024-09-28 01:35:36.688993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:40.999 [2024-09-28 01:35:36.689000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:40.999 [2024-09-28 01:35:36.689009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:40.999 [2024-09-28 01:35:36.689016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:40.999 [2024-09-28 01:35:36.689025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:40.999 [2024-09-28 01:35:36.689033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:40.999 [2024-09-28 01:35:36.689042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:40.999 [2024-09-28 01:35:36.689049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:40.999 [2024-09-28 01:35:36.689058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:40.999 [2024-09-28 01:35:36.689065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:40.999 [2024-09-28 01:35:36.689076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:40.999 [2024-09-28 01:35:36.689091] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:40.999 [2024-09-28 01:35:36.689103] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a25e5fd9-192f-4882-b55b-cf92b921114e 
00:22:40.999 [2024-09-28 01:35:36.689111] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:40.999 [2024-09-28 01:35:36.689122] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:40.999 [2024-09-28 01:35:36.689130] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:40.999 [2024-09-28 01:35:36.689139] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:40.999 [2024-09-28 01:35:36.689146] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:40.999 [2024-09-28 01:35:36.689154] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:40.999 [2024-09-28 01:35:36.689165] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:40.999 [2024-09-28 01:35:36.689173] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:40.999 [2024-09-28 01:35:36.689179] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:40.999 [2024-09-28 01:35:36.689189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.999 [2024-09-28 01:35:36.689207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:40.999 [2024-09-28 01:35:36.689217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.454 ms 00:22:40.999 [2024-09-28 01:35:36.689224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.999 [2024-09-28 01:35:36.701461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.999 [2024-09-28 01:35:36.701556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:40.999 [2024-09-28 01:35:36.701607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.199 ms 00:22:40.999 [2024-09-28 01:35:36.701630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.999 [2024-09-28 01:35:36.702033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.999 [2024-09-28 01:35:36.702111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:40.999 [2024-09-28 01:35:36.702162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.314 ms 00:22:40.999 [2024-09-28 01:35:36.702220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.999 [2024-09-28 01:35:36.738975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:41.000 [2024-09-28 01:35:36.739085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:41.000 [2024-09-28 01:35:36.739137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:41.000 [2024-09-28 01:35:36.739162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.000 [2024-09-28 01:35:36.739242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:41.000 [2024-09-28 01:35:36.739264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:41.000 [2024-09-28 01:35:36.739316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:41.000 [2024-09-28 01:35:36.739338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.000 [2024-09-28 01:35:36.739417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:41.000 [2024-09-28 01:35:36.739442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:41.000 [2024-09-28 01:35:36.739555] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:41.000 [2024-09-28 01:35:36.739574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.000 [2024-09-28 01:35:36.739643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:41.000 [2024-09-28 01:35:36.739667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:41.000 [2024-09-28 01:35:36.739688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:41.000 [2024-09-28 01:35:36.739707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.000 [2024-09-28 01:35:36.816218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:41.000 [2024-09-28 01:35:36.816349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:41.000 [2024-09-28 01:35:36.816400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:41.000 [2024-09-28 01:35:36.816422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.000 [2024-09-28 01:35:36.879093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:41.000 [2024-09-28 01:35:36.879274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:41.000 [2024-09-28 01:35:36.879327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:41.000 [2024-09-28 01:35:36.879349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.000 [2024-09-28 01:35:36.879446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:41.000 [2024-09-28 01:35:36.879471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:41.000 [2024-09-28 01:35:36.879493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:41.000 [2024-09-28 01:35:36.879511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.000 [2024-09-28 01:35:36.879571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:41.000 [2024-09-28 01:35:36.879642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:41.000 [2024-09-28 01:35:36.879668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:41.000 [2024-09-28 01:35:36.879687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.000 [2024-09-28 01:35:36.879790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:41.000 [2024-09-28 01:35:36.879814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:41.000 [2024-09-28 01:35:36.879835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:41.000 [2024-09-28 01:35:36.879854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.000 [2024-09-28 01:35:36.879934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:41.000 [2024-09-28 01:35:36.879987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:41.000 [2024-09-28 01:35:36.880037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:41.000 [2024-09-28 01:35:36.880047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.000 [2024-09-28 01:35:36.880085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:41.000 [2024-09-28 01:35:36.880094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Open cache bdev 00:22:41.000 [2024-09-28 01:35:36.880104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:41.000 [2024-09-28 01:35:36.880112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.000 [2024-09-28 01:35:36.880156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:41.000 [2024-09-28 01:35:36.880167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:41.000 [2024-09-28 01:35:36.880177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:41.000 [2024-09-28 01:35:36.880184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.000 [2024-09-28 01:35:36.880319] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 336.881 ms, result 0 00:22:41.000 true 00:22:41.000 01:35:36 ftl.ftl_restore_fast -- ftl/restore.sh@66 -- # killprocess 78455 00:22:41.000 01:35:36 ftl.ftl_restore_fast -- common/autotest_common.sh@950 -- # '[' -z 78455 ']' 00:22:41.000 01:35:36 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # kill -0 78455 00:22:41.000 01:35:36 ftl.ftl_restore_fast -- common/autotest_common.sh@955 -- # uname 00:22:41.000 01:35:36 ftl.ftl_restore_fast -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:41.000 01:35:36 ftl.ftl_restore_fast -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78455 00:22:41.261 killing process with pid 78455 00:22:41.261 01:35:36 ftl.ftl_restore_fast -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:41.261 01:35:36 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:41.261 01:35:36 ftl.ftl_restore_fast -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78455' 00:22:41.261 01:35:36 ftl.ftl_restore_fast -- common/autotest_common.sh@969 -- # kill 78455 00:22:41.261 01:35:36 ftl.ftl_restore_fast -- common/autotest_common.sh@974 -- # wait 78455 00:22:46.535 01:35:42 ftl.ftl_restore_fast -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:22:50.720 262144+0 records in 00:22:50.720 262144+0 records out 00:22:50.720 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.77216 s, 285 MB/s 00:22:50.720 01:35:46 ftl.ftl_restore_fast -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:22:52.621 01:35:48 ftl.ftl_restore_fast -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:52.621 [2024-09-28 01:35:48.208900] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
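Note: the 285 MB/s reported by the dd run above is in decimal megabytes, and the test file it produced is exactly 1 GiB (256Ki records of 4 KiB). Both figures check out:

    echo $(( 262144 * 4096 ))     # -> 1073741824 bytes, i.e. 1.0 GiB
    awk 'BEGIN { printf "%.0f MB/s\n", 1073741824 / 3.77216 / 1e6 }'     # -> 285 MB/s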
00:22:52.621 [2024-09-28 01:35:48.209017] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78661 ] 00:22:52.621 [2024-09-28 01:35:48.358275] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:52.621 [2024-09-28 01:35:48.535503] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:22:52.880 [2024-09-28 01:35:48.786043] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:52.880 [2024-09-28 01:35:48.786242] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:53.139 [2024-09-28 01:35:48.939468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.139 [2024-09-28 01:35:48.939515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:53.139 [2024-09-28 01:35:48.939528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:53.139 [2024-09-28 01:35:48.939540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.139 [2024-09-28 01:35:48.939582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.139 [2024-09-28 01:35:48.939591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:53.139 [2024-09-28 01:35:48.939599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:22:53.139 [2024-09-28 01:35:48.939607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.139 [2024-09-28 01:35:48.939623] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:53.139 [2024-09-28 01:35:48.940276] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:53.139 [2024-09-28 01:35:48.940292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.139 [2024-09-28 01:35:48.940300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:53.139 [2024-09-28 01:35:48.940308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.673 ms 00:22:53.139 [2024-09-28 01:35:48.940315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.139 [2024-09-28 01:35:48.941378] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:53.139 [2024-09-28 01:35:48.953616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.139 [2024-09-28 01:35:48.953648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:53.139 [2024-09-28 01:35:48.953660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.239 ms 00:22:53.139 [2024-09-28 01:35:48.953667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.139 [2024-09-28 01:35:48.953716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.139 [2024-09-28 01:35:48.953726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:53.139 [2024-09-28 01:35:48.953734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:22:53.139 [2024-09-28 01:35:48.953740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.139 [2024-09-28 01:35:48.958410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:22:53.139 [2024-09-28 01:35:48.958444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:53.139 [2024-09-28 01:35:48.958454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.607 ms 00:22:53.139 [2024-09-28 01:35:48.958461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.139 [2024-09-28 01:35:48.958527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.139 [2024-09-28 01:35:48.958536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:53.139 [2024-09-28 01:35:48.958544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:22:53.139 [2024-09-28 01:35:48.958551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.139 [2024-09-28 01:35:48.958593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.139 [2024-09-28 01:35:48.958602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:53.139 [2024-09-28 01:35:48.958610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:53.139 [2024-09-28 01:35:48.958617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.139 [2024-09-28 01:35:48.958637] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:53.139 [2024-09-28 01:35:48.961976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.139 [2024-09-28 01:35:48.962003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:53.139 [2024-09-28 01:35:48.962011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.344 ms 00:22:53.139 [2024-09-28 01:35:48.962019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.139 [2024-09-28 01:35:48.962046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.139 [2024-09-28 01:35:48.962054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:53.139 [2024-09-28 01:35:48.962062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:22:53.139 [2024-09-28 01:35:48.962069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.139 [2024-09-28 01:35:48.962090] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:53.139 [2024-09-28 01:35:48.962107] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:53.139 [2024-09-28 01:35:48.962140] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:53.139 [2024-09-28 01:35:48.962154] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:22:53.139 [2024-09-28 01:35:48.962270] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:53.139 [2024-09-28 01:35:48.962281] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:53.139 [2024-09-28 01:35:48.962291] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:53.139 [2024-09-28 01:35:48.962303] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:53.139 [2024-09-28 01:35:48.962312] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:53.139 [2024-09-28 01:35:48.962320] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:53.139 [2024-09-28 01:35:48.962327] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:53.139 [2024-09-28 01:35:48.962334] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:53.139 [2024-09-28 01:35:48.962341] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:53.139 [2024-09-28 01:35:48.962348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.139 [2024-09-28 01:35:48.962355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:53.139 [2024-09-28 01:35:48.962363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.260 ms 00:22:53.139 [2024-09-28 01:35:48.962369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.139 [2024-09-28 01:35:48.962451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.139 [2024-09-28 01:35:48.962461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:53.139 [2024-09-28 01:35:48.962469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:22:53.139 [2024-09-28 01:35:48.962476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.139 [2024-09-28 01:35:48.962587] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:53.139 [2024-09-28 01:35:48.962598] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:53.139 [2024-09-28 01:35:48.962606] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:53.139 [2024-09-28 01:35:48.962614] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:53.139 [2024-09-28 01:35:48.962621] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:53.139 [2024-09-28 01:35:48.962628] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:53.139 [2024-09-28 01:35:48.962634] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:53.140 [2024-09-28 01:35:48.962641] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:53.140 [2024-09-28 01:35:48.962647] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:53.140 [2024-09-28 01:35:48.962654] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:53.140 [2024-09-28 01:35:48.962661] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:53.140 [2024-09-28 01:35:48.962667] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:53.140 [2024-09-28 01:35:48.962673] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:53.140 [2024-09-28 01:35:48.962685] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:53.140 [2024-09-28 01:35:48.962691] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:53.140 [2024-09-28 01:35:48.962698] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:53.140 [2024-09-28 01:35:48.962705] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:53.140 [2024-09-28 01:35:48.962711] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:53.140 [2024-09-28 01:35:48.962717] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:53.140 [2024-09-28 01:35:48.962724] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:53.140 [2024-09-28 01:35:48.962730] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:53.140 [2024-09-28 01:35:48.962737] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:53.140 [2024-09-28 01:35:48.962743] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:53.140 [2024-09-28 01:35:48.962749] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:53.140 [2024-09-28 01:35:48.962755] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:53.140 [2024-09-28 01:35:48.962761] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:53.140 [2024-09-28 01:35:48.962768] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:53.140 [2024-09-28 01:35:48.962774] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:53.140 [2024-09-28 01:35:48.962780] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:53.140 [2024-09-28 01:35:48.962787] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:53.140 [2024-09-28 01:35:48.962793] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:53.140 [2024-09-28 01:35:48.962799] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:53.140 [2024-09-28 01:35:48.962805] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:53.140 [2024-09-28 01:35:48.962811] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:53.140 [2024-09-28 01:35:48.962817] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:53.140 [2024-09-28 01:35:48.962823] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:53.140 [2024-09-28 01:35:48.962830] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:53.140 [2024-09-28 01:35:48.962836] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:53.140 [2024-09-28 01:35:48.962843] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:53.140 [2024-09-28 01:35:48.962849] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:53.140 [2024-09-28 01:35:48.962855] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:53.140 [2024-09-28 01:35:48.962861] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:53.140 [2024-09-28 01:35:48.962867] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:53.140 [2024-09-28 01:35:48.962873] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:53.140 [2024-09-28 01:35:48.962881] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:53.140 [2024-09-28 01:35:48.962890] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:53.140 [2024-09-28 01:35:48.962897] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:53.140 [2024-09-28 01:35:48.962905] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:53.140 [2024-09-28 01:35:48.962912] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:53.140 [2024-09-28 01:35:48.962920] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:53.140 
[2024-09-28 01:35:48.962927] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:53.140 [2024-09-28 01:35:48.962933] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:53.140 [2024-09-28 01:35:48.962940] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:53.140 [2024-09-28 01:35:48.962947] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:53.140 [2024-09-28 01:35:48.962956] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:53.140 [2024-09-28 01:35:48.962963] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:53.140 [2024-09-28 01:35:48.962970] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:53.140 [2024-09-28 01:35:48.962977] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:53.140 [2024-09-28 01:35:48.962984] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:53.140 [2024-09-28 01:35:48.962991] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:53.140 [2024-09-28 01:35:48.962998] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:53.140 [2024-09-28 01:35:48.963005] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:53.140 [2024-09-28 01:35:48.963011] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:53.140 [2024-09-28 01:35:48.963018] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:53.140 [2024-09-28 01:35:48.963025] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:53.140 [2024-09-28 01:35:48.963031] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:53.140 [2024-09-28 01:35:48.963038] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:53.140 [2024-09-28 01:35:48.963045] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:53.140 [2024-09-28 01:35:48.963052] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:53.140 [2024-09-28 01:35:48.963059] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:53.140 [2024-09-28 01:35:48.963067] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:53.140 [2024-09-28 01:35:48.963074] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:22:53.140 [2024-09-28 01:35:48.963081] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:53.140 [2024-09-28 01:35:48.963088] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:53.140 [2024-09-28 01:35:48.963095] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:53.140 [2024-09-28 01:35:48.963102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.140 [2024-09-28 01:35:48.963110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:53.140 [2024-09-28 01:35:48.963117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.583 ms 00:22:53.140 [2024-09-28 01:35:48.963124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.140 [2024-09-28 01:35:48.999359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.140 [2024-09-28 01:35:48.999408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:53.140 [2024-09-28 01:35:48.999422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.189 ms 00:22:53.140 [2024-09-28 01:35:48.999431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.140 [2024-09-28 01:35:48.999539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.140 [2024-09-28 01:35:48.999551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:53.140 [2024-09-28 01:35:48.999561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:22:53.140 [2024-09-28 01:35:48.999570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.140 [2024-09-28 01:35:49.029906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.140 [2024-09-28 01:35:49.030037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:53.140 [2024-09-28 01:35:49.030057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.271 ms 00:22:53.140 [2024-09-28 01:35:49.030065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.140 [2024-09-28 01:35:49.030098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.140 [2024-09-28 01:35:49.030106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:53.140 [2024-09-28 01:35:49.030114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:53.140 [2024-09-28 01:35:49.030122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.140 [2024-09-28 01:35:49.030464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.140 [2024-09-28 01:35:49.030486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:53.140 [2024-09-28 01:35:49.030495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.297 ms 00:22:53.140 [2024-09-28 01:35:49.030506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.140 [2024-09-28 01:35:49.030626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.140 [2024-09-28 01:35:49.030638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:53.140 [2024-09-28 01:35:49.030646] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:22:53.140 [2024-09-28 01:35:49.030654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.140 [2024-09-28 01:35:49.042906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.140 [2024-09-28 01:35:49.042936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:53.140 [2024-09-28 01:35:49.042945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.234 ms 00:22:53.140 [2024-09-28 01:35:49.042953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.141 [2024-09-28 01:35:49.055093] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:22:53.141 [2024-09-28 01:35:49.055124] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:53.141 [2024-09-28 01:35:49.055135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.141 [2024-09-28 01:35:49.055143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:53.141 [2024-09-28 01:35:49.055151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.075 ms 00:22:53.141 [2024-09-28 01:35:49.055158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.399 [2024-09-28 01:35:49.079891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.399 [2024-09-28 01:35:49.079944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:53.399 [2024-09-28 01:35:49.079956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.683 ms 00:22:53.399 [2024-09-28 01:35:49.079963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.399 [2024-09-28 01:35:49.091436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.399 [2024-09-28 01:35:49.091559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:53.399 [2024-09-28 01:35:49.091575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.429 ms 00:22:53.399 [2024-09-28 01:35:49.091583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.399 [2024-09-28 01:35:49.102906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.399 [2024-09-28 01:35:49.103013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:53.399 [2024-09-28 01:35:49.103027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.295 ms 00:22:53.399 [2024-09-28 01:35:49.103034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.399 [2024-09-28 01:35:49.103639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.399 [2024-09-28 01:35:49.103658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:53.399 [2024-09-28 01:35:49.103667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.530 ms 00:22:53.399 [2024-09-28 01:35:49.103674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.399 [2024-09-28 01:35:49.157549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.399 [2024-09-28 01:35:49.157730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:53.399 [2024-09-28 01:35:49.157748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 53.858 ms 00:22:53.399 [2024-09-28 01:35:49.157756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.399 [2024-09-28 01:35:49.168046] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:53.399 [2024-09-28 01:35:49.170251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.399 [2024-09-28 01:35:49.170279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:53.399 [2024-09-28 01:35:49.170290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.461 ms 00:22:53.400 [2024-09-28 01:35:49.170299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.400 [2024-09-28 01:35:49.170387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.400 [2024-09-28 01:35:49.170399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:53.400 [2024-09-28 01:35:49.170409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:22:53.400 [2024-09-28 01:35:49.170418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.400 [2024-09-28 01:35:49.170480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.400 [2024-09-28 01:35:49.170490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:53.400 [2024-09-28 01:35:49.170498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:22:53.400 [2024-09-28 01:35:49.170505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.400 [2024-09-28 01:35:49.170523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.400 [2024-09-28 01:35:49.170534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:53.400 [2024-09-28 01:35:49.170541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:53.400 [2024-09-28 01:35:49.170548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.400 [2024-09-28 01:35:49.170576] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:53.400 [2024-09-28 01:35:49.170585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.400 [2024-09-28 01:35:49.170592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:53.400 [2024-09-28 01:35:49.170600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:53.400 [2024-09-28 01:35:49.170609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.400 [2024-09-28 01:35:49.193779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.400 [2024-09-28 01:35:49.193811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:53.400 [2024-09-28 01:35:49.193822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.152 ms 00:22:53.400 [2024-09-28 01:35:49.193829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.400 [2024-09-28 01:35:49.193898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.400 [2024-09-28 01:35:49.193908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:53.400 [2024-09-28 01:35:49.193916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:22:53.400 [2024-09-28 01:35:49.193923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
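Note: every FTL management step in this startup sequence is logged by mngt/ftl_mngt.c as an Action / name / duration / status quadruplet. A hedged helper (not part of the test; ftl.log is a hypothetical capture file) for folding those trace_step notices into a per-step timing table, assuming one notice per line as SPDK emits them before CI log wrapping:

  # pair each "name:" notice (line 428) with its "duration:" notice (line 430)
  awk -F'name: |duration: ' \
      '/ 428:trace_step/ {n=$2} / 430:trace_step/ {printf "%-40s %s\n", n, $2}' ftl.log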
00:22:53.400 [2024-09-28 01:35:49.195096] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 255.223 ms, result 0 00:23:33.382  Copying: 47/1024 [MB] (47 MBps) Copying: 96/1024 [MB] (49 MBps) Copying: 143/1024 [MB] (46 MBps) Copying: 169/1024 [MB] (26 MBps) Copying: 191/1024 [MB] (21 MBps) Copying: 216/1024 [MB] (25 MBps) Copying: 240/1024 [MB] (23 MBps) Copying: 266/1024 [MB] (25 MBps) Copying: 291/1024 [MB] (25 MBps) Copying: 306/1024 [MB] (14 MBps) Copying: 320/1024 [MB] (14 MBps) Copying: 333/1024 [MB] (13 MBps) Copying: 344/1024 [MB] (10 MBps) Copying: 356/1024 [MB] (12 MBps) Copying: 391/1024 [MB] (35 MBps) Copying: 407/1024 [MB] (16 MBps) Copying: 419/1024 [MB] (12 MBps) Copying: 436/1024 [MB] (16 MBps) Copying: 446/1024 [MB] (10 MBps) Copying: 467/1024 [MB] (21 MBps) Copying: 481/1024 [MB] (14 MBps) Copying: 532/1024 [MB] (50 MBps) Copying: 573/1024 [MB] (40 MBps) Copying: 592/1024 [MB] (18 MBps) Copying: 604/1024 [MB] (12 MBps) Copying: 616/1024 [MB] (11 MBps) Copying: 635/1024 [MB] (19 MBps) Copying: 662/1024 [MB] (26 MBps) Copying: 683/1024 [MB] (20 MBps) Copying: 695/1024 [MB] (11 MBps) Copying: 713/1024 [MB] (18 MBps) Copying: 755/1024 [MB] (41 MBps) Copying: 802/1024 [MB] (47 MBps) Copying: 849/1024 [MB] (47 MBps) Copying: 876/1024 [MB] (27 MBps) Copying: 898/1024 [MB] (21 MBps) Copying: 920/1024 [MB] (21 MBps) Copying: 940/1024 [MB] (20 MBps) Copying: 987/1024 [MB] (46 MBps) Copying: 1024/1024 [MB] (average 25 MBps)[2024-09-28 01:36:28.990590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.382 [2024-09-28 01:36:28.990636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:33.382 [2024-09-28 01:36:28.990649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:33.382 [2024-09-28 01:36:28.990656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.382 [2024-09-28 01:36:28.990680] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:33.382 [2024-09-28 01:36:28.993274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.382 [2024-09-28 01:36:28.993304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:33.382 [2024-09-28 01:36:28.993315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.580 ms 00:23:33.382 [2024-09-28 01:36:28.993324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.382 [2024-09-28 01:36:28.994659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.382 [2024-09-28 01:36:28.994688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:33.382 [2024-09-28 01:36:28.994697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.316 ms 00:23:33.382 [2024-09-28 01:36:28.994704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.382 [2024-09-28 01:36:28.994732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.382 [2024-09-28 01:36:28.994740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:23:33.382 [2024-09-28 01:36:28.994748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:33.382 [2024-09-28 01:36:28.994755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.382 [2024-09-28 01:36:28.994795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:23:33.382 [2024-09-28 01:36:28.994803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:23:33.382 [2024-09-28 01:36:28.994811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:23:33.382 [2024-09-28 01:36:28.994818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.382 [2024-09-28 01:36:28.994830] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:33.382 [2024-09-28 01:36:28.994842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:33.382 [2024-09-28 01:36:28.994853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:33.382 [2024-09-28 01:36:28.994860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:33.382 [2024-09-28 01:36:28.994868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:33.382 [2024-09-28 01:36:28.994875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:33.382 [2024-09-28 01:36:28.994882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:33.382 [2024-09-28 01:36:28.994889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:33.382 [2024-09-28 01:36:28.994897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:33.382 [2024-09-28 01:36:28.994905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:33.382 [2024-09-28 01:36:28.994912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:33.382 [2024-09-28 01:36:28.994919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:33.382 [2024-09-28 01:36:28.994926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:33.382 [2024-09-28 01:36:28.994933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:33.382 [2024-09-28 01:36:28.994940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:33.382 [2024-09-28 01:36:28.994947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:33.382 [2024-09-28 01:36:28.994954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:33.382 [2024-09-28 01:36:28.994961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:33.382 [2024-09-28 01:36:28.994968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:33.382 [2024-09-28 01:36:28.994975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:33.382 [2024-09-28 01:36:28.994982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:33.382 [2024-09-28 01:36:28.994990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:33.382 [2024-09-28 01:36:28.994997] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:33.382 [2024-09-28 01:36:28.995004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:33.382 [2024-09-28 01:36:28.995011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:33.382 [2024-09-28 01:36:28.995020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:33.382 [2024-09-28 01:36:28.995027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:33.382 [2024-09-28 01:36:28.995035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:33.382 [2024-09-28 01:36:28.995042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:33.382 [2024-09-28 01:36:28.995049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:33.382 [2024-09-28 01:36:28.995056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:33.382 [2024-09-28 01:36:28.995063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:33.382 [2024-09-28 01:36:28.995070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:33.382 [2024-09-28 01:36:28.995078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:33.382 [2024-09-28 01:36:28.995085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:33.382 [2024-09-28 01:36:28.995092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:33.382 [2024-09-28 01:36:28.995106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:33.382 [2024-09-28 01:36:28.995113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:33.382 [2024-09-28 01:36:28.995120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:33.382 [2024-09-28 01:36:28.995126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:33.382 [2024-09-28 01:36:28.995134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:33.382 [2024-09-28 01:36:28.995141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:33.382 [2024-09-28 01:36:28.995148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:33.382 [2024-09-28 01:36:28.995155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:33.382 [2024-09-28 01:36:28.995162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:33.382 [2024-09-28 01:36:28.995169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:33.382 [2024-09-28 01:36:28.995176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:33.382 [2024-09-28 
01:36:28.995183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:33.382 [2024-09-28 01:36:28.995201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:33.382 [2024-09-28 01:36:28.995208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:33.382 [2024-09-28 01:36:28.995216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:33.382 [2024-09-28 01:36:28.995224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:33.382 [2024-09-28 01:36:28.995231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:33.382 [2024-09-28 01:36:28.995239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:33.382 [2024-09-28 01:36:28.995246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:33.382 [2024-09-28 01:36:28.995253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:33.382 [2024-09-28 01:36:28.995260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:33.382 [2024-09-28 01:36:28.995268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:33.382 [2024-09-28 01:36:28.995275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:33.382 [2024-09-28 01:36:28.995285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:33.382 [2024-09-28 01:36:28.995293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:33.382 [2024-09-28 01:36:28.995301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:33.382 [2024-09-28 01:36:28.995316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:33.382 [2024-09-28 01:36:28.995324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:33.382 [2024-09-28 01:36:28.995331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:33.382 [2024-09-28 01:36:28.995338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:33.382 [2024-09-28 01:36:28.995345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:33.382 [2024-09-28 01:36:28.995352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:33.383 [2024-09-28 01:36:28.995359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:33.383 [2024-09-28 01:36:28.995366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:33.383 [2024-09-28 01:36:28.995373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:33.383 [2024-09-28 01:36:28.995380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 
00:23:33.383 [2024-09-28 01:36:28.995387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:33.383 [2024-09-28 01:36:28.995394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:33.383 [2024-09-28 01:36:28.995401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:33.383 [2024-09-28 01:36:28.995408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:33.383 [2024-09-28 01:36:28.995415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:33.383 [2024-09-28 01:36:28.995422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:33.383 [2024-09-28 01:36:28.995430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:33.383 [2024-09-28 01:36:28.995437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:33.383 [2024-09-28 01:36:28.995444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:33.383 [2024-09-28 01:36:28.995451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:33.383 [2024-09-28 01:36:28.995458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:33.383 [2024-09-28 01:36:28.995464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:33.383 [2024-09-28 01:36:28.995472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:33.383 [2024-09-28 01:36:28.995479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:33.383 [2024-09-28 01:36:28.995486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:33.383 [2024-09-28 01:36:28.995493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:33.383 [2024-09-28 01:36:28.995500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:33.383 [2024-09-28 01:36:28.995511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:33.383 [2024-09-28 01:36:28.995518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:33.383 [2024-09-28 01:36:28.995526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:33.383 [2024-09-28 01:36:28.995533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:33.383 [2024-09-28 01:36:28.995539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:33.383 [2024-09-28 01:36:28.995546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:33.383 [2024-09-28 01:36:28.995553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:33.383 [2024-09-28 01:36:28.995560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 
wr_cnt: 0 state: free 00:23:33.383 [2024-09-28 01:36:28.995567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:33.383 [2024-09-28 01:36:28.995574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:33.383 [2024-09-28 01:36:28.995581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:33.383 [2024-09-28 01:36:28.995588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:33.383 [2024-09-28 01:36:28.995605] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:33.383 [2024-09-28 01:36:28.995612] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a25e5fd9-192f-4882-b55b-cf92b921114e 00:23:33.383 [2024-09-28 01:36:28.995619] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:33.383 [2024-09-28 01:36:28.995626] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:23:33.383 [2024-09-28 01:36:28.995633] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:33.383 [2024-09-28 01:36:28.995639] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:33.383 [2024-09-28 01:36:28.995646] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:33.383 [2024-09-28 01:36:28.995653] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:33.383 [2024-09-28 01:36:28.995660] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:33.383 [2024-09-28 01:36:28.995666] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:33.383 [2024-09-28 01:36:28.995672] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:33.383 [2024-09-28 01:36:28.995679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.383 [2024-09-28 01:36:28.995686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:33.383 [2024-09-28 01:36:28.995693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.849 ms 00:23:33.383 [2024-09-28 01:36:28.995701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.383 [2024-09-28 01:36:29.007791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.383 [2024-09-28 01:36:29.007820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:33.383 [2024-09-28 01:36:29.007830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.076 ms 00:23:33.383 [2024-09-28 01:36:29.007837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.383 [2024-09-28 01:36:29.008171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.383 [2024-09-28 01:36:29.008182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:33.383 [2024-09-28 01:36:29.008210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.316 ms 00:23:33.383 [2024-09-28 01:36:29.008217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.383 [2024-09-28 01:36:29.035949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:33.383 [2024-09-28 01:36:29.035980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:33.383 [2024-09-28 01:36:29.035989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:23:33.383 [2024-09-28 01:36:29.035996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.383 [2024-09-28 01:36:29.036045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:33.383 [2024-09-28 01:36:29.036053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:33.383 [2024-09-28 01:36:29.036065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:33.383 [2024-09-28 01:36:29.036072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.383 [2024-09-28 01:36:29.036111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:33.383 [2024-09-28 01:36:29.036120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:33.383 [2024-09-28 01:36:29.036129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:33.383 [2024-09-28 01:36:29.036139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.383 [2024-09-28 01:36:29.036152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:33.383 [2024-09-28 01:36:29.036160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:33.383 [2024-09-28 01:36:29.036167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:33.383 [2024-09-28 01:36:29.036176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.383 [2024-09-28 01:36:29.110931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:33.383 [2024-09-28 01:36:29.110972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:33.383 [2024-09-28 01:36:29.110983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:33.383 [2024-09-28 01:36:29.110990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.383 [2024-09-28 01:36:29.172077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:33.383 [2024-09-28 01:36:29.172284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:33.383 [2024-09-28 01:36:29.172301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:33.383 [2024-09-28 01:36:29.172314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.383 [2024-09-28 01:36:29.172385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:33.383 [2024-09-28 01:36:29.172394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:33.383 [2024-09-28 01:36:29.172402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:33.383 [2024-09-28 01:36:29.172409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.383 [2024-09-28 01:36:29.172441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:33.383 [2024-09-28 01:36:29.172450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:33.383 [2024-09-28 01:36:29.172458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:33.383 [2024-09-28 01:36:29.172465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.383 [2024-09-28 01:36:29.172540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:33.383 [2024-09-28 01:36:29.172550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:33.383 [2024-09-28 
01:36:29.172557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:33.383 [2024-09-28 01:36:29.172564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.383 [2024-09-28 01:36:29.172586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:33.383 [2024-09-28 01:36:29.172594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:33.383 [2024-09-28 01:36:29.172603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:33.383 [2024-09-28 01:36:29.172610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.383 [2024-09-28 01:36:29.172648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:33.383 [2024-09-28 01:36:29.172657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:33.383 [2024-09-28 01:36:29.172665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:33.383 [2024-09-28 01:36:29.172672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.383 [2024-09-28 01:36:29.172710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:33.383 [2024-09-28 01:36:29.172719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:33.383 [2024-09-28 01:36:29.172727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:33.383 [2024-09-28 01:36:29.172734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.383 [2024-09-28 01:36:29.172845] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 182.223 ms, result 0 00:23:34.318 00:23:34.318 00:23:34.318 01:36:30 ftl.ftl_restore_fast -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:23:34.576 [2024-09-28 01:36:30.310513] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
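Note: 'FTL fast shutdown' finishing with result 0 means the dirty-shutdown path persisted the SHM state in 182.223 ms, and restore.sh@74 (traced above) now reads the 262144 blocks back out of the restored ftl0 device. A minimal sketch of that read-back plus the checksum compare that decides pass/fail (the md5sum -c step against testfile.md5 is an assumption, mirroring the md5sum taken before shutdown):

  # read the data back out of the restored FTL device (restore.sh@74)
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
      --ib=ftl0 \
      --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile \
      --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json \
      --count=262144
  # hypothetical verification: compare against the checksum taken before shutdown
  md5sum -c testfile.md5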
00:23:34.576 [2024-09-28 01:36:30.310633] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79091 ] 00:23:34.576 [2024-09-28 01:36:30.461774] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:34.835 [2024-09-28 01:36:30.634926] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:23:35.094 [2024-09-28 01:36:30.883146] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:35.094 [2024-09-28 01:36:30.883223] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:35.354 [2024-09-28 01:36:31.036266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.354 [2024-09-28 01:36:31.036309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:35.354 [2024-09-28 01:36:31.036322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:35.354 [2024-09-28 01:36:31.036333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.354 [2024-09-28 01:36:31.036375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.354 [2024-09-28 01:36:31.036386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:35.354 [2024-09-28 01:36:31.036394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:23:35.354 [2024-09-28 01:36:31.036402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.354 [2024-09-28 01:36:31.036418] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:35.354 [2024-09-28 01:36:31.037106] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:35.354 [2024-09-28 01:36:31.037122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.354 [2024-09-28 01:36:31.037129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:35.354 [2024-09-28 01:36:31.037137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.708 ms 00:23:35.354 [2024-09-28 01:36:31.037144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.354 [2024-09-28 01:36:31.037417] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:23:35.354 [2024-09-28 01:36:31.037438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.354 [2024-09-28 01:36:31.037447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:35.354 [2024-09-28 01:36:31.037455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:23:35.354 [2024-09-28 01:36:31.037462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.354 [2024-09-28 01:36:31.037496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.354 [2024-09-28 01:36:31.037505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:35.354 [2024-09-28 01:36:31.037513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:23:35.354 [2024-09-28 01:36:31.037522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.354 [2024-09-28 01:36:31.037769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:23:35.354 [2024-09-28 01:36:31.037779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:35.354 [2024-09-28 01:36:31.037786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.217 ms 00:23:35.354 [2024-09-28 01:36:31.037793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.354 [2024-09-28 01:36:31.037851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.354 [2024-09-28 01:36:31.037866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:35.354 [2024-09-28 01:36:31.037876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:23:35.354 [2024-09-28 01:36:31.037883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.354 [2024-09-28 01:36:31.037902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.354 [2024-09-28 01:36:31.037910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:35.354 [2024-09-28 01:36:31.037918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:35.354 [2024-09-28 01:36:31.037925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.354 [2024-09-28 01:36:31.037941] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:35.354 [2024-09-28 01:36:31.041417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.354 [2024-09-28 01:36:31.041445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:35.354 [2024-09-28 01:36:31.041453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.479 ms 00:23:35.354 [2024-09-28 01:36:31.041461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.354 [2024-09-28 01:36:31.041491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.354 [2024-09-28 01:36:31.041499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:35.354 [2024-09-28 01:36:31.041509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:23:35.354 [2024-09-28 01:36:31.041516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.354 [2024-09-28 01:36:31.041552] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:35.354 [2024-09-28 01:36:31.041571] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:35.354 [2024-09-28 01:36:31.041604] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:35.354 [2024-09-28 01:36:31.041618] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:35.354 [2024-09-28 01:36:31.041719] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:35.354 [2024-09-28 01:36:31.041731] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:35.354 [2024-09-28 01:36:31.041741] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:35.354 [2024-09-28 01:36:31.041750] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:35.354 [2024-09-28 01:36:31.041759] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:35.354 [2024-09-28 01:36:31.041767] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:35.354 [2024-09-28 01:36:31.041774] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:35.355 [2024-09-28 01:36:31.041780] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:35.355 [2024-09-28 01:36:31.041787] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:35.355 [2024-09-28 01:36:31.041794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.355 [2024-09-28 01:36:31.041801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:35.355 [2024-09-28 01:36:31.041809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.244 ms 00:23:35.355 [2024-09-28 01:36:31.041818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.355 [2024-09-28 01:36:31.041901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.355 [2024-09-28 01:36:31.041909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:35.355 [2024-09-28 01:36:31.041916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:23:35.355 [2024-09-28 01:36:31.041923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.355 [2024-09-28 01:36:31.042022] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:35.355 [2024-09-28 01:36:31.042031] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:35.355 [2024-09-28 01:36:31.042040] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:35.355 [2024-09-28 01:36:31.042047] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:35.355 [2024-09-28 01:36:31.042056] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:35.355 [2024-09-28 01:36:31.042062] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:35.355 [2024-09-28 01:36:31.042069] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:35.355 [2024-09-28 01:36:31.042077] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:35.355 [2024-09-28 01:36:31.042084] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:35.355 [2024-09-28 01:36:31.042091] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:35.355 [2024-09-28 01:36:31.042097] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:35.355 [2024-09-28 01:36:31.042104] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:35.355 [2024-09-28 01:36:31.042111] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:35.355 [2024-09-28 01:36:31.042117] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:35.355 [2024-09-28 01:36:31.042124] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:35.355 [2024-09-28 01:36:31.042135] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:35.355 [2024-09-28 01:36:31.042141] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:35.355 [2024-09-28 01:36:31.042148] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:35.355 [2024-09-28 01:36:31.042154] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:35.355 [2024-09-28 01:36:31.042161] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:35.355 [2024-09-28 01:36:31.042167] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:35.355 [2024-09-28 01:36:31.042173] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:35.355 [2024-09-28 01:36:31.042180] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:35.355 [2024-09-28 01:36:31.042187] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:35.355 [2024-09-28 01:36:31.042203] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:35.355 [2024-09-28 01:36:31.042212] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:35.355 [2024-09-28 01:36:31.042219] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:35.355 [2024-09-28 01:36:31.042226] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:35.355 [2024-09-28 01:36:31.042232] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:35.355 [2024-09-28 01:36:31.042239] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:35.355 [2024-09-28 01:36:31.042245] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:35.355 [2024-09-28 01:36:31.042252] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:35.355 [2024-09-28 01:36:31.042259] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:35.355 [2024-09-28 01:36:31.042265] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:35.355 [2024-09-28 01:36:31.042272] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:35.355 [2024-09-28 01:36:31.042279] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:35.355 [2024-09-28 01:36:31.042285] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:35.355 [2024-09-28 01:36:31.042292] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:35.355 [2024-09-28 01:36:31.042298] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:35.355 [2024-09-28 01:36:31.042305] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:35.355 [2024-09-28 01:36:31.042312] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:35.355 [2024-09-28 01:36:31.042318] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:35.355 [2024-09-28 01:36:31.042325] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:35.355 [2024-09-28 01:36:31.042332] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:35.355 [2024-09-28 01:36:31.042339] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:35.355 [2024-09-28 01:36:31.042346] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:35.355 [2024-09-28 01:36:31.042354] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:35.355 [2024-09-28 01:36:31.042361] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:35.355 [2024-09-28 01:36:31.042367] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:35.355 [2024-09-28 01:36:31.042374] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:35.355 
[2024-09-28 01:36:31.042381] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:35.355 [2024-09-28 01:36:31.042387] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:35.355 [2024-09-28 01:36:31.042393] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:35.355 [2024-09-28 01:36:31.042401] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:35.355 [2024-09-28 01:36:31.042409] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:35.355 [2024-09-28 01:36:31.042417] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:35.355 [2024-09-28 01:36:31.042424] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:35.355 [2024-09-28 01:36:31.042432] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:35.355 [2024-09-28 01:36:31.042439] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:35.355 [2024-09-28 01:36:31.042445] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:35.355 [2024-09-28 01:36:31.042452] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:35.355 [2024-09-28 01:36:31.042459] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:35.355 [2024-09-28 01:36:31.042466] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:35.355 [2024-09-28 01:36:31.042473] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:35.355 [2024-09-28 01:36:31.042479] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:35.355 [2024-09-28 01:36:31.042486] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:35.355 [2024-09-28 01:36:31.042493] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:35.355 [2024-09-28 01:36:31.042500] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:35.355 [2024-09-28 01:36:31.042507] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:35.355 [2024-09-28 01:36:31.042515] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:35.355 [2024-09-28 01:36:31.042523] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:35.355 [2024-09-28 01:36:31.042531] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:23:35.355 [2024-09-28 01:36:31.042539] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:35.355 [2024-09-28 01:36:31.042546] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:35.355 [2024-09-28 01:36:31.042553] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:35.355 [2024-09-28 01:36:31.042560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.355 [2024-09-28 01:36:31.042567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:35.355 [2024-09-28 01:36:31.042576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.607 ms 00:23:35.355 [2024-09-28 01:36:31.042583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.355 [2024-09-28 01:36:31.081163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.355 [2024-09-28 01:36:31.081212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:35.355 [2024-09-28 01:36:31.081227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.541 ms 00:23:35.355 [2024-09-28 01:36:31.081235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.355 [2024-09-28 01:36:31.081317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.355 [2024-09-28 01:36:31.081327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:35.355 [2024-09-28 01:36:31.081338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:23:35.355 [2024-09-28 01:36:31.081345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.355 [2024-09-28 01:36:31.111118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.356 [2024-09-28 01:36:31.111148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:35.356 [2024-09-28 01:36:31.111159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.721 ms 00:23:35.356 [2024-09-28 01:36:31.111166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.356 [2024-09-28 01:36:31.111209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.356 [2024-09-28 01:36:31.111218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:35.356 [2024-09-28 01:36:31.111227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:23:35.356 [2024-09-28 01:36:31.111234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.356 [2024-09-28 01:36:31.111314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.356 [2024-09-28 01:36:31.111328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:35.356 [2024-09-28 01:36:31.111336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:23:35.356 [2024-09-28 01:36:31.111343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.356 [2024-09-28 01:36:31.111450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.356 [2024-09-28 01:36:31.111500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:35.356 [2024-09-28 01:36:31.111510] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:23:35.356 [2024-09-28 01:36:31.111517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.356 [2024-09-28 01:36:31.123703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.356 [2024-09-28 01:36:31.123802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:35.356 [2024-09-28 01:36:31.123856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.168 ms 00:23:35.356 [2024-09-28 01:36:31.123877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.356 [2024-09-28 01:36:31.123995] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:23:35.356 [2024-09-28 01:36:31.124365] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:35.356 [2024-09-28 01:36:31.124450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.356 [2024-09-28 01:36:31.124474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:35.356 [2024-09-28 01:36:31.124519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.473 ms 00:23:35.356 [2024-09-28 01:36:31.124542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.356 [2024-09-28 01:36:31.136782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.356 [2024-09-28 01:36:31.136887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:35.356 [2024-09-28 01:36:31.136934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.203 ms 00:23:35.356 [2024-09-28 01:36:31.136961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.356 [2024-09-28 01:36:31.137105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.356 [2024-09-28 01:36:31.137132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:35.356 [2024-09-28 01:36:31.137447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:23:35.356 [2024-09-28 01:36:31.137494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.356 [2024-09-28 01:36:31.137623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.356 [2024-09-28 01:36:31.137692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:35.356 [2024-09-28 01:36:31.137740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:23:35.356 [2024-09-28 01:36:31.137763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.356 [2024-09-28 01:36:31.138387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.356 [2024-09-28 01:36:31.138472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:35.356 [2024-09-28 01:36:31.138521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.540 ms 00:23:35.356 [2024-09-28 01:36:31.138544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.356 [2024-09-28 01:36:31.138573] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:23:35.356 [2024-09-28 01:36:31.138633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.356 [2024-09-28 01:36:31.138653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:23:35.356 [2024-09-28 01:36:31.138672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:23:35.356 [2024-09-28 01:36:31.138714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.356 [2024-09-28 01:36:31.149569] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:35.356 [2024-09-28 01:36:31.149763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.356 [2024-09-28 01:36:31.149826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:35.356 [2024-09-28 01:36:31.149871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.017 ms 00:23:35.356 [2024-09-28 01:36:31.149893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.356 [2024-09-28 01:36:31.151999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.356 [2024-09-28 01:36:31.152024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:35.356 [2024-09-28 01:36:31.152034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.072 ms 00:23:35.356 [2024-09-28 01:36:31.152043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.356 [2024-09-28 01:36:31.152119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.356 [2024-09-28 01:36:31.152133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:35.356 [2024-09-28 01:36:31.152141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:23:35.356 [2024-09-28 01:36:31.152149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.356 [2024-09-28 01:36:31.152170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.356 [2024-09-28 01:36:31.152178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:35.356 [2024-09-28 01:36:31.152186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:35.356 [2024-09-28 01:36:31.152203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.356 [2024-09-28 01:36:31.152229] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:35.356 [2024-09-28 01:36:31.152238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.356 [2024-09-28 01:36:31.152245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:35.356 [2024-09-28 01:36:31.152255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:23:35.356 [2024-09-28 01:36:31.152262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.356 [2024-09-28 01:36:31.175710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.356 [2024-09-28 01:36:31.175739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:35.356 [2024-09-28 01:36:31.175750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.429 ms 00:23:35.356 [2024-09-28 01:36:31.175757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.356 [2024-09-28 01:36:31.175821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.356 [2024-09-28 01:36:31.175835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:35.356 [2024-09-28 01:36:31.175843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.033 ms 00:23:35.356 [2024-09-28 01:36:31.175850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.356 [2024-09-28 01:36:31.176771] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 140.115 ms, result 0 00:24:36.528  Copying: 1024/1024 [MB] (average 16 MBps)[2024-09-28 01:37:32.228674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.528 [2024-09-28 01:37:32.228774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:36.528 [2024-09-28 01:37:32.228791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:36.528 [2024-09-28 01:37:32.228800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.528 [2024-09-28 01:37:32.228838] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:36.528 [2024-09-28 01:37:32.231932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.528 [2024-09-28 01:37:32.231981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:36.528 [2024-09-28 01:37:32.231993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.076 ms 00:24:36.528 [2024-09-28 01:37:32.232002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.528 [2024-09-28 01:37:32.232265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.528 [2024-09-28 01:37:32.232283] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:36.528 [2024-09-28 01:37:32.232293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.234 ms 00:24:36.528 [2024-09-28 01:37:32.232302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.528 [2024-09-28 01:37:32.232333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.528 [2024-09-28 01:37:32.232345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:24:36.528 [2024-09-28 01:37:32.232355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:36.528 [2024-09-28 01:37:32.232363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.528 [2024-09-28 01:37:32.232423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.528 [2024-09-28 01:37:32.232435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:24:36.528 [2024-09-28 01:37:32.232445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:24:36.528 [2024-09-28 01:37:32.232454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.528 [2024-09-28 01:37:32.232468] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:36.528 [2024-09-28 01:37:32.232481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:24:36.528 [2024-09-28 01:37:32.232492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:36.528 [2024-09-28 01:37:32.232502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:36.528 [2024-09-28 01:37:32.232509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:36.528 [2024-09-28 01:37:32.232516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:36.528 [2024-09-28 01:37:32.232524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:36.528 [2024-09-28 01:37:32.232532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:36.528 [2024-09-28 01:37:32.232540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:36.528 [2024-09-28 01:37:32.232547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:36.528 [2024-09-28 01:37:32.232557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:36.528 [2024-09-28 01:37:32.232565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:36.528 [2024-09-28 01:37:32.232573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:36.528 [2024-09-28 01:37:32.232580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:36.528 [2024-09-28 01:37:32.232588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:36.528 [2024-09-28 01:37:32.232596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:36.528 [2024-09-28 01:37:32.232603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:36.528 [2024-09-28 01:37:32.232613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:36.528 [2024-09-28 01:37:32.232623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:36.528 [2024-09-28 01:37:32.232631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:36.528 [2024-09-28 01:37:32.232639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:36.528 [2024-09-28 01:37:32.232647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:36.528 [2024-09-28 01:37:32.232655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:36.528 [2024-09-28 01:37:32.232664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:36.528 [2024-09-28 01:37:32.232672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:36.528 [2024-09-28 01:37:32.232680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:36.528 [2024-09-28 01:37:32.232688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:36.528 [2024-09-28 01:37:32.232696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:36.528 [2024-09-28 01:37:32.232705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:36.528 [2024-09-28 01:37:32.232712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:36.528 [2024-09-28 01:37:32.232720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:36.528 [2024-09-28 01:37:32.232727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:36.528 [2024-09-28 01:37:32.232735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:36.528 [2024-09-28 01:37:32.232742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:36.528 [2024-09-28 01:37:32.232750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:36.528 [2024-09-28 01:37:32.232757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:36.528 [2024-09-28 01:37:32.232765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:36.528 [2024-09-28 01:37:32.232775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:36.528 [2024-09-28 01:37:32.232783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:36.528 [2024-09-28 01:37:32.232790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:36.528 [2024-09-28 01:37:32.232797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:36.528 [2024-09-28 01:37:32.232805] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:36.528 [2024-09-28 01:37:32.232812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:36.528 [2024-09-28 01:37:32.232832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:36.528 [2024-09-28 01:37:32.232839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:36.529 [2024-09-28 01:37:32.232847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:36.529 [2024-09-28 01:37:32.232855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:36.529 [2024-09-28 01:37:32.232862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:36.529 [2024-09-28 01:37:32.232872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:36.529 [2024-09-28 01:37:32.232880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:36.529 [2024-09-28 01:37:32.232888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:36.529 [2024-09-28 01:37:32.232897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:36.529 [2024-09-28 01:37:32.232916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:36.529 [2024-09-28 01:37:32.232926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:36.529 [2024-09-28 01:37:32.232935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:36.529 [2024-09-28 01:37:32.232944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:36.529 [2024-09-28 01:37:32.232953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:36.529 [2024-09-28 01:37:32.232960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:36.529 [2024-09-28 01:37:32.232967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:36.529 [2024-09-28 01:37:32.232975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:36.529 [2024-09-28 01:37:32.232982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:36.529 [2024-09-28 01:37:32.232990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:36.529 [2024-09-28 01:37:32.232999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:36.529 [2024-09-28 01:37:32.233008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:36.529 [2024-09-28 01:37:32.233015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:36.529 [2024-09-28 01:37:32.233023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:36.529 [2024-09-28 01:37:32.233031] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:36.529 [2024-09-28 01:37:32.233040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:36.529 [2024-09-28 01:37:32.233047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:36.529 [2024-09-28 01:37:32.233055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:36.529 [2024-09-28 01:37:32.233063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:36.529 [2024-09-28 01:37:32.233071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:36.529 [2024-09-28 01:37:32.233078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:36.529 [2024-09-28 01:37:32.233088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:36.529 [2024-09-28 01:37:32.233097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:36.529 [2024-09-28 01:37:32.233104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:36.529 [2024-09-28 01:37:32.233112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:36.529 [2024-09-28 01:37:32.233120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:36.529 [2024-09-28 01:37:32.233127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:36.529 [2024-09-28 01:37:32.233135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:36.529 [2024-09-28 01:37:32.233145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:36.529 [2024-09-28 01:37:32.233153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:36.529 [2024-09-28 01:37:32.233162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:36.529 [2024-09-28 01:37:32.233169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:36.529 [2024-09-28 01:37:32.233177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:36.529 [2024-09-28 01:37:32.233184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:36.529 [2024-09-28 01:37:32.233212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:36.529 [2024-09-28 01:37:32.233228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:36.529 [2024-09-28 01:37:32.233237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:36.529 [2024-09-28 01:37:32.233245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:36.529 [2024-09-28 01:37:32.233253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:36.529 [2024-09-28 
01:37:32.233262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:36.529 [2024-09-28 01:37:32.233269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:36.529 [2024-09-28 01:37:32.233277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:36.529 [2024-09-28 01:37:32.233285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:36.529 [2024-09-28 01:37:32.233293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:36.529 [2024-09-28 01:37:32.233303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:36.529 [2024-09-28 01:37:32.233311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:36.529 [2024-09-28 01:37:32.233320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:36.529 [2024-09-28 01:37:32.233327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:36.529 [2024-09-28 01:37:32.233335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:36.529 [2024-09-28 01:37:32.233351] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:36.529 [2024-09-28 01:37:32.233359] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a25e5fd9-192f-4882-b55b-cf92b921114e 00:24:36.529 [2024-09-28 01:37:32.233367] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:24:36.529 [2024-09-28 01:37:32.233375] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:24:36.529 [2024-09-28 01:37:32.233385] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:36.529 [2024-09-28 01:37:32.233394] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:36.529 [2024-09-28 01:37:32.233401] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:36.529 [2024-09-28 01:37:32.233412] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:36.529 [2024-09-28 01:37:32.233419] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:36.529 [2024-09-28 01:37:32.233426] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:36.529 [2024-09-28 01:37:32.233432] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:36.529 [2024-09-28 01:37:32.233440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.529 [2024-09-28 01:37:32.233449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:36.529 [2024-09-28 01:37:32.233457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.973 ms 00:24:36.529 [2024-09-28 01:37:32.233466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.529 [2024-09-28 01:37:32.248411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.529 [2024-09-28 01:37:32.248464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:36.529 [2024-09-28 01:37:32.248478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.927 ms 00:24:36.529 [2024-09-28 01:37:32.248494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:24:36.529 [2024-09-28 01:37:32.248917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.529 [2024-09-28 01:37:32.248950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:36.529 [2024-09-28 01:37:32.248961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.396 ms 00:24:36.529 [2024-09-28 01:37:32.248970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.529 [2024-09-28 01:37:32.280922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:36.529 [2024-09-28 01:37:32.280973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:36.529 [2024-09-28 01:37:32.280986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:36.529 [2024-09-28 01:37:32.280999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.529 [2024-09-28 01:37:32.281076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:36.529 [2024-09-28 01:37:32.281087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:36.529 [2024-09-28 01:37:32.281098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:36.529 [2024-09-28 01:37:32.281108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.529 [2024-09-28 01:37:32.281166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:36.529 [2024-09-28 01:37:32.281178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:36.529 [2024-09-28 01:37:32.281187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:36.529 [2024-09-28 01:37:32.281221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.529 [2024-09-28 01:37:32.281244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:36.529 [2024-09-28 01:37:32.281254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:36.529 [2024-09-28 01:37:32.281264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:36.529 [2024-09-28 01:37:32.281273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.529 [2024-09-28 01:37:32.366371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:36.529 [2024-09-28 01:37:32.366418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:36.529 [2024-09-28 01:37:32.366429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:36.530 [2024-09-28 01:37:32.366442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.530 [2024-09-28 01:37:32.430924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:36.530 [2024-09-28 01:37:32.430962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:36.530 [2024-09-28 01:37:32.430972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:36.530 [2024-09-28 01:37:32.430980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.530 [2024-09-28 01:37:32.431043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:36.530 [2024-09-28 01:37:32.431052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:36.530 [2024-09-28 01:37:32.431060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:36.530 
[2024-09-28 01:37:32.431067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.530 [2024-09-28 01:37:32.431106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:36.530 [2024-09-28 01:37:32.431115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:36.530 [2024-09-28 01:37:32.431123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:36.530 [2024-09-28 01:37:32.431131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.530 [2024-09-28 01:37:32.431280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:36.530 [2024-09-28 01:37:32.431291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:36.530 [2024-09-28 01:37:32.431299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:36.530 [2024-09-28 01:37:32.431307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.530 [2024-09-28 01:37:32.431329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:36.530 [2024-09-28 01:37:32.431341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:36.530 [2024-09-28 01:37:32.431348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:36.530 [2024-09-28 01:37:32.431356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.530 [2024-09-28 01:37:32.431388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:36.530 [2024-09-28 01:37:32.431396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:36.530 [2024-09-28 01:37:32.431404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:36.530 [2024-09-28 01:37:32.431411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.530 [2024-09-28 01:37:32.431447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:36.530 [2024-09-28 01:37:32.431460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:36.530 [2024-09-28 01:37:32.431468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:36.530 [2024-09-28 01:37:32.431475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.530 [2024-09-28 01:37:32.431581] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 202.889 ms, result 0 00:24:37.472 00:24:37.472 00:24:37.472 01:37:33 ftl.ftl_restore_fast -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:24:40.026 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:24:40.026 01:37:35 ftl.ftl_restore_fast -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:24:40.026 [2024-09-28 01:37:35.498083] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
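The md5sum -c step above re-reads test/ftl/testfile and compares it against the digest stored in testfile.md5, and only then does spdk_dd rewrite the file into ftl0 at block offset 131072. A minimal sketch of how such a checksum manifest is produced and consumed (file name and size here are illustrative, not taken from this run):

# Record a digest once, verify any number of times afterwards
dd if=/dev/urandom of=testfile bs=1M count=64   # illustrative test data
md5sum testfile > testfile.md5                  # manifest line: "<digest>  testfile"
md5sum -c testfile.md5                          # prints "testfile: OK" on a match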
00:24:40.026 [2024-09-28 01:37:35.498172] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79758 ] 00:24:40.026 [2024-09-28 01:37:35.642077] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:40.026 [2024-09-28 01:37:35.855325] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:24:40.288 [2024-09-28 01:37:36.145110] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:40.288 [2024-09-28 01:37:36.145210] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:40.551 [2024-09-28 01:37:36.308760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.551 [2024-09-28 01:37:36.308835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:40.551 [2024-09-28 01:37:36.308851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:40.551 [2024-09-28 01:37:36.308864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.551 [2024-09-28 01:37:36.308922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.551 [2024-09-28 01:37:36.308933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:40.551 [2024-09-28 01:37:36.308942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:24:40.551 [2024-09-28 01:37:36.308951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.551 [2024-09-28 01:37:36.308972] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:40.551 [2024-09-28 01:37:36.309756] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:40.551 [2024-09-28 01:37:36.309786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.551 [2024-09-28 01:37:36.309795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:40.551 [2024-09-28 01:37:36.309805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.819 ms 00:24:40.551 [2024-09-28 01:37:36.309813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.551 [2024-09-28 01:37:36.310120] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:24:40.551 [2024-09-28 01:37:36.310149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.551 [2024-09-28 01:37:36.310159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:40.551 [2024-09-28 01:37:36.310170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:24:40.551 [2024-09-28 01:37:36.310178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.551 [2024-09-28 01:37:36.310260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.551 [2024-09-28 01:37:36.310273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:40.551 [2024-09-28 01:37:36.310282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:24:40.551 [2024-09-28 01:37:36.310293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.551 [2024-09-28 01:37:36.310577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
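The repeated bdev_open_ext notices above show the open being retried until nvc0n1 registers, after which every FTL management step is again logged as an Action / name / duration / status quadruple. Those quadruples make per-step timings easy to pull out of a saved console log; a minimal sketch, assuming the output is captured one record per line in build.log (the file name is an assumption):

# Pair each "name:" record with the "duration:" record that follows it,
# then list the slowest FTL management steps first.
awk '
  /trace_step: .*\[FTL\]\[ftl0\] name: /     { sub(/.*name: /, "");     name = $0 }
  /trace_step: .*\[FTL\]\[ftl0\] duration: / { sub(/.*duration: /, ""); sub(/ ms.*/, "")
                                               printf "%10.3f ms  %s\n", $0, name }
' build.log | sort -rn | head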
00:24:40.551 [2024-09-28 01:37:36.310598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:40.551 [2024-09-28 01:37:36.310606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.243 ms 00:24:40.551 [2024-09-28 01:37:36.310616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.551 [2024-09-28 01:37:36.310734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.551 [2024-09-28 01:37:36.310747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:40.551 [2024-09-28 01:37:36.310758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:24:40.551 [2024-09-28 01:37:36.310767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.551 [2024-09-28 01:37:36.310790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.551 [2024-09-28 01:37:36.310800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:40.551 [2024-09-28 01:37:36.310809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:24:40.551 [2024-09-28 01:37:36.310818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.551 [2024-09-28 01:37:36.310839] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:40.551 [2024-09-28 01:37:36.315142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.551 [2024-09-28 01:37:36.315185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:40.551 [2024-09-28 01:37:36.315205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.306 ms 00:24:40.551 [2024-09-28 01:37:36.315213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.551 [2024-09-28 01:37:36.315250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.551 [2024-09-28 01:37:36.315259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:40.551 [2024-09-28 01:37:36.315272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:24:40.551 [2024-09-28 01:37:36.315280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.551 [2024-09-28 01:37:36.315337] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:40.551 [2024-09-28 01:37:36.315361] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:40.551 [2024-09-28 01:37:36.315403] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:40.551 [2024-09-28 01:37:36.315420] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:40.551 [2024-09-28 01:37:36.315525] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:40.551 [2024-09-28 01:37:36.315540] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:40.551 [2024-09-28 01:37:36.315552] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:40.551 [2024-09-28 01:37:36.315563] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:40.551 [2024-09-28 01:37:36.315573] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:40.551 [2024-09-28 01:37:36.315582] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:40.551 [2024-09-28 01:37:36.315590] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:40.551 [2024-09-28 01:37:36.315599] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:40.551 [2024-09-28 01:37:36.315606] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:40.551 [2024-09-28 01:37:36.315615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.551 [2024-09-28 01:37:36.315624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:40.551 [2024-09-28 01:37:36.315634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.281 ms 00:24:40.551 [2024-09-28 01:37:36.315644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.551 [2024-09-28 01:37:36.315730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.551 [2024-09-28 01:37:36.315741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:40.551 [2024-09-28 01:37:36.315748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:24:40.551 [2024-09-28 01:37:36.315755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.551 [2024-09-28 01:37:36.315857] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:40.551 [2024-09-28 01:37:36.315869] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:40.551 [2024-09-28 01:37:36.315878] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:40.551 [2024-09-28 01:37:36.315886] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:40.551 [2024-09-28 01:37:36.315896] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:40.551 [2024-09-28 01:37:36.315903] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:40.551 [2024-09-28 01:37:36.315912] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:40.551 [2024-09-28 01:37:36.315919] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:40.552 [2024-09-28 01:37:36.315927] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:40.552 [2024-09-28 01:37:36.315933] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:40.552 [2024-09-28 01:37:36.315943] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:40.552 [2024-09-28 01:37:36.315950] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:40.552 [2024-09-28 01:37:36.315957] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:40.552 [2024-09-28 01:37:36.315964] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:40.552 [2024-09-28 01:37:36.315971] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:40.552 [2024-09-28 01:37:36.315983] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:40.552 [2024-09-28 01:37:36.315989] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:40.552 [2024-09-28 01:37:36.315996] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:40.552 [2024-09-28 01:37:36.316003] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:40.552 [2024-09-28 01:37:36.316010] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:40.552 [2024-09-28 01:37:36.316018] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:40.552 [2024-09-28 01:37:36.316026] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:40.552 [2024-09-28 01:37:36.316032] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:40.552 [2024-09-28 01:37:36.316038] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:40.552 [2024-09-28 01:37:36.316046] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:40.552 [2024-09-28 01:37:36.316053] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:40.552 [2024-09-28 01:37:36.316060] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:40.552 [2024-09-28 01:37:36.316066] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:40.552 [2024-09-28 01:37:36.316075] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:40.552 [2024-09-28 01:37:36.316083] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:40.552 [2024-09-28 01:37:36.316089] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:40.552 [2024-09-28 01:37:36.316096] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:40.552 [2024-09-28 01:37:36.316102] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:40.552 [2024-09-28 01:37:36.316130] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:40.552 [2024-09-28 01:37:36.316137] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:40.552 [2024-09-28 01:37:36.316144] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:40.552 [2024-09-28 01:37:36.316150] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:40.552 [2024-09-28 01:37:36.316156] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:40.552 [2024-09-28 01:37:36.316163] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:40.552 [2024-09-28 01:37:36.316171] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:40.552 [2024-09-28 01:37:36.316179] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:40.552 [2024-09-28 01:37:36.316186] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:40.552 [2024-09-28 01:37:36.316212] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:40.552 [2024-09-28 01:37:36.316219] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:40.552 [2024-09-28 01:37:36.316228] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:40.552 [2024-09-28 01:37:36.316236] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:40.552 [2024-09-28 01:37:36.316244] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:40.552 [2024-09-28 01:37:36.316252] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:40.552 [2024-09-28 01:37:36.316259] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:40.552 [2024-09-28 01:37:36.316267] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:40.552 
[2024-09-28 01:37:36.316275] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:40.552 [2024-09-28 01:37:36.316282] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:40.552 [2024-09-28 01:37:36.316289] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:40.552 [2024-09-28 01:37:36.316297] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:40.552 [2024-09-28 01:37:36.316307] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:40.552 [2024-09-28 01:37:36.316316] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:40.552 [2024-09-28 01:37:36.316336] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:40.552 [2024-09-28 01:37:36.316345] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:40.552 [2024-09-28 01:37:36.316354] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:40.552 [2024-09-28 01:37:36.316361] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:40.552 [2024-09-28 01:37:36.316368] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:40.552 [2024-09-28 01:37:36.316375] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:40.552 [2024-09-28 01:37:36.316382] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:40.552 [2024-09-28 01:37:36.316390] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:40.552 [2024-09-28 01:37:36.316397] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:40.552 [2024-09-28 01:37:36.316405] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:40.552 [2024-09-28 01:37:36.316412] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:40.552 [2024-09-28 01:37:36.316420] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:40.552 [2024-09-28 01:37:36.316429] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:40.552 [2024-09-28 01:37:36.316436] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:40.552 [2024-09-28 01:37:36.316445] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:40.552 [2024-09-28 01:37:36.316454] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:24:40.552 [2024-09-28 01:37:36.316461] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:40.552 [2024-09-28 01:37:36.316469] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:40.552 [2024-09-28 01:37:36.316479] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:40.552 [2024-09-28 01:37:36.316488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.552 [2024-09-28 01:37:36.316496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:40.552 [2024-09-28 01:37:36.316507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.699 ms 00:24:40.552 [2024-09-28 01:37:36.316514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.552 [2024-09-28 01:37:36.357274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.552 [2024-09-28 01:37:36.357328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:40.552 [2024-09-28 01:37:36.357345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.715 ms 00:24:40.552 [2024-09-28 01:37:36.357353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.552 [2024-09-28 01:37:36.357446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.552 [2024-09-28 01:37:36.357457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:40.552 [2024-09-28 01:37:36.357470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:24:40.552 [2024-09-28 01:37:36.357478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.552 [2024-09-28 01:37:36.393497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.552 [2024-09-28 01:37:36.393547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:40.552 [2024-09-28 01:37:36.393559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.951 ms 00:24:40.552 [2024-09-28 01:37:36.393567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.552 [2024-09-28 01:37:36.393606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.552 [2024-09-28 01:37:36.393615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:40.552 [2024-09-28 01:37:36.393624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:40.552 [2024-09-28 01:37:36.393631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.552 [2024-09-28 01:37:36.393735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.552 [2024-09-28 01:37:36.393753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:40.552 [2024-09-28 01:37:36.393763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:24:40.552 [2024-09-28 01:37:36.393772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.552 [2024-09-28 01:37:36.393899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.552 [2024-09-28 01:37:36.393908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:40.552 [2024-09-28 01:37:36.393919] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.110 ms 00:24:40.552 [2024-09-28 01:37:36.393928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.552 [2024-09-28 01:37:36.408674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.552 [2024-09-28 01:37:36.408722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:40.552 [2024-09-28 01:37:36.408733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.724 ms 00:24:40.552 [2024-09-28 01:37:36.408741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.552 [2024-09-28 01:37:36.408907] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:24:40.552 [2024-09-28 01:37:36.408922] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:40.553 [2024-09-28 01:37:36.408933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.553 [2024-09-28 01:37:36.408941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:40.553 [2024-09-28 01:37:36.408951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:24:40.553 [2024-09-28 01:37:36.408959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.553 [2024-09-28 01:37:36.421360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.553 [2024-09-28 01:37:36.421421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:40.553 [2024-09-28 01:37:36.421434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.379 ms 00:24:40.553 [2024-09-28 01:37:36.421448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.553 [2024-09-28 01:37:36.421580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.553 [2024-09-28 01:37:36.421589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:40.553 [2024-09-28 01:37:36.421599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.108 ms 00:24:40.553 [2024-09-28 01:37:36.421608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.553 [2024-09-28 01:37:36.421662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.553 [2024-09-28 01:37:36.421673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:40.553 [2024-09-28 01:37:36.421682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:24:40.553 [2024-09-28 01:37:36.421690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.553 [2024-09-28 01:37:36.422312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.553 [2024-09-28 01:37:36.422337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:40.553 [2024-09-28 01:37:36.422347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.580 ms 00:24:40.553 [2024-09-28 01:37:36.422355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.553 [2024-09-28 01:37:36.422376] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:24:40.553 [2024-09-28 01:37:36.422387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.553 [2024-09-28 01:37:36.422396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:24:40.553 [2024-09-28 01:37:36.422406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:24:40.553 [2024-09-28 01:37:36.422414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.553 [2024-09-28 01:37:36.435800] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:40.553 [2024-09-28 01:37:36.435974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.553 [2024-09-28 01:37:36.435990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:40.553 [2024-09-28 01:37:36.436001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.540 ms 00:24:40.553 [2024-09-28 01:37:36.436009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.553 [2024-09-28 01:37:36.438170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.553 [2024-09-28 01:37:36.438213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:40.553 [2024-09-28 01:37:36.438223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.135 ms 00:24:40.553 [2024-09-28 01:37:36.438231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.553 [2024-09-28 01:37:36.438327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.553 [2024-09-28 01:37:36.438344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:40.553 [2024-09-28 01:37:36.438355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:24:40.553 [2024-09-28 01:37:36.438364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.553 [2024-09-28 01:37:36.438387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.553 [2024-09-28 01:37:36.438398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:40.553 [2024-09-28 01:37:36.438407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:40.553 [2024-09-28 01:37:36.438415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.553 [2024-09-28 01:37:36.438446] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:40.553 [2024-09-28 01:37:36.438457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.553 [2024-09-28 01:37:36.438465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:40.553 [2024-09-28 01:37:36.438476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:24:40.553 [2024-09-28 01:37:36.438483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.553 [2024-09-28 01:37:36.465019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.553 [2024-09-28 01:37:36.465071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:40.553 [2024-09-28 01:37:36.465083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.514 ms 00:24:40.553 [2024-09-28 01:37:36.465092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.553 [2024-09-28 01:37:36.465189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.553 [2024-09-28 01:37:36.465223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:40.553 [2024-09-28 01:37:36.465234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.043 ms 00:25:26.525 [2024-09-28 01:37:36.465242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.525 [2024-09-28 01:37:36.466431] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 157.150 ms, result 0 00:25:26.525  Copying: 1024/1024 [MB] (average 22 MBps)[2024-09-28 01:38:22.348547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.525 [2024-09-28 01:38:22.348634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:26.525 [2024-09-28 01:38:22.348655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:26.525 [2024-09-28 01:38:22.348666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.525 [2024-09-28 01:38:22.351808] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:26.525 [2024-09-28 01:38:22.356639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.525 [2024-09-28 01:38:22.356691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:26.525 [2024-09-28 01:38:22.356705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.786 ms 00:25:26.525 [2024-09-28 01:38:22.356716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.525 [2024-09-28 01:38:22.367568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.525 [2024-09-28 01:38:22.367618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:26.525 [2024-09-28 01:38:22.367631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.628 ms 00:25:26.525 [2024-09-28 01:38:22.367641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.525 [2024-09-28 01:38:22.367688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.525 [2024-09-28 01:38:22.367699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast
persist NV cache metadata 00:25:26.525 [2024-09-28 01:38:22.367714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:25:26.525 [2024-09-28 01:38:22.367724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.525 [2024-09-28 01:38:22.367793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.525 [2024-09-28 01:38:22.367806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:25:26.525 [2024-09-28 01:38:22.367816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:25:26.525 [2024-09-28 01:38:22.367824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.525 [2024-09-28 01:38:22.367838] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:26.525 [2024-09-28 01:38:22.367852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 127488 / 261120 wr_cnt: 1 state: open 00:25:26.525 [2024-09-28 01:38:22.367862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:26.525 [2024-09-28 01:38:22.367871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:26.525 [2024-09-28 01:38:22.367879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:26.525 [2024-09-28 01:38:22.367888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:26.525 [2024-09-28 01:38:22.367896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:26.525 [2024-09-28 01:38:22.367904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:26.525 [2024-09-28 01:38:22.367912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:26.525 [2024-09-28 01:38:22.367921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:26.525 [2024-09-28 01:38:22.367929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:26.525 [2024-09-28 01:38:22.367937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:26.525 [2024-09-28 01:38:22.367945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:26.525 [2024-09-28 01:38:22.367953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:26.525 [2024-09-28 01:38:22.367962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:26.525 [2024-09-28 01:38:22.367971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:26.525 [2024-09-28 01:38:22.367979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:26.525 [2024-09-28 01:38:22.367988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:26.525 [2024-09-28 01:38:22.367996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:26.525 [2024-09-28 01:38:22.368004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:26.526 
[2024-09-28 01:38:22.368012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:26.526 [2024-09-28 01:38:22.368021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:26.526 [2024-09-28 01:38:22.368029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:26.526 [2024-09-28 01:38:22.368041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:26.526 [2024-09-28 01:38:22.368049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:26.526 [2024-09-28 01:38:22.368058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:26.526 [2024-09-28 01:38:22.368067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:26.526 [2024-09-28 01:38:22.368075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:26.526 [2024-09-28 01:38:22.368083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:26.526 [2024-09-28 01:38:22.368091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:26.526 [2024-09-28 01:38:22.368098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:26.526 [2024-09-28 01:38:22.368106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:26.526 [2024-09-28 01:38:22.368114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:26.526 [2024-09-28 01:38:22.368121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:26.526 [2024-09-28 01:38:22.368129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:26.526 [2024-09-28 01:38:22.368137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:26.526 [2024-09-28 01:38:22.368144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:26.526 [2024-09-28 01:38:22.368154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:26.526 [2024-09-28 01:38:22.368162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:26.526 [2024-09-28 01:38:22.368169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:26.526 [2024-09-28 01:38:22.368177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:26.526 [2024-09-28 01:38:22.368185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:26.526 [2024-09-28 01:38:22.368208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:26.526 [2024-09-28 01:38:22.368217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:26.526 [2024-09-28 01:38:22.368224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 
state: free 00:25:26.526 [2024-09-28 01:38:22.368233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:26.526 [2024-09-28 01:38:22.368243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:26.526 [2024-09-28 01:38:22.368251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:26.526 [2024-09-28 01:38:22.368260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:26.526 [2024-09-28 01:38:22.368268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:26.526 [2024-09-28 01:38:22.368278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:26.526 [2024-09-28 01:38:22.368287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:26.526 [2024-09-28 01:38:22.368304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:26.526 [2024-09-28 01:38:22.368313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:26.526 [2024-09-28 01:38:22.368321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:26.526 [2024-09-28 01:38:22.368333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:26.526 [2024-09-28 01:38:22.368342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:26.526 [2024-09-28 01:38:22.368350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:26.526 [2024-09-28 01:38:22.368358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:26.526 [2024-09-28 01:38:22.368366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:26.526 [2024-09-28 01:38:22.368374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:26.526 [2024-09-28 01:38:22.368382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:26.526 [2024-09-28 01:38:22.368390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:26.526 [2024-09-28 01:38:22.368398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:26.526 [2024-09-28 01:38:22.368406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:26.526 [2024-09-28 01:38:22.368413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:26.526 [2024-09-28 01:38:22.368423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:26.526 [2024-09-28 01:38:22.368431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:26.526 [2024-09-28 01:38:22.368439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:26.526 [2024-09-28 01:38:22.368447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 
0 / 261120 wr_cnt: 0 state: free 00:25:26.526 [2024-09-28 01:38:22.368454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:26.526 [2024-09-28 01:38:22.368462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:26.526 [2024-09-28 01:38:22.368470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:26.526 [2024-09-28 01:38:22.368478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:26.526 [2024-09-28 01:38:22.368486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:26.526 [2024-09-28 01:38:22.368494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:26.526 [2024-09-28 01:38:22.368501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:26.526 [2024-09-28 01:38:22.368510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:26.526 [2024-09-28 01:38:22.368518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:26.526 [2024-09-28 01:38:22.368527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:26.526 [2024-09-28 01:38:22.368535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:26.526 [2024-09-28 01:38:22.368542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:26.526 [2024-09-28 01:38:22.368549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:26.526 [2024-09-28 01:38:22.368556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:26.526 [2024-09-28 01:38:22.368564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:26.526 [2024-09-28 01:38:22.368571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:26.526 [2024-09-28 01:38:22.368579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:26.526 [2024-09-28 01:38:22.368589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:26.526 [2024-09-28 01:38:22.368598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:26.526 [2024-09-28 01:38:22.368605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:26.526 [2024-09-28 01:38:22.368613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:26.526 [2024-09-28 01:38:22.368621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:26.526 [2024-09-28 01:38:22.368629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:26.526 [2024-09-28 01:38:22.368637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:26.526 [2024-09-28 01:38:22.368646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:26.526 [2024-09-28 01:38:22.368654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:26.526 [2024-09-28 01:38:22.368662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:26.526 [2024-09-28 01:38:22.368670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:26.526 [2024-09-28 01:38:22.368677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:26.526 [2024-09-28 01:38:22.368685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:26.526 [2024-09-28 01:38:22.368693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:26.526 [2024-09-28 01:38:22.368709] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:26.526 [2024-09-28 01:38:22.368717] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a25e5fd9-192f-4882-b55b-cf92b921114e 00:25:26.526 [2024-09-28 01:38:22.368730] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 127488 00:25:26.526 [2024-09-28 01:38:22.368738] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 127520 00:25:26.526 [2024-09-28 01:38:22.368745] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 127488 00:25:26.526 [2024-09-28 01:38:22.368753] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0003 00:25:26.526 [2024-09-28 01:38:22.368762] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:26.526 [2024-09-28 01:38:22.368771] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:26.526 [2024-09-28 01:38:22.368780] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:26.526 [2024-09-28 01:38:22.368787] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:26.526 [2024-09-28 01:38:22.368793] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:26.527 [2024-09-28 01:38:22.368800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.527 [2024-09-28 01:38:22.368826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:26.527 [2024-09-28 01:38:22.368836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.964 ms 00:25:26.527 [2024-09-28 01:38:22.368844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.527 [2024-09-28 01:38:22.383616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.527 [2024-09-28 01:38:22.383661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:26.527 [2024-09-28 01:38:22.383674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.754 ms 00:25:26.527 [2024-09-28 01:38:22.383684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.527 [2024-09-28 01:38:22.384125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.527 [2024-09-28 01:38:22.384146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:26.527 [2024-09-28 01:38:22.384156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.416 ms 00:25:26.527 [2024-09-28 01:38:22.384168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
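
The WAF figure in the statistics dump above follows directly from the two write counters: write amplification is total device writes divided by user writes, and the 32 writes beyond the user total are the FTL's own (non-user) traffic. A standalone check in plain C (not SPDK code; counter values copied from this run):

    #include <stdio.h>

    int main(void)
    {
        /* Counters from the ftl_debug.c statistics dump above. */
        const double total_writes = 127520.0; /* "total writes: 127520" */
        const double user_writes  = 127488.0; /* "user writes: 127488" */

        /* WAF = device writes / user writes. */
        printf("WAF: %.4f\n", total_writes / user_writes); /* prints 1.0003 */
        return 0;
    }

Compiled and run, this prints WAF: 1.0003, matching the dumped value.
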
00:25:26.527 [2024-09-28 01:38:22.418455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:26.527 [2024-09-28 01:38:22.418500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:26.527 [2024-09-28 01:38:22.418512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:26.527 [2024-09-28 01:38:22.418521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.527 [2024-09-28 01:38:22.418593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:26.527 [2024-09-28 01:38:22.418604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:26.527 [2024-09-28 01:38:22.418613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:26.527 [2024-09-28 01:38:22.418630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.527 [2024-09-28 01:38:22.418695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:26.527 [2024-09-28 01:38:22.418708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:26.527 [2024-09-28 01:38:22.418718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:26.527 [2024-09-28 01:38:22.418727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.527 [2024-09-28 01:38:22.418746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:26.527 [2024-09-28 01:38:22.418756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:26.527 [2024-09-28 01:38:22.418765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:26.527 [2024-09-28 01:38:22.418774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.789 [2024-09-28 01:38:22.511591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:26.789 [2024-09-28 01:38:22.511653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:26.789 [2024-09-28 01:38:22.511676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:26.789 [2024-09-28 01:38:22.511686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.789 [2024-09-28 01:38:22.587430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:26.789 [2024-09-28 01:38:22.587493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:26.789 [2024-09-28 01:38:22.587507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:26.789 [2024-09-28 01:38:22.587525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.789 [2024-09-28 01:38:22.587634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:26.789 [2024-09-28 01:38:22.587647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:26.789 [2024-09-28 01:38:22.587658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:26.789 [2024-09-28 01:38:22.587666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.789 [2024-09-28 01:38:22.587713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:26.789 [2024-09-28 01:38:22.587723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:26.789 [2024-09-28 01:38:22.587733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:26.789 [2024-09-28 
01:38:22.587741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.789 [2024-09-28 01:38:22.587833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:26.789 [2024-09-28 01:38:22.587848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:26.789 [2024-09-28 01:38:22.587857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:26.789 [2024-09-28 01:38:22.587866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.789 [2024-09-28 01:38:22.587899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:26.789 [2024-09-28 01:38:22.587912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:26.789 [2024-09-28 01:38:22.587921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:26.789 [2024-09-28 01:38:22.587931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.789 [2024-09-28 01:38:22.587986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:26.789 [2024-09-28 01:38:22.587999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:26.789 [2024-09-28 01:38:22.588008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:26.789 [2024-09-28 01:38:22.588016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.789 [2024-09-28 01:38:22.588084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:26.789 [2024-09-28 01:38:22.588096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:26.789 [2024-09-28 01:38:22.588107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:26.789 [2024-09-28 01:38:22.588118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.789 [2024-09-28 01:38:22.588312] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 242.032 ms, result 0 00:25:28.177 00:25:28.177 00:25:28.439 01:38:24 ftl.ftl_restore_fast -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:25:28.439 [2024-09-28 01:38:24.210733] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
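
Every management step traced above, whether an Action during 'FTL startup' or a Rollback during 'FTL fast shutdown', is reported by mngt/ftl_mngt.c as the same four NOTICE lines: step kind, name, duration, status. A minimal sketch of that logging pattern in plain C (illustrative only; the struct and its field names are assumptions, not SPDK's actual trace_step code):

    #include <stdio.h>

    /* Hypothetical step record; field names are assumptions, not SPDK's. */
    struct step {
        const char *kind;      /* "Action" or "Rollback" */
        const char *name;      /* e.g. "Stop core poller" */
        double      duration_ms;
        int         status;    /* 0 on success */
    };

    static void trace_step(const char *dev, const struct step *s)
    {
        printf("[FTL][%s] %s\n", dev, s->kind);
        printf("[FTL][%s] name: %s\n", dev, s->name);
        printf("[FTL][%s] duration: %.3f ms\n", dev, s->duration_ms);
        printf("[FTL][%s] status: %d\n", dev, s->status);
    }

    int main(void)
    {
        /* Values taken from the shutdown trace above. */
        struct step s = { "Action", "Stop core poller", 8.628, 0 };
        trace_step("ftl0", &s);
        return 0;
    }

A rollback step logged with duration 0.000 ms, as in the 'FTL fast shutdown' sequence above, simply means the step had nothing to undo.
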
00:25:28.439 [2024-09-28 01:38:24.210881] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80245 ] 00:25:28.439 [2024-09-28 01:38:24.367997] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:29.014 [2024-09-28 01:38:24.635581] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:25:29.278 [2024-09-28 01:38:24.966824] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:29.278 [2024-09-28 01:38:24.966921] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:29.278 [2024-09-28 01:38:25.132490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.278 [2024-09-28 01:38:25.132557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:29.278 [2024-09-28 01:38:25.132573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:29.278 [2024-09-28 01:38:25.132587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.278 [2024-09-28 01:38:25.132648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.278 [2024-09-28 01:38:25.132659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:29.278 [2024-09-28 01:38:25.132668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:25:29.278 [2024-09-28 01:38:25.132677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.278 [2024-09-28 01:38:25.132698] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:29.278 [2024-09-28 01:38:25.133600] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:29.278 [2024-09-28 01:38:25.133661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.278 [2024-09-28 01:38:25.133670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:29.278 [2024-09-28 01:38:25.133682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.968 ms 00:25:29.278 [2024-09-28 01:38:25.133691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.278 [2024-09-28 01:38:25.134057] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:25:29.278 [2024-09-28 01:38:25.134107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.278 [2024-09-28 01:38:25.134118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:29.278 [2024-09-28 01:38:25.134129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:25:29.278 [2024-09-28 01:38:25.134138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.278 [2024-09-28 01:38:25.134231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.278 [2024-09-28 01:38:25.134251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:29.278 [2024-09-28 01:38:25.134261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:25:29.278 [2024-09-28 01:38:25.134272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.278 [2024-09-28 01:38:25.134566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
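
The layout dump that follows (identical to the one printed by the first startup above) reports each region twice: in MiB from ftl_layout.c and in raw FTL blocks from upgrade/ftl_sb_v5.c. The two sets of figures agree under a 4 KiB FTL block, which the dump implies but never states. A quick cross-check in plain C (not SPDK code; the 4096-byte block size is an inference from the dump):

    #include <stdio.h>

    int main(void)
    {
        const double MiB = 1024.0 * 1024.0;
        const double blk = 4096.0; /* FTL block size inferred from the dump */

        /* l2p: blk_sz:0x5000 in the SB metadata dump vs. 80.00 MiB in the
           human-readable layout, vs. the table itself (20971520 entries
           at "L2P address size: 4" bytes). All three agree at 80 MiB. */
        printf("l2p region: %.2f MiB\n", 0x5000 * blk / MiB); /* 80.00 */
        printf("l2p table:  %.2f MiB\n", 20971520.0 * 4.0 / MiB); /* 80.00 */

        /* p2l0..p2l3: blk_sz:0x800 each vs. 8.00 MiB each, consistent with
           "P2L checkpoint pages: 2048" (2048 pages * 4 KiB). */
        printf("p2l region: %.2f MiB\n", 0x800 * blk / MiB); /* 8.00 */
        return 0;
    }
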
00:25:29.278 [2024-09-28 01:38:25.134591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:29.278 [2024-09-28 01:38:25.134603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.251 ms 00:25:29.278 [2024-09-28 01:38:25.134612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.278 [2024-09-28 01:38:25.134688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.278 [2024-09-28 01:38:25.134702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:29.278 [2024-09-28 01:38:25.134715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:25:29.278 [2024-09-28 01:38:25.134724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.278 [2024-09-28 01:38:25.134751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.278 [2024-09-28 01:38:25.134763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:29.278 [2024-09-28 01:38:25.134774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:25:29.278 [2024-09-28 01:38:25.134783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.278 [2024-09-28 01:38:25.134806] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:29.278 [2024-09-28 01:38:25.139818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.278 [2024-09-28 01:38:25.139861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:29.278 [2024-09-28 01:38:25.139872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.016 ms 00:25:29.278 [2024-09-28 01:38:25.139881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.278 [2024-09-28 01:38:25.139921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.278 [2024-09-28 01:38:25.139930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:29.278 [2024-09-28 01:38:25.139943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:25:29.278 [2024-09-28 01:38:25.139951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.278 [2024-09-28 01:38:25.140010] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:29.278 [2024-09-28 01:38:25.140041] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:29.278 [2024-09-28 01:38:25.140081] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:29.278 [2024-09-28 01:38:25.140099] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:29.278 [2024-09-28 01:38:25.140226] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:29.278 [2024-09-28 01:38:25.140243] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:29.278 [2024-09-28 01:38:25.140255] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:29.278 [2024-09-28 01:38:25.140267] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:29.278 [2024-09-28 01:38:25.140276] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:29.278 [2024-09-28 01:38:25.140286] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:29.278 [2024-09-28 01:38:25.140295] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:29.278 [2024-09-28 01:38:25.140303] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:29.278 [2024-09-28 01:38:25.140311] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:29.279 [2024-09-28 01:38:25.140320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.279 [2024-09-28 01:38:25.140328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:29.279 [2024-09-28 01:38:25.140337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.313 ms 00:25:29.279 [2024-09-28 01:38:25.140349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.279 [2024-09-28 01:38:25.140435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.279 [2024-09-28 01:38:25.140446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:29.279 [2024-09-28 01:38:25.140454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:25:29.279 [2024-09-28 01:38:25.140463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.279 [2024-09-28 01:38:25.140569] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:29.279 [2024-09-28 01:38:25.140581] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:29.279 [2024-09-28 01:38:25.140590] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:29.279 [2024-09-28 01:38:25.140601] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:29.279 [2024-09-28 01:38:25.140614] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:29.279 [2024-09-28 01:38:25.140621] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:29.279 [2024-09-28 01:38:25.140629] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:29.279 [2024-09-28 01:38:25.140636] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:29.279 [2024-09-28 01:38:25.140643] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:29.279 [2024-09-28 01:38:25.140650] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:29.279 [2024-09-28 01:38:25.140657] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:29.279 [2024-09-28 01:38:25.140670] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:29.279 [2024-09-28 01:38:25.140678] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:29.279 [2024-09-28 01:38:25.140684] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:29.279 [2024-09-28 01:38:25.140692] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:29.279 [2024-09-28 01:38:25.140705] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:29.279 [2024-09-28 01:38:25.140712] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:29.279 [2024-09-28 01:38:25.140720] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:29.279 [2024-09-28 01:38:25.140727] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:29.279 [2024-09-28 01:38:25.140737] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:29.279 [2024-09-28 01:38:25.140743] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:29.279 [2024-09-28 01:38:25.140750] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:29.279 [2024-09-28 01:38:25.140757] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:29.279 [2024-09-28 01:38:25.140764] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:29.279 [2024-09-28 01:38:25.140770] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:29.279 [2024-09-28 01:38:25.140777] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:29.279 [2024-09-28 01:38:25.140784] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:29.279 [2024-09-28 01:38:25.140791] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:29.279 [2024-09-28 01:38:25.140799] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:29.279 [2024-09-28 01:38:25.140806] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:29.279 [2024-09-28 01:38:25.140845] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:29.279 [2024-09-28 01:38:25.140852] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:29.279 [2024-09-28 01:38:25.140859] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:29.279 [2024-09-28 01:38:25.140865] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:29.279 [2024-09-28 01:38:25.140873] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:29.279 [2024-09-28 01:38:25.140881] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:29.279 [2024-09-28 01:38:25.140887] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:29.279 [2024-09-28 01:38:25.140894] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:29.279 [2024-09-28 01:38:25.140901] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:29.279 [2024-09-28 01:38:25.140908] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:29.279 [2024-09-28 01:38:25.140915] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:29.279 [2024-09-28 01:38:25.140922] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:29.279 [2024-09-28 01:38:25.140932] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:29.279 [2024-09-28 01:38:25.140942] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:29.279 [2024-09-28 01:38:25.140953] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:29.279 [2024-09-28 01:38:25.140961] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:29.279 [2024-09-28 01:38:25.140970] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:29.279 [2024-09-28 01:38:25.140979] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:29.279 [2024-09-28 01:38:25.140990] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:29.279 [2024-09-28 01:38:25.140997] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:29.279 
[2024-09-28 01:38:25.141007] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:29.279 [2024-09-28 01:38:25.141014] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:29.279 [2024-09-28 01:38:25.141023] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:29.279 [2024-09-28 01:38:25.141033] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:29.279 [2024-09-28 01:38:25.141043] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:29.279 [2024-09-28 01:38:25.141052] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:29.279 [2024-09-28 01:38:25.141060] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:29.279 [2024-09-28 01:38:25.141068] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:29.279 [2024-09-28 01:38:25.141074] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:29.279 [2024-09-28 01:38:25.141081] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:29.279 [2024-09-28 01:38:25.141088] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:29.279 [2024-09-28 01:38:25.141095] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:29.279 [2024-09-28 01:38:25.141102] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:29.279 [2024-09-28 01:38:25.141111] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:29.279 [2024-09-28 01:38:25.141118] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:29.279 [2024-09-28 01:38:25.141125] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:29.279 [2024-09-28 01:38:25.141132] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:29.279 [2024-09-28 01:38:25.141139] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:29.279 [2024-09-28 01:38:25.141146] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:29.279 [2024-09-28 01:38:25.141153] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:29.279 [2024-09-28 01:38:25.141164] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:29.279 [2024-09-28 01:38:25.141173] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:25:29.279 [2024-09-28 01:38:25.141180] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:29.279 [2024-09-28 01:38:25.141187] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:29.279 [2024-09-28 01:38:25.141209] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:29.279 [2024-09-28 01:38:25.141219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.279 [2024-09-28 01:38:25.141228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:29.279 [2024-09-28 01:38:25.141242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.719 ms 00:25:29.279 [2024-09-28 01:38:25.141252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.279 [2024-09-28 01:38:25.181494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.279 [2024-09-28 01:38:25.181549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:29.279 [2024-09-28 01:38:25.181567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.195 ms 00:25:29.279 [2024-09-28 01:38:25.181576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.279 [2024-09-28 01:38:25.181677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.279 [2024-09-28 01:38:25.181688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:29.279 [2024-09-28 01:38:25.181701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:25:29.279 [2024-09-28 01:38:25.181711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.542 [2024-09-28 01:38:25.221580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.542 [2024-09-28 01:38:25.221629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:29.542 [2024-09-28 01:38:25.221641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.801 ms 00:25:29.542 [2024-09-28 01:38:25.221650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.542 [2024-09-28 01:38:25.221690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.542 [2024-09-28 01:38:25.221699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:29.542 [2024-09-28 01:38:25.221709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:29.542 [2024-09-28 01:38:25.221718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.542 [2024-09-28 01:38:25.221829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.542 [2024-09-28 01:38:25.221848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:29.542 [2024-09-28 01:38:25.221858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:25:29.542 [2024-09-28 01:38:25.221867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.542 [2024-09-28 01:38:25.222004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.542 [2024-09-28 01:38:25.222016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:29.542 [2024-09-28 01:38:25.222026] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.121 ms 00:25:29.542 [2024-09-28 01:38:25.222038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.542 [2024-09-28 01:38:25.238944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.542 [2024-09-28 01:38:25.238991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:29.542 [2024-09-28 01:38:25.239004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.884 ms 00:25:29.542 [2024-09-28 01:38:25.239013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.542 [2024-09-28 01:38:25.239183] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:25:29.542 [2024-09-28 01:38:25.239219] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:29.542 [2024-09-28 01:38:25.239232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.542 [2024-09-28 01:38:25.239241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:29.542 [2024-09-28 01:38:25.239251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:25:29.542 [2024-09-28 01:38:25.239259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.543 [2024-09-28 01:38:25.251573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.543 [2024-09-28 01:38:25.251618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:29.543 [2024-09-28 01:38:25.251630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.293 ms 00:25:29.543 [2024-09-28 01:38:25.251644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.543 [2024-09-28 01:38:25.251781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.543 [2024-09-28 01:38:25.251792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:29.543 [2024-09-28 01:38:25.251803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.110 ms 00:25:29.543 [2024-09-28 01:38:25.251814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.543 [2024-09-28 01:38:25.251867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.543 [2024-09-28 01:38:25.251879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:29.543 [2024-09-28 01:38:25.251888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:25:29.543 [2024-09-28 01:38:25.251896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.543 [2024-09-28 01:38:25.252537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.543 [2024-09-28 01:38:25.252572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:29.543 [2024-09-28 01:38:25.252582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.598 ms 00:25:29.543 [2024-09-28 01:38:25.252590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.543 [2024-09-28 01:38:25.252609] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:25:29.543 [2024-09-28 01:38:25.252621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.543 [2024-09-28 01:38:25.252630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:25:29.543 [2024-09-28 01:38:25.252639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:25:29.543 [2024-09-28 01:38:25.252647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.543 [2024-09-28 01:38:25.266791] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:29.543 [2024-09-28 01:38:25.266960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.543 [2024-09-28 01:38:25.266977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:29.543 [2024-09-28 01:38:25.266989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.293 ms 00:25:29.543 [2024-09-28 01:38:25.266999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.543 [2024-09-28 01:38:25.269387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.543 [2024-09-28 01:38:25.269424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:29.543 [2024-09-28 01:38:25.269435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.363 ms 00:25:29.543 [2024-09-28 01:38:25.269443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.543 [2024-09-28 01:38:25.269527] mngt/ftl_mngt_band.c: 414:ftl_mngt_finalize_init_bands: *NOTICE*: [FTL][ftl0] SHM: band open P2L map df_id 0x2400000 00:25:29.543 [2024-09-28 01:38:25.269985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.543 [2024-09-28 01:38:25.270052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:29.543 [2024-09-28 01:38:25.270063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.481 ms 00:25:29.543 [2024-09-28 01:38:25.270070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.543 [2024-09-28 01:38:25.270099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.543 [2024-09-28 01:38:25.270111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:29.543 [2024-09-28 01:38:25.270120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:29.543 [2024-09-28 01:38:25.270129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.543 [2024-09-28 01:38:25.270172] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:29.543 [2024-09-28 01:38:25.270186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.543 [2024-09-28 01:38:25.270216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:29.543 [2024-09-28 01:38:25.270226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:25:29.543 [2024-09-28 01:38:25.270236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.543 [2024-09-28 01:38:25.298022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.543 [2024-09-28 01:38:25.298073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:29.543 [2024-09-28 01:38:25.298087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.766 ms 00:25:29.543 [2024-09-28 01:38:25.298096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.543 [2024-09-28 01:38:25.298218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.543 [2024-09-28 01:38:25.298230] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:29.543 [2024-09-28 01:38:25.298240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:25:29.543 [2024-09-28 01:38:25.298251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.543 [2024-09-28 01:38:25.299676] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 166.633 ms, result 0 00:26:34.014  Copying: 1024/1024 [MB] (average 16 MBps)[2024-09-28 01:39:29.819570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:34.014 [2024-09-28 01:39:29.819667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:34.014 [2024-09-28 01:39:29.819685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:34.014 [2024-09-28 01:39:29.819694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.014 [2024-09-28 01:39:29.819718] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:34.014 [2024-09-28 01:39:29.822933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:34.014 [2024-09-28 01:39:29.822969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:34.014 [2024-09-28 01:39:29.822982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.194 ms 00:26:34.014 [2024-09-28
01:39:29.822991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.014 [2024-09-28 01:39:29.823249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:34.014 [2024-09-28 01:39:29.823261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:34.014 [2024-09-28 01:39:29.823272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.232 ms 00:26:34.014 [2024-09-28 01:39:29.823280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.014 [2024-09-28 01:39:29.823312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:34.014 [2024-09-28 01:39:29.823326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:26:34.014 [2024-09-28 01:39:29.823335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:26:34.014 [2024-09-28 01:39:29.823343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.014 [2024-09-28 01:39:29.823416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:34.014 [2024-09-28 01:39:29.823426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:26:34.014 [2024-09-28 01:39:29.823435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:26:34.014 [2024-09-28 01:39:29.823443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.014 [2024-09-28 01:39:29.823457] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:34.014 [2024-09-28 01:39:29.823471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:26:34.014 [2024-09-28 01:39:29.823481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:26:34.014 [2024-09-28 01:39:29.823489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:34.014 [2024-09-28 01:39:29.823496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:34.014 [2024-09-28 01:39:29.823504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:34.014 [2024-09-28 01:39:29.823511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:34.014 [2024-09-28 01:39:29.823519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:34.014 [2024-09-28 01:39:29.823526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:34.014 [2024-09-28 01:39:29.823534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:34.014 [2024-09-28 01:39:29.823541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:34.014 [2024-09-28 01:39:29.823549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:34.014 [2024-09-28 01:39:29.823556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:34.014 [2024-09-28 01:39:29.823566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:34.014 [2024-09-28 01:39:29.823573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: 
free 00:26:34.014 [2024-09-28 01:39:29.823581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:34.014 [2024-09-28 01:39:29.823589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:34.014 [2024-09-28 01:39:29.823596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:34.014 [2024-09-28 01:39:29.823604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:34.014 [2024-09-28 01:39:29.823611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:34.014 [2024-09-28 01:39:29.823619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:34.014 [2024-09-28 01:39:29.823628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:34.014 [2024-09-28 01:39:29.823636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:34.014 [2024-09-28 01:39:29.823647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:34.014 [2024-09-28 01:39:29.823654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:34.014 [2024-09-28 01:39:29.823662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:34.014 [2024-09-28 01:39:29.823669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:34.014 [2024-09-28 01:39:29.823677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:34.014 [2024-09-28 01:39:29.823685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:34.014 [2024-09-28 01:39:29.823692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:34.014 [2024-09-28 01:39:29.823699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:34.014 [2024-09-28 01:39:29.823706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:34.014 [2024-09-28 01:39:29.823714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:34.014 [2024-09-28 01:39:29.823721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:34.014 [2024-09-28 01:39:29.823728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:34.014 [2024-09-28 01:39:29.823736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:34.014 [2024-09-28 01:39:29.823743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:34.014 [2024-09-28 01:39:29.823751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:34.014 [2024-09-28 01:39:29.823758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:34.014 [2024-09-28 01:39:29.823765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 
261120 wr_cnt: 0 state: free 00:26:34.014 [2024-09-28 01:39:29.823772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:34.014 [2024-09-28 01:39:29.823780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:34.014 [2024-09-28 01:39:29.823787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:34.014 [2024-09-28 01:39:29.823794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:34.014 [2024-09-28 01:39:29.823802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:34.014 [2024-09-28 01:39:29.823810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:34.014 [2024-09-28 01:39:29.823818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:34.014 [2024-09-28 01:39:29.823826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:34.014 [2024-09-28 01:39:29.823834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:34.014 [2024-09-28 01:39:29.823842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:34.014 [2024-09-28 01:39:29.823862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:34.014 [2024-09-28 01:39:29.823871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:34.014 [2024-09-28 01:39:29.823888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:34.014 [2024-09-28 01:39:29.823896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:34.014 [2024-09-28 01:39:29.823903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:34.014 [2024-09-28 01:39:29.823912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:34.014 [2024-09-28 01:39:29.823920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:34.014 [2024-09-28 01:39:29.823927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:34.014 [2024-09-28 01:39:29.823935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:34.014 [2024-09-28 01:39:29.823943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:34.014 [2024-09-28 01:39:29.823951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:34.014 [2024-09-28 01:39:29.823959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:34.014 [2024-09-28 01:39:29.823967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:34.014 [2024-09-28 01:39:29.823975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:34.014 [2024-09-28 01:39:29.823992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:34.014 [2024-09-28 01:39:29.824000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:34.015 [2024-09-28 01:39:29.824007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:34.015 [2024-09-28 01:39:29.824014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:34.015 [2024-09-28 01:39:29.824022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:34.015 [2024-09-28 01:39:29.824029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:34.015 [2024-09-28 01:39:29.824037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:34.015 [2024-09-28 01:39:29.824044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:34.015 [2024-09-28 01:39:29.824052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:34.015 [2024-09-28 01:39:29.824059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:34.015 [2024-09-28 01:39:29.824066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:34.015 [2024-09-28 01:39:29.824073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:34.015 [2024-09-28 01:39:29.824081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:34.015 [2024-09-28 01:39:29.824099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:34.015 [2024-09-28 01:39:29.824107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:34.015 [2024-09-28 01:39:29.824117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:34.015 [2024-09-28 01:39:29.824124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:34.015 [2024-09-28 01:39:29.824132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:34.015 [2024-09-28 01:39:29.824140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:34.015 [2024-09-28 01:39:29.824148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:34.015 [2024-09-28 01:39:29.824156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:34.015 [2024-09-28 01:39:29.824164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:34.015 [2024-09-28 01:39:29.824172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:34.015 [2024-09-28 01:39:29.824180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:34.015 [2024-09-28 01:39:29.824188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:34.015 [2024-09-28 01:39:29.824208] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:34.015 [2024-09-28 01:39:29.824216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:34.015 [2024-09-28 01:39:29.824223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:34.015 [2024-09-28 01:39:29.824231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:34.015 [2024-09-28 01:39:29.824239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:34.015 [2024-09-28 01:39:29.824246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:34.015 [2024-09-28 01:39:29.824254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:34.015 [2024-09-28 01:39:29.824262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:34.015 [2024-09-28 01:39:29.824269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:34.015 [2024-09-28 01:39:29.824277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:34.015 [2024-09-28 01:39:29.824285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:34.015 [2024-09-28 01:39:29.824293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:34.015 [2024-09-28 01:39:29.824308] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:34.015 [2024-09-28 01:39:29.824319] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a25e5fd9-192f-4882-b55b-cf92b921114e 00:26:34.015 [2024-09-28 01:39:29.824327] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:26:34.015 [2024-09-28 01:39:29.824335] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 3616 00:26:34.015 [2024-09-28 01:39:29.824342] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 3584 00:26:34.015 [2024-09-28 01:39:29.824351] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0089 00:26:34.015 [2024-09-28 01:39:29.824359] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:34.015 [2024-09-28 01:39:29.824367] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:34.015 [2024-09-28 01:39:29.824378] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:34.015 [2024-09-28 01:39:29.824385] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:34.015 [2024-09-28 01:39:29.824392] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:34.015 [2024-09-28 01:39:29.824399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:34.015 [2024-09-28 01:39:29.824407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:34.015 [2024-09-28 01:39:29.824414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.943 ms 00:26:34.015 [2024-09-28 01:39:29.824423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.015 [2024-09-28 01:39:29.839991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:34.015 [2024-09-28 01:39:29.840032] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:34.015 [2024-09-28 01:39:29.840044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.549 ms 00:26:34.015 [2024-09-28 01:39:29.840053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.015 [2024-09-28 01:39:29.840953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:34.015 [2024-09-28 01:39:29.840976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:34.015 [2024-09-28 01:39:29.840993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.874 ms 00:26:34.015 [2024-09-28 01:39:29.841002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.015 [2024-09-28 01:39:29.874441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:34.015 [2024-09-28 01:39:29.874486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:34.015 [2024-09-28 01:39:29.874499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:34.015 [2024-09-28 01:39:29.874509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.015 [2024-09-28 01:39:29.874587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:34.015 [2024-09-28 01:39:29.874597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:34.015 [2024-09-28 01:39:29.874611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:34.015 [2024-09-28 01:39:29.874621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.015 [2024-09-28 01:39:29.874684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:34.015 [2024-09-28 01:39:29.874695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:34.015 [2024-09-28 01:39:29.874705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:34.015 [2024-09-28 01:39:29.874714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.015 [2024-09-28 01:39:29.874734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:34.015 [2024-09-28 01:39:29.874743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:34.015 [2024-09-28 01:39:29.874753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:34.015 [2024-09-28 01:39:29.874764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.277 [2024-09-28 01:39:29.960555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:34.277 [2024-09-28 01:39:29.960615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:34.277 [2024-09-28 01:39:29.960629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:34.277 [2024-09-28 01:39:29.960638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.277 [2024-09-28 01:39:30.030617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:34.277 [2024-09-28 01:39:30.030671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:34.277 [2024-09-28 01:39:30.030689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:34.277 [2024-09-28 01:39:30.030698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.277 [2024-09-28 01:39:30.030780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:26:34.277 [2024-09-28 01:39:30.030790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:34.277 [2024-09-28 01:39:30.030799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:34.277 [2024-09-28 01:39:30.030813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.277 [2024-09-28 01:39:30.030853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:34.277 [2024-09-28 01:39:30.030863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:34.277 [2024-09-28 01:39:30.030871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:34.277 [2024-09-28 01:39:30.030880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.277 [2024-09-28 01:39:30.030960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:34.277 [2024-09-28 01:39:30.030971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:34.277 [2024-09-28 01:39:30.030979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:34.278 [2024-09-28 01:39:30.030986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.278 [2024-09-28 01:39:30.031013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:34.278 [2024-09-28 01:39:30.031022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:34.278 [2024-09-28 01:39:30.031030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:34.278 [2024-09-28 01:39:30.031038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.278 [2024-09-28 01:39:30.031080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:34.278 [2024-09-28 01:39:30.031089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:34.278 [2024-09-28 01:39:30.031097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:34.278 [2024-09-28 01:39:30.031104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.278 [2024-09-28 01:39:30.031150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:34.278 [2024-09-28 01:39:30.031159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:34.278 [2024-09-28 01:39:30.031168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:34.278 [2024-09-28 01:39:30.031175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.278 [2024-09-28 01:39:30.031339] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 211.738 ms, result 0 00:26:35.221 00:26:35.221 00:26:35.221 01:39:30 ftl.ftl_restore_fast -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:26:37.771 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:26:37.771 01:39:33 ftl.ftl_restore_fast -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:26:37.771 01:39:33 ftl.ftl_restore_fast -- ftl/restore.sh@85 -- # restore_kill 00:26:37.771 01:39:33 ftl.ftl_restore_fast -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:26:37.771 01:39:33 ftl.ftl_restore_fast -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:26:37.771 01:39:33 ftl.ftl_restore_fast -- ftl/restore.sh@30 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:37.771 Process with pid 78455 is not found 00:26:37.771 Remove shared memory files 00:26:37.771 01:39:33 ftl.ftl_restore_fast -- ftl/restore.sh@32 -- # killprocess 78455 00:26:37.771 01:39:33 ftl.ftl_restore_fast -- common/autotest_common.sh@950 -- # '[' -z 78455 ']' 00:26:37.771 01:39:33 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # kill -0 78455 00:26:37.771 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (78455) - No such process 00:26:37.771 01:39:33 ftl.ftl_restore_fast -- common/autotest_common.sh@977 -- # echo 'Process with pid 78455 is not found' 00:26:37.771 01:39:33 ftl.ftl_restore_fast -- ftl/restore.sh@33 -- # remove_shm 00:26:37.771 01:39:33 ftl.ftl_restore_fast -- ftl/common.sh@204 -- # echo Remove shared memory files 00:26:37.771 01:39:33 ftl.ftl_restore_fast -- ftl/common.sh@205 -- # rm -f rm -f 00:26:37.771 01:39:33 ftl.ftl_restore_fast -- ftl/common.sh@206 -- # rm -f rm -f /dev/hugepages/ftl_a25e5fd9-192f-4882-b55b-cf92b921114e_band_md /dev/hugepages/ftl_a25e5fd9-192f-4882-b55b-cf92b921114e_l2p_l1 /dev/hugepages/ftl_a25e5fd9-192f-4882-b55b-cf92b921114e_l2p_l2 /dev/hugepages/ftl_a25e5fd9-192f-4882-b55b-cf92b921114e_l2p_l2_ctx /dev/hugepages/ftl_a25e5fd9-192f-4882-b55b-cf92b921114e_nvc_md /dev/hugepages/ftl_a25e5fd9-192f-4882-b55b-cf92b921114e_p2l_pool /dev/hugepages/ftl_a25e5fd9-192f-4882-b55b-cf92b921114e_sb /dev/hugepages/ftl_a25e5fd9-192f-4882-b55b-cf92b921114e_sb_shm /dev/hugepages/ftl_a25e5fd9-192f-4882-b55b-cf92b921114e_trim_bitmap /dev/hugepages/ftl_a25e5fd9-192f-4882-b55b-cf92b921114e_trim_log /dev/hugepages/ftl_a25e5fd9-192f-4882-b55b-cf92b921114e_trim_md /dev/hugepages/ftl_a25e5fd9-192f-4882-b55b-cf92b921114e_vmap 00:26:37.771 01:39:33 ftl.ftl_restore_fast -- ftl/common.sh@207 -- # rm -f rm -f 00:26:37.772 01:39:33 ftl.ftl_restore_fast -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:26:37.772 01:39:33 ftl.ftl_restore_fast -- ftl/common.sh@209 -- # rm -f rm -f 00:26:37.772 00:26:37.772 real 4m3.344s 00:26:37.772 user 3m51.661s 00:26:37.772 sys 0m11.538s 00:26:37.772 01:39:33 ftl.ftl_restore_fast -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:37.772 ************************************ 00:26:37.772 END TEST ftl_restore_fast 00:26:37.772 ************************************ 00:26:37.772 01:39:33 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:26:37.772 01:39:33 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:26:37.772 01:39:33 ftl -- ftl/ftl.sh@14 -- # killprocess 72595 00:26:37.772 01:39:33 ftl -- common/autotest_common.sh@950 -- # '[' -z 72595 ']' 00:26:37.772 01:39:33 ftl -- common/autotest_common.sh@954 -- # kill -0 72595 00:26:37.772 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (72595) - No such process 00:26:37.772 Process with pid 72595 is not found 00:26:37.772 01:39:33 ftl -- common/autotest_common.sh@977 -- # echo 'Process with pid 72595 is not found' 00:26:37.772 01:39:33 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:26:37.772 01:39:33 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=80974 00:26:37.772 01:39:33 ftl -- ftl/ftl.sh@20 -- # waitforlisten 80974 00:26:37.772 01:39:33 ftl -- common/autotest_common.sh@831 -- # '[' -z 80974 ']' 00:26:37.772 01:39:33 ftl -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:37.772 01:39:33 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:37.772 01:39:33 ftl -- common/autotest_common.sh@836 -- # local 
max_retries=100 00:26:37.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:37.772 01:39:33 ftl -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:37.772 01:39:33 ftl -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:37.772 01:39:33 ftl -- common/autotest_common.sh@10 -- # set +x 00:26:37.772 [2024-09-28 01:39:33.390644] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:26:37.772 [2024-09-28 01:39:33.390771] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80974 ] 00:26:37.772 [2024-09-28 01:39:33.542108] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:38.033 [2024-09-28 01:39:33.770589] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:26:38.605 01:39:34 ftl -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:38.605 01:39:34 ftl -- common/autotest_common.sh@864 -- # return 0 00:26:38.605 01:39:34 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:26:38.865 nvme0n1 00:26:38.865 01:39:34 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:26:38.865 01:39:34 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:26:38.865 01:39:34 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:26:39.126 01:39:34 ftl -- ftl/common.sh@28 -- # stores=cfbf7c05-ab70-41fc-9b4c-8633e89a3b56 00:26:39.126 01:39:34 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:26:39.126 01:39:34 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u cfbf7c05-ab70-41fc-9b4c-8633e89a3b56 00:26:39.388 01:39:35 ftl -- ftl/ftl.sh@23 -- # killprocess 80974 00:26:39.388 01:39:35 ftl -- common/autotest_common.sh@950 -- # '[' -z 80974 ']' 00:26:39.388 01:39:35 ftl -- common/autotest_common.sh@954 -- # kill -0 80974 00:26:39.388 01:39:35 ftl -- common/autotest_common.sh@955 -- # uname 00:26:39.388 01:39:35 ftl -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:39.388 01:39:35 ftl -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80974 00:26:39.388 01:39:35 ftl -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:39.388 killing process with pid 80974 00:26:39.388 01:39:35 ftl -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:39.388 01:39:35 ftl -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80974' 00:26:39.388 01:39:35 ftl -- common/autotest_common.sh@969 -- # kill 80974 00:26:39.388 01:39:35 ftl -- common/autotest_common.sh@974 -- # wait 80974 00:26:40.767 01:39:36 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:26:41.027 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:41.027 Waiting for block devices as requested 00:26:41.027 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:26:41.286 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:26:41.286 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:26:41.286 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:26:46.570 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:26:46.570 
Remove shared memory files 00:26:46.570 01:39:42 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:26:46.570 01:39:42 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:26:46.570 01:39:42 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:26:46.570 01:39:42 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:26:46.570 01:39:42 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:26:46.570 01:39:42 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:26:46.570 01:39:42 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:26:46.570 ************************************ 00:26:46.570 END TEST ftl 00:26:46.570 ************************************ 00:26:46.570 00:26:46.570 real 12m12.749s 00:26:46.570 user 14m8.402s 00:26:46.570 sys 1m13.319s 00:26:46.570 01:39:42 ftl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:46.570 01:39:42 ftl -- common/autotest_common.sh@10 -- # set +x 00:26:46.570 01:39:42 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:26:46.570 01:39:42 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:26:46.570 01:39:42 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:26:46.570 01:39:42 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:26:46.570 01:39:42 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:26:46.570 01:39:42 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:26:46.570 01:39:42 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:26:46.570 01:39:42 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:26:46.570 01:39:42 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:26:46.570 01:39:42 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:26:46.570 01:39:42 -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:46.570 01:39:42 -- common/autotest_common.sh@10 -- # set +x 00:26:46.570 01:39:42 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:26:46.570 01:39:42 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:26:46.570 01:39:42 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:26:46.570 01:39:42 -- common/autotest_common.sh@10 -- # set +x 00:26:47.956 INFO: APP EXITING 00:26:47.956 INFO: killing all VMs 00:26:47.956 INFO: killing vhost app 00:26:47.956 INFO: EXIT DONE 00:26:48.218 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:48.791 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:26:48.791 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:26:48.791 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:26:48.791 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:26:49.051 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:49.625 Cleaning 00:26:49.625 Removing: /var/run/dpdk/spdk0/config 00:26:49.625 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:26:49.625 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:26:49.625 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:26:49.625 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:26:49.625 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:26:49.625 Removing: /var/run/dpdk/spdk0/hugepage_info 00:26:49.625 Removing: /var/run/dpdk/spdk0 00:26:49.625 Removing: /var/run/dpdk/spdk_pid57271 00:26:49.625 Removing: /var/run/dpdk/spdk_pid57462 00:26:49.625 Removing: /var/run/dpdk/spdk_pid57674 00:26:49.625 Removing: /var/run/dpdk/spdk_pid57767 00:26:49.625 Removing: /var/run/dpdk/spdk_pid57807 00:26:49.625 Removing: /var/run/dpdk/spdk_pid57929 00:26:49.625 Removing: /var/run/dpdk/spdk_pid57947 00:26:49.625 Removing: 
/var/run/dpdk/spdk_pid58141 00:26:49.625 Removing: /var/run/dpdk/spdk_pid58234 00:26:49.625 Removing: /var/run/dpdk/spdk_pid58324 00:26:49.625 Removing: /var/run/dpdk/spdk_pid58430 00:26:49.625 Removing: /var/run/dpdk/spdk_pid58516 00:26:49.625 Removing: /var/run/dpdk/spdk_pid58561 00:26:49.625 Removing: /var/run/dpdk/spdk_pid58592 00:26:49.625 Removing: /var/run/dpdk/spdk_pid58668 00:26:49.625 Removing: /var/run/dpdk/spdk_pid58779 00:26:49.625 Removing: /var/run/dpdk/spdk_pid59211 00:26:49.625 Removing: /var/run/dpdk/spdk_pid59264 00:26:49.625 Removing: /var/run/dpdk/spdk_pid59327 00:26:49.625 Removing: /var/run/dpdk/spdk_pid59342 00:26:49.625 Removing: /var/run/dpdk/spdk_pid59434 00:26:49.625 Removing: /var/run/dpdk/spdk_pid59450 00:26:49.625 Removing: /var/run/dpdk/spdk_pid59551 00:26:49.625 Removing: /var/run/dpdk/spdk_pid59563 00:26:49.625 Removing: /var/run/dpdk/spdk_pid59621 00:26:49.625 Removing: /var/run/dpdk/spdk_pid59634 00:26:49.625 Removing: /var/run/dpdk/spdk_pid59687 00:26:49.626 Removing: /var/run/dpdk/spdk_pid59705 00:26:49.626 Removing: /var/run/dpdk/spdk_pid59859 00:26:49.626 Removing: /var/run/dpdk/spdk_pid59896 00:26:49.626 Removing: /var/run/dpdk/spdk_pid59985 00:26:49.626 Removing: /var/run/dpdk/spdk_pid60157 00:26:49.626 Removing: /var/run/dpdk/spdk_pid60241 00:26:49.626 Removing: /var/run/dpdk/spdk_pid60283 00:26:49.626 Removing: /var/run/dpdk/spdk_pid60715 00:26:49.626 Removing: /var/run/dpdk/spdk_pid60813 00:26:49.626 Removing: /var/run/dpdk/spdk_pid60922 00:26:49.626 Removing: /var/run/dpdk/spdk_pid60977 00:26:49.626 Removing: /var/run/dpdk/spdk_pid61008 00:26:49.626 Removing: /var/run/dpdk/spdk_pid61092 00:26:49.626 Removing: /var/run/dpdk/spdk_pid61711 00:26:49.626 Removing: /var/run/dpdk/spdk_pid61748 00:26:49.626 Removing: /var/run/dpdk/spdk_pid62214 00:26:49.626 Removing: /var/run/dpdk/spdk_pid62312 00:26:49.626 Removing: /var/run/dpdk/spdk_pid62429 00:26:49.626 Removing: /var/run/dpdk/spdk_pid62482 00:26:49.626 Removing: /var/run/dpdk/spdk_pid62513 00:26:49.626 Removing: /var/run/dpdk/spdk_pid62544 00:26:49.626 Removing: /var/run/dpdk/spdk_pid64379 00:26:49.626 Removing: /var/run/dpdk/spdk_pid64511 00:26:49.626 Removing: /var/run/dpdk/spdk_pid64520 00:26:49.626 Removing: /var/run/dpdk/spdk_pid64532 00:26:49.626 Removing: /var/run/dpdk/spdk_pid64573 00:26:49.626 Removing: /var/run/dpdk/spdk_pid64577 00:26:49.626 Removing: /var/run/dpdk/spdk_pid64589 00:26:49.626 Removing: /var/run/dpdk/spdk_pid64634 00:26:49.626 Removing: /var/run/dpdk/spdk_pid64638 00:26:49.626 Removing: /var/run/dpdk/spdk_pid64650 00:26:49.626 Removing: /var/run/dpdk/spdk_pid64695 00:26:49.626 Removing: /var/run/dpdk/spdk_pid64699 00:26:49.626 Removing: /var/run/dpdk/spdk_pid64711 00:26:49.626 Removing: /var/run/dpdk/spdk_pid66070 00:26:49.626 Removing: /var/run/dpdk/spdk_pid66167 00:26:49.626 Removing: /var/run/dpdk/spdk_pid67568 00:26:49.626 Removing: /var/run/dpdk/spdk_pid68951 00:26:49.626 Removing: /var/run/dpdk/spdk_pid69035 00:26:49.626 Removing: /var/run/dpdk/spdk_pid69112 00:26:49.626 Removing: /var/run/dpdk/spdk_pid69187 00:26:49.626 Removing: /var/run/dpdk/spdk_pid69293 00:26:49.626 Removing: /var/run/dpdk/spdk_pid69367 00:26:49.626 Removing: /var/run/dpdk/spdk_pid69509 00:26:49.626 Removing: /var/run/dpdk/spdk_pid69863 00:26:49.626 Removing: /var/run/dpdk/spdk_pid69899 00:26:49.626 Removing: /var/run/dpdk/spdk_pid70346 00:26:49.626 Removing: /var/run/dpdk/spdk_pid70526 00:26:49.626 Removing: /var/run/dpdk/spdk_pid70631 00:26:49.626 Removing: /var/run/dpdk/spdk_pid70741 
00:26:49.626 Removing: /var/run/dpdk/spdk_pid70794 00:26:49.626 Removing: /var/run/dpdk/spdk_pid70824 00:26:49.626 Removing: /var/run/dpdk/spdk_pid71116 00:26:49.626 Removing: /var/run/dpdk/spdk_pid71170 00:26:49.626 Removing: /var/run/dpdk/spdk_pid71248 00:26:49.626 Removing: /var/run/dpdk/spdk_pid71644 00:26:49.626 Removing: /var/run/dpdk/spdk_pid71789 00:26:49.626 Removing: /var/run/dpdk/spdk_pid72595 00:26:49.626 Removing: /var/run/dpdk/spdk_pid72732 00:26:49.626 Removing: /var/run/dpdk/spdk_pid72897 00:26:49.626 Removing: /var/run/dpdk/spdk_pid73005 00:26:49.626 Removing: /var/run/dpdk/spdk_pid73302 00:26:49.626 Removing: /var/run/dpdk/spdk_pid73561 00:26:49.626 Removing: /var/run/dpdk/spdk_pid73914 00:26:49.626 Removing: /var/run/dpdk/spdk_pid74097 00:26:49.626 Removing: /var/run/dpdk/spdk_pid74208 00:26:49.626 Removing: /var/run/dpdk/spdk_pid74255 00:26:49.626 Removing: /var/run/dpdk/spdk_pid74344 00:26:49.626 Removing: /var/run/dpdk/spdk_pid74379 00:26:49.626 Removing: /var/run/dpdk/spdk_pid74426 00:26:49.626 Removing: /var/run/dpdk/spdk_pid74585 00:26:49.626 Removing: /var/run/dpdk/spdk_pid74799 00:26:49.626 Removing: /var/run/dpdk/spdk_pid75050 00:26:49.626 Removing: /var/run/dpdk/spdk_pid75325 00:26:49.626 Removing: /var/run/dpdk/spdk_pid75576 00:26:49.626 Removing: /var/run/dpdk/spdk_pid75919 00:26:49.626 Removing: /var/run/dpdk/spdk_pid76040 00:26:49.626 Removing: /var/run/dpdk/spdk_pid76128 00:26:49.626 Removing: /var/run/dpdk/spdk_pid76496 00:26:49.626 Removing: /var/run/dpdk/spdk_pid76555 00:26:49.626 Removing: /var/run/dpdk/spdk_pid76875 00:26:49.888 Removing: /var/run/dpdk/spdk_pid77151 00:26:49.888 Removing: /var/run/dpdk/spdk_pid77502 00:26:49.888 Removing: /var/run/dpdk/spdk_pid77613 00:26:49.888 Removing: /var/run/dpdk/spdk_pid77656 00:26:49.888 Removing: /var/run/dpdk/spdk_pid77709 00:26:49.888 Removing: /var/run/dpdk/spdk_pid77761 00:26:49.888 Removing: /var/run/dpdk/spdk_pid77819 00:26:49.888 Removing: /var/run/dpdk/spdk_pid78032 00:26:49.888 Removing: /var/run/dpdk/spdk_pid78093 00:26:49.888 Removing: /var/run/dpdk/spdk_pid78149 00:26:49.888 Removing: /var/run/dpdk/spdk_pid78209 00:26:49.888 Removing: /var/run/dpdk/spdk_pid78245 00:26:49.888 Removing: /var/run/dpdk/spdk_pid78305 00:26:49.888 Removing: /var/run/dpdk/spdk_pid78455 00:26:49.888 Removing: /var/run/dpdk/spdk_pid78661 00:26:49.888 Removing: /var/run/dpdk/spdk_pid79091 00:26:49.888 Removing: /var/run/dpdk/spdk_pid79758 00:26:49.888 Removing: /var/run/dpdk/spdk_pid80245 00:26:49.888 Removing: /var/run/dpdk/spdk_pid80974 00:26:49.888 Clean 00:26:49.888 01:39:45 -- common/autotest_common.sh@1451 -- # return 0 00:26:49.888 01:39:45 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:26:49.888 01:39:45 -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:49.888 01:39:45 -- common/autotest_common.sh@10 -- # set +x 00:26:49.888 01:39:45 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:26:49.888 01:39:45 -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:49.888 01:39:45 -- common/autotest_common.sh@10 -- # set +x 00:26:49.888 01:39:45 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:26:49.888 01:39:45 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:26:49.888 01:39:45 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:26:49.888 01:39:45 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:26:49.888 01:39:45 -- spdk/autotest.sh@394 -- # hostname 00:26:49.888 01:39:45 -- 
spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:26:50.148 geninfo: WARNING: invalid characters removed from testname! 00:27:16.735 01:40:10 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:18.654 01:40:14 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:21.206 01:40:17 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:24.513 01:40:19 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:27.145 01:40:22 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:29.693 01:40:25 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:32.238 01:40:28 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:27:32.500 01:40:28 -- common/autotest_common.sh@1680 -- $ [[ y == y ]] 00:27:32.500 01:40:28 -- common/autotest_common.sh@1681 -- $ lcov --version 00:27:32.500 01:40:28 -- common/autotest_common.sh@1681 -- $ awk '{print $NF}' 00:27:32.500 01:40:28 -- common/autotest_common.sh@1681 -- $ lt 1.15 2 00:27:32.500 01:40:28 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2 00:27:32.500 01:40:28 -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:27:32.500 01:40:28 -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:27:32.500 01:40:28 -- scripts/common.sh@336 -- $ IFS=.-: 
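The lcov invocations traced above follow a standard capture/merge/prune flow: counters accumulated during the test run are captured into cov_test.info, merged with the pre-test baseline cov_base.info, and then vendored and system sources (dpdk, /usr, example apps) are stripped so the final cov_total.info covers only project code. A minimal standalone sketch of that flow, with hypothetical REPO and OUT paths — only the --rc coverage switches and filter patterns are taken from the log; this is an illustration, not the autotest script itself:

    #!/usr/bin/env bash
    # Sketch of the coverage post-processing traced in the log above.
    REPO=/path/to/spdk    # assumption: tree built with --enable-coverage
    OUT=/path/to/output   # assumption: artifact directory
    RC=(--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1)

    # Capture the counters accumulated during the test run, tagged with the hostname.
    lcov "${RC[@]}" -q -c --no-external -d "$REPO" -t "$(hostname)" \
         -o "$OUT/cov_test.info"

    # Merge the pre-test baseline with the test capture.
    lcov "${RC[@]}" -q -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" \
         -o "$OUT/cov_total.info"

    # Prune vendored and system code; filtering in place keeps one tracefile.
    for pattern in '*/dpdk/*' '/usr/*' '*/examples/vmd/*'; do
        lcov "${RC[@]}" -q -r "$OUT/cov_total.info" "$pattern" \
             -o "$OUT/cov_total.info"
    done

Filtering cov_total.info in place with -r, as the log does, avoids juggling intermediate tracefiles; the geninfo "invalid characters removed from testname" warning above suggests the raw -t value contained characters lcov sanitizes.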
00:27:32.500 01:40:28 -- scripts/common.sh@336 -- $ read -ra ver1 00:27:32.500 01:40:28 -- scripts/common.sh@337 -- $ IFS=.-: 00:27:32.500 01:40:28 -- scripts/common.sh@337 -- $ read -ra ver2 00:27:32.500 01:40:28 -- scripts/common.sh@338 -- $ local 'op=<' 00:27:32.500 01:40:28 -- scripts/common.sh@340 -- $ ver1_l=2 00:27:32.500 01:40:28 -- scripts/common.sh@341 -- $ ver2_l=1 00:27:32.500 01:40:28 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:27:32.500 01:40:28 -- scripts/common.sh@344 -- $ case "$op" in 00:27:32.500 01:40:28 -- scripts/common.sh@345 -- $ : 1 00:27:32.500 01:40:28 -- scripts/common.sh@364 -- $ (( v = 0 )) 00:27:32.500 01:40:28 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:32.500 01:40:28 -- scripts/common.sh@365 -- $ decimal 1 00:27:32.500 01:40:28 -- scripts/common.sh@353 -- $ local d=1 00:27:32.500 01:40:28 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:27:32.500 01:40:28 -- scripts/common.sh@355 -- $ echo 1 00:27:32.500 01:40:28 -- scripts/common.sh@365 -- $ ver1[v]=1 00:27:32.500 01:40:28 -- scripts/common.sh@366 -- $ decimal 2 00:27:32.500 01:40:28 -- scripts/common.sh@353 -- $ local d=2 00:27:32.500 01:40:28 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:27:32.500 01:40:28 -- scripts/common.sh@355 -- $ echo 2 00:27:32.500 01:40:28 -- scripts/common.sh@366 -- $ ver2[v]=2 00:27:32.500 01:40:28 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:27:32.500 01:40:28 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:27:32.500 01:40:28 -- scripts/common.sh@368 -- $ return 0 00:27:32.500 01:40:28 -- common/autotest_common.sh@1682 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:32.500 01:40:28 -- common/autotest_common.sh@1694 -- $ export 'LCOV_OPTS= 00:27:32.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:32.500 --rc genhtml_branch_coverage=1 00:27:32.500 --rc genhtml_function_coverage=1 00:27:32.500 --rc genhtml_legend=1 00:27:32.500 --rc geninfo_all_blocks=1 00:27:32.500 --rc geninfo_unexecuted_blocks=1 00:27:32.500 00:27:32.500 ' 00:27:32.500 01:40:28 -- common/autotest_common.sh@1694 -- $ LCOV_OPTS=' 00:27:32.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:32.500 --rc genhtml_branch_coverage=1 00:27:32.500 --rc genhtml_function_coverage=1 00:27:32.500 --rc genhtml_legend=1 00:27:32.500 --rc geninfo_all_blocks=1 00:27:32.500 --rc geninfo_unexecuted_blocks=1 00:27:32.500 00:27:32.500 ' 00:27:32.501 01:40:28 -- common/autotest_common.sh@1695 -- $ export 'LCOV=lcov 00:27:32.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:32.501 --rc genhtml_branch_coverage=1 00:27:32.501 --rc genhtml_function_coverage=1 00:27:32.501 --rc genhtml_legend=1 00:27:32.501 --rc geninfo_all_blocks=1 00:27:32.501 --rc geninfo_unexecuted_blocks=1 00:27:32.501 00:27:32.501 ' 00:27:32.501 01:40:28 -- common/autotest_common.sh@1695 -- $ LCOV='lcov 00:27:32.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:32.501 --rc genhtml_branch_coverage=1 00:27:32.501 --rc genhtml_function_coverage=1 00:27:32.501 --rc genhtml_legend=1 00:27:32.501 --rc geninfo_all_blocks=1 00:27:32.501 --rc geninfo_unexecuted_blocks=1 00:27:32.501 00:27:32.501 ' 00:27:32.501 01:40:28 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:32.501 01:40:28 -- scripts/common.sh@15 -- $ shopt -s extglob 00:27:32.501 01:40:28 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:27:32.501 01:40:28 -- 
scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:32.501 01:40:28 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:32.501 01:40:28 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:32.501 01:40:28 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:32.501 01:40:28 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:32.501 01:40:28 -- paths/export.sh@5 -- $ export PATH 00:27:32.501 01:40:28 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:32.501 01:40:28 -- common/autobuild_common.sh@478 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:27:32.501 01:40:28 -- common/autobuild_common.sh@479 -- $ date +%s 00:27:32.501 01:40:28 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1727487628.XXXXXX 00:27:32.501 01:40:28 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1727487628.xxbLfw 00:27:32.501 01:40:28 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]] 00:27:32.501 01:40:28 -- common/autobuild_common.sh@485 -- $ '[' -n '' ']' 00:27:32.501 01:40:28 -- common/autobuild_common.sh@488 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:27:32.501 01:40:28 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:27:32.501 01:40:28 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:27:32.501 01:40:28 -- common/autobuild_common.sh@495 -- $ get_config_params 00:27:32.501 01:40:28 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:27:32.501 01:40:28 -- common/autotest_common.sh@10 -- $ set +x 00:27:32.501 01:40:28 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:27:32.501 01:40:28 -- common/autobuild_common.sh@497 -- $ start_monitor_resources 00:27:32.501 
01:40:28 -- pm/common@17 -- $ local monitor 00:27:32.501 01:40:28 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:27:32.501 01:40:28 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:27:32.501 01:40:28 -- pm/common@25 -- $ sleep 1 00:27:32.501 01:40:28 -- pm/common@21 -- $ date +%s 00:27:32.501 01:40:28 -- pm/common@21 -- $ date +%s 00:27:32.501 01:40:28 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1727487628 00:27:32.501 01:40:28 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1727487628 00:27:32.501 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1727487628_collect-cpu-load.pm.log 00:27:32.501 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1727487628_collect-vmstat.pm.log 00:27:33.456 01:40:29 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT 00:27:33.456 01:40:29 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]] 00:27:33.456 01:40:29 -- spdk/autopackage.sh@14 -- $ timing_finish 00:27:33.456 01:40:29 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:27:33.456 01:40:29 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:27:33.456 01:40:29 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:27:33.718 01:40:29 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:27:33.718 01:40:29 -- pm/common@29 -- $ signal_monitor_resources TERM 00:27:33.718 01:40:29 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:27:33.718 01:40:29 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:27:33.718 01:40:29 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:27:33.718 01:40:29 -- pm/common@44 -- $ pid=82675 00:27:33.718 01:40:29 -- pm/common@50 -- $ kill -TERM 82675 00:27:33.718 01:40:29 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:27:33.718 01:40:29 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:27:33.718 01:40:29 -- pm/common@44 -- $ pid=82676 00:27:33.718 01:40:29 -- pm/common@50 -- $ kill -TERM 82676 00:27:33.718 + [[ -n 5030 ]] 00:27:33.718 + sudo kill 5030 00:27:33.731 [Pipeline] } 00:27:33.742 [Pipeline] // timeout 00:27:33.746 [Pipeline] } 00:27:33.758 [Pipeline] // stage 00:27:33.763 [Pipeline] } 00:27:33.775 [Pipeline] // catchError 00:27:33.782 [Pipeline] stage 00:27:33.784 [Pipeline] { (Stop VM) 00:27:33.795 [Pipeline] sh 00:27:34.077 + vagrant halt 00:27:36.628 ==> default: Halting domain... 00:27:43.237 [Pipeline] sh 00:27:43.522 + vagrant destroy -f 00:27:46.070 ==> default: Removing domain... 
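The stop_monitor_resources teardown traced above uses the classic pid-file handshake: each resource collector records its pid when started, and at exit the script iterates the MONITOR_RESOURCES list, checks that the pid file still exists, and sends SIGTERM. A minimal generic sketch of that pattern, assuming a hypothetical POWER_DIR and a glob standing in for the fixed monitor list used by the real pm/common helpers:

    #!/usr/bin/env bash
    # Sketch of the pid-file based monitor lifecycle traced in the log above.
    POWER_DIR=/path/to/output/power   # assumption: where pid files and .pm.log files live

    start_monitor() {
        local name=$1; shift
        "$@" &                               # launch the collector in the background
        echo $! > "$POWER_DIR/$name.pid"     # record its pid for later teardown
    }

    stop_monitors() {
        local pidfile pid
        for pidfile in "$POWER_DIR"/*.pid; do
            [[ -e $pidfile ]] || continue    # monitor never started, or already cleaned up
            pid=$(<"$pidfile")
            kill -TERM "$pid" 2>/dev/null    # ask the collector to exit cleanly
            rm -f "$pidfile"
        done
    }

Sending TERM rather than KILL gives the collectors a chance to flush their monitor logs before exiting, which is why the trace checks for each pid file and signals per monitor instead of killing a process group.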
00:27:46.652 [Pipeline] sh 00:27:46.933 + mv output /var/jenkins/workspace/nvme-vg-autotest/output 00:27:46.941 [Pipeline] } 00:27:46.950 [Pipeline] // stage 00:27:46.954 [Pipeline] } 00:27:46.962 [Pipeline] // dir 00:27:46.965 [Pipeline] } 00:27:46.974 [Pipeline] // wrap 00:27:46.978 [Pipeline] } 00:27:46.985 [Pipeline] // catchError 00:27:46.991 [Pipeline] stage 00:27:46.992 [Pipeline] { (Epilogue) 00:27:47.002 [Pipeline] sh 00:27:47.283 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:27:52.565 [Pipeline] catchError 00:27:52.567 [Pipeline] { 00:27:52.582 [Pipeline] sh 00:27:52.868 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:27:52.869 Artifacts sizes are good 00:27:52.879 [Pipeline] } 00:27:52.895 [Pipeline] // catchError 00:27:52.908 [Pipeline] archiveArtifacts 00:27:52.917 Archiving artifacts 00:27:53.081 [Pipeline] cleanWs 00:27:53.091 [WS-CLEANUP] Deleting project workspace... 00:27:53.091 [WS-CLEANUP] Deferred wipeout is used... 00:27:53.100 [WS-CLEANUP] done 00:27:53.102 [Pipeline] } 00:27:53.114 [Pipeline] // stage 00:27:53.118 [Pipeline] } 00:27:53.129 [Pipeline] // node 00:27:53.132 [Pipeline] End of Pipeline 00:27:53.170 Finished: SUCCESS